NFV network function virtualization security considerations

I have been asked to write down a few things about NFV and security. NFV is a relatively new thing in the IT world: it first made the news in 2012, and since then it has followed the development path common to virtualization technologies.

Virtualization has improved dramatically over the last years. It all started with simple virtualization platforms, VMware first of all but not only, and the idea was to abstract the hardware platform from the software running on it.

As the idea developed, the abstraction grew to cover multiple hardware platforms, then moved to multisite deployments across the WAN and different geographies. Nowadays we call this sort of implementation cloud, but the whole cloud story started from the old virtualization idea.

While this platform change was taking place, the world of services was experimenting with different client-server options (web services and so on).

With the new platforms in place, it was clear the network would follow the same trend, moving to software and virtual shores.

From the network point of view, the first step was SDN (Software Defined Networking).

Software defined networks (SDN) allow dynamic changes of network configuration that can alter network function characteristics and behaviors. For example, SDN can render real-time topological changes of a network path. An SDN-enabled network provides a platform on which to implement a dynamic chain of virtualized network services that make up an end-to-end network service.

SDN basically allows administrators to centrally manage and configure network services, creating policies that can address different needs and adapt to a changing environment.

But this level of abstraction was not enough to provide the flexibility required by modern datacenter, cloud and virtualized environments.

In an SDN environment the network gear remains mostly real, solid boxes sitting inside an environment that is far more virtualized.

The first attempt to hybridize the physical network with the virtual one was the introduction of the first virtual network elements, such as switches and firewalls. Those components were sometimes part of the hypervisor of the virtualization platform, sometimes virtual appliances able to run inside a virtual environment.

Those solutions were (and are, since they still exist) good at targeting specific needs, but they did not provide the flexibility, resilience and scalability that modern virtualization systems require. Products like VMware's vShield, Cisco's ASA 1000v and F5 Networks' vCMP brought improvements in management and licensing better suited to service provider needs. Each used a different architecture to accomplish those goals, making a blend of approaches difficult, and the lack of a comprehensive approach made it hard to expand those services extensively.

The natural next step in the virtualization process was to define something that would address, in a more comprehensive way, the need to move part of the network functions inside the virtual environment.

Communications service providers and network operators came together through ETSI to try to address the management issues around virtual appliances that handle network functions.

NFV represents a decoupling of the software implementation of network functions from the underlying hardware by leveraging virtualization techniques. NFV offers a variety of network functions and elements, including routing, content delivery networks, network address translation, virtual private networks (VPNs), load balancing, intrusion detection and prevention systems (IDPS), and firewalls. Multiple network functions can be consolidated into the same hardware or server. NFV allows network operators and users to provision and execute on-demand network functions on commodity hardware or CSP platforms.

NFV does not depend on SDN (and vice-versa) and can be implemented without it. However, SDN can improve performance and enable a rich feature set known as Dynamic Virtual Network Function Service Chaining (or VNF Service Chaining). This capability simplifies and accelerates deployment of NFV-based network functions.
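
To make the service-chaining idea concrete, here is a minimal, purely illustrative Python sketch: traffic traverses an ordered list of virtual network functions, and an orchestrator can reorder or extend that list at runtime without touching hardware. The function names and packet structure are invented for the example, not part of any NFV standard.

    from typing import Callable, Dict, List

    Packet = Dict[str, object]  # stand-in for a real packet structure

    def firewall(pkt: Packet) -> Packet:
        if pkt.get("dport") == 23:  # example policy: drop telnet
            raise ValueError("blocked by firewall")
        return pkt

    def nat(pkt: Packet) -> Packet:
        pkt["src"] = "203.0.113.1"  # rewrite the source address
        return pkt

    def ids(pkt: Packet) -> Packet:
        return pkt  # inspection would happen here; pass-through in this sketch

    def run_chain(chain: List[Callable[[Packet], Packet]], pkt: Packet) -> Packet:
        for vnf in chain:  # each VNF is just one step in the chain
            pkt = vnf(pkt)
        return pkt

    chain = [firewall, nat, ids]  # the orchestrator can recompose this list
    print(run_chain(chain, {"src": "10.0.0.5", "dport": 443}))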

Based on the framework introduced by the European Telecommunications Standards Institute (ETSI), NFV is built on three main domains:

  • VNF,
  • NFV infrastructure, and
  • NFV management and orchestration (MANO).

A VNF can be considered a container of network services provisioned in software, very similar to a VM operational model. The infrastructure part of NFV includes all the physical resources (e.g., CPU, memory, and I/O) required for storage, computing and networking to support the execution of VNFs. The management of all virtualization-specific tasks in the NFV framework is performed by the NFV management and orchestration domain. For instance, this domain orchestrates and manages the lifecycle of resources and VNFs, and also controls the automatic remote installation of VNFs.

The resulting environment is now a little more complicated than it was a few years ago.

Where in the past we used to have

  • physical servers running operating systems such as Linux, Unix or Windows, bound to a specific hardware platform, with almost monolithic services running on top of them,
  • physical storage units running on different technologies and networks (Ethernet, iSCSI, fiber optic and so on),
  • networks connected through physical devices, with some specific units providing external access (VPN servers),
  • protected by some sort of security unit providing some sort of control (firewall, IPS/IDS, 802.1x, AAA and so on),
  • all managed quite independently through different interfaces or programs;

now we moved to a world where we have

  • a virtualized environment where services (think, for example, of Docker implementations) or entire operating systems run on virtual machines (VMs) that manage the abstraction from the hardware and can allocate resources dynamically, in terms of performance and even geographic location,
  • a network environment whose services are partially virtualized (as in VNF implementations) and partially physical, interacting with the virtual environment dynamically,
  • a network configured dynamically through control software (SDN), which can easily modify the network topology itself in order to respond to changing requests coming from the environment (users, services, processes).

Nowadays, the impressive effects of network functions virtualization (NFV) are evident in a wide range of applications, from IP node implementations (e.g., future Internet architecture) to mobile core networks. NFV allows network functions (e.g., packet forwarding and dropping) to be performed in virtual machines (VMs) in a cloud infrastructure rather than in dedicated devices. An agile and automated NFV network is desirable for network operators because new services can be developed easily, with self-management capabilities and network programmability via software-defined networking (SDN). Furthermore, co-existence with current networks and services improves customer experience and reduces complexity, capital expenditure (CAPEX) and operational expenditure (OPEX).

In theory, virtualization broadly describes the separation of resources or requests for a service from the underlying physical delivery of that service. In this view, NFV involves the implementation of network functions in software that can run on a range of hardware, which can be moved without the need for installation of new equipment. Therefore, all low-level physical network details are hidden and the users are provided with the dynamic configuration of network tasks.

Everything seems better and easier, but all those transformations do not come without a price in terms of security.

Every step into virtualization brings security concerns related to the control plane (think of hypervisor and orchestrator security), the communication plane, the virtual environment itself (which often inherits the same problems as the physical platform), and the transition interface between the physical and virtual worlds.

Despite its many advantages, NFV therefore introduces new security challenges. Since all software-based virtual functions in NFV can be configured or controlled by an external entity (e.g., a third-party provider or user), the whole network could potentially be compromised or destroyed. For example, to reduce hosts' heavy workloads, a hypervisor in NFV can dynamically load-balance the workloads assigned to multiple VMs through a flexible and programmable networking layer known as a virtual switch; however, if the hypervisor is compromised, all network functions can be disabled completely (a good old DDoS) or priority can be given to some services instead of others.

Also, NFV's attack surface is considerably larger than that of traditional network systems. Besides the network resources (e.g., routers, switches, etc.) of traditional networks, the virtualization environment, live migration, and the multi-tenant common infrastructure can also be attacked in NFV. For example, an attacker can snare a dedicated virtualized network function (VNF) and then spread its bots across a victim's whole network using the migration and multicast abilities of NFV. To make matters worse, access to a common infrastructure for a multi-tenant network based on NFV inherently allows for other security risks due to the resources shared between VMs. For example, in a data center network (DCN), side-channel attacks (e.g., cache-based side channels) and/or operational interference can be introduced unless the shared resources between VMs are securely controlled with proper security policies. In practice, it is not easy to provide complete isolation of VNFs in DCNs.

The challenges of securing a VNF are complex because they touch all the elements that compose the environment: physical, virtual and control.

According to the CSA, securing this environment is challenging for at least the following reasons:

  1. Hypervisor dependencies: Today, only a few hypervisor vendors dominate the marketplace, with many vendors hoping to become market players. Like their operating system vendor counterparts, these vendors must address security vulnerabilities in their code. Diligent patching is critical. These vendors must also understand the underlying architecture, e.g., how packets flow within the network fabric, various types of encryption and so forth.
  2. Elastic network boundaries: In NFV, the network fabric accommodates multiple functions. Placement of physical controls is limited by location and cable length. These boundaries are blurred or non-existent in NFV architecture, which complicates security matters. VLANs are not traditionally considered secure, so physical segregation may still be required for some purposes.
  3. Dynamic workloads: NFV’s appeal is in its agility and dynamic capabilities. Traditional security models are static and unable to evolve as network topology changes in response to demand. Inserting security services into NFV often involves relying on an overlay model that does not easily coexist across vendor boundaries.
  4. Service insertion: NFV promises elastic, transparent networks since the fabric intelligently routes packets that meet configurable criteria. Traditional security controls are deployed logically and physically inline. With NFV, there is often no simple insertion point for security services that are not already layered into the hypervisor.
  5. Stateful versus stateless inspection: Today's networks require redundancy at the system level and along network paths. This path redundancy causes asymmetric flows that pose challenges for stateful devices, which need to see every packet in order to provide access controls. Security operations during the last decade have been based on the premise that stateful inspection is more advanced and superior to stateless access controls. NFV may add complexity where security controls cannot deal with the asymmetries created by multiple, redundant network paths and devices.
  6. Scalability of available resources: As earlier noted, NFV’s appeal lies in its ability to do more with less data center rack space, power, and cooling.

Dedicating cores to workloads and network resources enables resource consolidation. Deeper inspection technologies—next-generation firewalls and Transport Layer Security (TLS) decryption, for example—are resource intensive and do not always scale without offload capability. Security controls must be pervasive to be effective, and they often require significant compute resources.

Together, SDN and NFV create additional complexity and challenges for security controls. It is not uncommon to couple an SDN model with some method of centralized control to deploy network services in the virtual layer. This approach leverages both SDN and NFV as part of the current trend toward data center consolidation.

The NFV Security Framework tries to address those problems.

If we want to dig a little deeper into the security part, we can analyze:

  • Network function-specific security issues

and

  • Generic virtualization-related security issues

Network function-specific threats refer to attacks on network functions and/or resources (e.g., spoofing, sniffing and denial of service).

The foundation of NFV is network virtualization. In an NFV environment, a single physical infrastructure is logically shared by multiple VNFs. Providing a shared, hosted network infrastructure for these VNFs introduces new security vulnerabilities. The general platform of network virtualization consists of three entities: the providers of the network infrastructure, the VNF providers, and the users. Since the system involves different operators, their cooperation cannot be perfect, and each entity may behave in a non-cooperative or greedy way to gain benefits.

The virtualization threats of NFV can originate from any of these entities and may target the whole system or part of it.

In this view, we need to consider the threats, such as side-channel or flooding attacks as common attacks, and hypervisor, malware injection or VM migration related attacks as the virtualization and cloud specific attacks.

Basically, VNF adds a new layer of security concerns to virtualized/cloud platforms for at least three reasons:

  • It inherits all the classic network security issues and expands them to cloud level.

This means that once a VNF is compromised there is a good chance it can spread the attack or problem to the whole environment, affecting not only the resources directly assigned to it but anything connected to the virtual environment. Think, for example, of the damage a DDoS could cause by rapidly depleting all the cloud network resources simply by modifying QoS parameters, without even resorting to traditional flooding techniques (which remain available anyway).

  • It depends on several layers of abstraction and control.

Orchestrator and hypervisor are, as a matter of fact, great attack points, since once compromised they can expose or disable every function they control.

  • It requires a more carefully planned implementation than the classic physical one.

This means tighter control over who manages the management interfaces since, in common with SDN, VNF is more exposed to unauthorized access and configuration-related issues.

VNF still requires study and analysis from a security perspective; the good news is that this is a new technology under development, so there is plenty of room for improvement.



Firewall: Traditional, UTM and NGFW. Understanding the difference

One of the problems nowadays when we talk about firewalls is understanding what a firewall actually is and what the acronyms used to define the different types of firewalls mean.
The common definition today recognizes three main types of firewalls:

• Firewalls
• UTM
• NGFW

But what are the differences (if any) between those things?
Let's start with the very basics: what a firewall is.

[Figure: diagram of a firewall between a LAN and a WAN (photo credit: Wikipedia)]

Firewall:

A firewall is a system used to maintain the security of a private network. Firewalls block unauthorized access to or from private networks and are often employed to prevent unauthorized web users or illicit software from gaining access to private networks connected to the Internet. A firewall may be implemented using hardware, software, or a combination of both.
A firewall is recognized as the first line of defense in securing sensitive information. For better safety, the data can also be encrypted.
Firewalls generally use two or more of the following methods (a minimal sketch of the first follows the list):

• Packet Filtering: Firewalls filter packets that attempt to enter or leave a network and either accept or reject them depending on a predefined set of filter rules.

• Application Gateway: The application gateway technique employs security methods applied to certain applications such as Telnet and File Transfer Protocol servers.

• Circuit-Level Gateway: A circuit-level gateway applies these methods when a connection such as Transmission Control Protocol is established and packets start to move.

• Proxy Servers: Proxy servers can mask real network addresses and intercept every message that enters or leaves a network.

• Stateful Inspection or Dynamic Packet Filtering: This method compares not just the header information, but also a packet's most important inbound and outbound data parts. These are then compared to a trusted information database for characteristic matches. This determines whether the information is authorized to cross the firewall into the network.
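
As a minimal, purely illustrative sketch of the first method, stateless packet filtering, here it is in Python; the rule fields and helper names are invented for the example, not a real firewall API.

    import ipaddress

    # Ordered rule list: first match wins; anything unmatched is rejected.
    RULES = [
        {"src": "10.0.0.0/8", "proto": "tcp", "dport": 22, "action": "accept"},
        {"src": "0.0.0.0/0", "proto": "tcp", "dport": 443, "action": "accept"},
    ]
    DEFAULT_ACTION = "reject"

    def filter_packet(src: str, proto: str, dport: int) -> str:
        for rule in RULES:
            if (ipaddress.ip_address(src) in ipaddress.ip_network(rule["src"])
                    and proto == rule["proto"] and dport == rule["dport"]):
                return rule["action"]
        return DEFAULT_ACTION

    print(filter_packet("10.1.2.3", "tcp", 22))      # accept
    print(filter_packet("198.51.100.7", "tcp", 23))  # reject

Note that each decision here looks at a single packet in isolation; stateful inspection differs precisely in keeping a memory of the connections it has already seen.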

The limit of the firewall itself is that it works only at the protocol level (IP/TCP/UDP), with no knowledge of the higher-level risks that can cross the network.

From antivirus to content filtering, there are hundreds of different technologies that can complement the firewall's work in order to protect our resources.

To address this more complex security environment, the firewall evolved into something new that covers different aspects beyond simple protocol inspection. These devices use different technologies to address different aspects of security in one single box: the so-called UTM (Unified Threat Management).

Unified Threat Management (UTM)

Unified threat management (UTM) refers to a specific kind of IT product that combines several key elements of network security to offer a comprehensive security package to buyers.

A unified threat management solution involves combining the utility of a firewall with other guards against unauthorized network traffic along with various filters and network maintenance tools, such as anti-virus programs.

The emergence of unified threat management is a relatively new phenomenon, because the various aspects that make up these products used to be sold separately. However, by selecting a UTM solution, businesses and organizations can deal with just one vendor, which may be more efficient. Unified threat management solutions may also promote easier installation and updates for security systems, although others contend that a single point of access and security can be a liability in some cases.

UTMs are gaining momentum but still lack an understanding of context and users, and are therefore not the best fit for the new environments. To bridge that gap, security research moved up the stack, from protocols to applications, where user behavior and context are key.

This led from the UTM to the so-called Next Generation Firewall, or NGFW.

Next-generation firewall (NGFW)

A next-generation firewall (NGFW) is a hardware- or software-based network security system that is able to detect and block sophisticated attacks by enforcing security policies at the application level, as well as at the port and protocol level.
Next-generation firewalls integrate three key assets: enterprise firewall capabilities, an intrusion prevention system (IPS) and application control. Like the introduction of stateful inspection in first-generation firewalls, NGFWs bring additional context to the firewall's decision-making process by providing it with the ability to understand the details of the web application traffic passing through it, and by taking action to block traffic that might exploit vulnerabilities.

Next-generation firewalls combine the capabilities of traditional firewalls — including packet filtering, network address translation (NAT), URL blocking and virtual private networks (VPNs) — with Quality of Service (QoS) functionality and features not traditionally found in firewall products.

These include intrusion prevention, SSL and SSH inspection, deep-packet inspection and reputation-based malware detection as well as application awareness. The application-specific capabilities are meant to thwart the growing number of application attacks taking place on layers 4-7 of the OSI network stack.

The simple definition of application control is the ability to detect an application based on the application’s content vs. the traditional layer 4 protocol. Since many application providers are moving to a Web-based delivery model, the ability to detect an application based on the content is important while working only at protocol level is almost worthless.

Yet in the market it is still not easy to understand what a UTM is and what an NGFW is.

UTM vs NGFW

Next-Generation Firewalls were defined by Gartner as firewalls with Application Control, User Awareness and Intrusion Detection. So basically an NGFW is a firewall whose rules are no longer based on IP/port, but on User, Application and other parameters.
The difference is, basically, the shift from the old TCP/IP protocol model to a new User/Application/Context one.
On the other hand, UTMs are a mix of technologies that address different security aspects, from antivirus to content filtering, from web security to email security, all on top of a firewall. Some of those technologies can be configured to recognize users, but they seldom deal with applications.
The problem in the market is that nowadays the traditional firewall does not exist anymore, even in the personal/home/SOHO space: most of them are UTM-based.
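
As a purely illustrative sketch of that shift in the rule model, compare a classic 5-tuple rule with an identity- and application-aware one; the field names below are invented for the example, not any vendor's syntax.

    # Traditional firewall rule: match on the network 5-tuple only.
    legacy_rule = {
        "src": "10.0.0.0/24", "dst": "203.0.113.10",
        "proto": "tcp", "dport": 443, "action": "allow",
    }

    # NGFW-style rule: match on user identity, application and context.
    ngfw_rule = {
        "user_group": "finance", "application": "salesforce",
        "time": "business-hours", "action": "allow",
    }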

NGUTM

Most firewall vendors moved from old firewalls to either UTM or NGFW offerings; in most cases NGFWs also offer UTM functions, while most UTMs have added NGFW application control functions, creating de facto a new generation of products and changing the landscape with the introduction of the Next Generation UTM.

UTM vendors and NGFW vendors keep fighting over which is the best solution for modern environments, but this is a marketing fight more than a technically sound discussion.

The real thing is that UTM and NGFW are becoming more and more the same thing.

NOTE: it's all about rules.

Why did security devices become so comprehensive and try to unify so many services? Management is the last piece of the puzzle. In two separate studies, one by Gartner and one by Verizon Data's Risk Analysis team, it was shown that an overwhelmingly large percentage of security breaches were caused by simple configuration errors. Gartner says "More than 95% of firewall breaches are caused by firewall misconfigurations, not firewall flaws." Verizon's estimate is even higher, at 96%. Both agree that the vast majority of customers' security problems are caused by implementing security products that are too difficult to use. The answer? Put it all in one place and make it easy to manage. The best security in the world is USELESS unless you can manage it effectively.



Pretty Good Privacy (PGP)

Pretty Good Privacy, or PGP, is a popular program used to encrypt and decrypt email over the Internet, as well as to authenticate messages with digital signatures and to encrypt stored files.
Previously available as freeware and now only available as a low-cost commercial version, PGP was once the most widely used privacy-ensuring program by individuals and is also used by many corporations. It was developed by Philip R. Zimmermann in 1991 and has become a de facto standard for email security.

How PGP works

Pretty Good Privacy uses a variation of the public key system. In this system, each user has an encryption key that is publicly known and a private key that is known only to that user. You encrypt a message you send to someone else using their public key. When they receive it, they decrypt it using their private key. Since encrypting an entire message can be time-consuming, PGP uses a faster encryption algorithm to encrypt the message and then uses the public key to encrypt the shorter key that was used to encrypt the entire message. Both the encrypted message and the short key are sent to the receiver who first uses the receiver’s private key to decrypt the short key and then uses that key to decrypt the message.
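
As a rough illustration of that hybrid scheme, here is a minimal sketch using Python's third-party cryptography package, with RSA wrapping a random AES session key. This is an approximation of the idea, not PGP itself: real PGP adds its own packet formats, compression and key management.

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # Recipient's key pair: the public key is shared, the private key is secret.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    message = b"the actual email body"

    # 1. Encrypt the bulk message with a fast symmetric cipher and a random key.
    session_key = AESGCM.generate_key(bit_length=128)
    nonce = os.urandom(12)
    ciphertext = AESGCM(session_key).encrypt(nonce, message, None)

    # 2. Encrypt only the short session key with the recipient's public key.
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    wrapped_key = public_key.encrypt(session_key, oaep)

    # Receiver: unwrap the session key, then decrypt the message with it.
    recovered_key = private_key.decrypt(wrapped_key, oaep)
    assert AESGCM(recovered_key).decrypt(nonce, ciphertext, None) == message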

PGP comes in two public key versions — Rivest-Shamir-Adleman (RSA) and Diffie-Hellman. The RSA version, for which PGP must pay a license fee to RSA, uses the IDEA algorithm to generate a short key for the entire message and RSA to encrypt the short key. The Diffie-Hellman version uses the CAST algorithm for the short key to encrypt the message and the Diffie-Hellman algorithm to encrypt the short key.
When sending digital signatures, PGP uses an efficient algorithm that generates a hash (a mathematical summary) from the user’s name and other signature information. This hash code is then encrypted with the sender’s private key. The receiver uses the sender’s public key to decrypt the hash code. If it matches the hash code sent as the digital signature for the message, the receiver is sure that the message has arrived securely from the stated sender. PGP’s RSA version uses the MD5 algorithm to generate the hash code. PGP’s Diffie-Hellman version uses the SHA-1 algorithm to generate the hash code.
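
The sign/verify flow just described can be sketched the same way; again this is an illustrative approximation (RSA-PSS with SHA-256 from the same cryptography package), not the exact algorithms or formats PGP uses.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    document = b"message to be signed"
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)

    # The library hashes the document and signs the digest with the
    # sender's private key, producing the signature.
    signature = private_key.sign(document, pss, hashes.SHA256())

    # The receiver checks it with the sender's public key; verify()
    # raises InvalidSignature if the message or signature was altered.
    public_key.verify(signature, document, pss, hashes.SHA256())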

Getting PGP

To use Pretty Good Privacy, download or purchase it and install it on your computer system. It typically contains a user interface that works with your customary email program. You may also need to register the public key that your PGP program gives you with a PGP public-key server so that people you exchange messages with will be able to find your public key.

PGP freeware is available for older versions of Windows, Mac, DOS, Unix and other operating systems. In 2010, Symantec Corp. acquired PGP Corp., which held the rights to the PGP code, and soon stopped offering a freeware version of the technology. The vendor currently offers PGP technology in a variety of its encryption products, such as Symantec Encryption Desktop, Symantec Desktop Email Encryption and Symantec Encryption Desktop Storage. Symantec also makes the Symantec Encryption Desktop source code available for peer review.
Though Symantec ended PGP freeware, there are other non-proprietary versions of the technology available. OpenPGP is an open standard version of PGP that is supported by the Internet Engineering Task Force (IETF). OpenPGP is used by several software vendors, including Coviant Software, which offers a free tool for OpenPGP encryption, and HushMail, which offers a web-based encrypted email service powered by OpenPGP. In addition, the Free Software Foundation developed GNU Privacy Guard (GPG), an OpenPGP-compliant encryption software.

Where can you use PGP?

Pretty Good Privacy can be used to authenticate digital certificates and encrypt/decrypt texts, emails, files, directories and whole disk partitions. Symantec, for example, offers PGP-based products such as Symantec File Share Encryption for encrypting files shared across a network and Symantec Endpoint Encryption for full disk encryption on desktops, mobile devices and removable storage. In the case of using PGP technology for files and drives instead of messages, the Symantec products allow users to decrypt and re-encrypt data via a single sign-on.
Originally, the U.S. government restricted the exportation of PGP technology and even launched a criminal investigation against Zimmermann for putting the technology in the public domain (the investigation was later dropped). Network Associates Inc. (NAI) acquired Zimmermann’s company, PGP Inc., in 1997 and was able to legally publish the source code (NAI later sold the PGP assets and IP to ex-PGP developers that joined together to form PGP Corp. in 2002, which was acquired by Symantec in 2010).
Today, PGP-encrypted email can be exchanged with users outside the U.S. if you have the correct versions of PGP at both ends.
There are several versions of PGP in use. Add-ons can be purchased that allow backwards compatibility of newer RSA versions with older versions. However, the Diffie-Hellman and RSA versions of PGP do not work with each other, since they use different algorithms. A number of technology companies have also released tools or services supporting PGP. Google this year introduced an OpenPGP email encryption plug-in for Chrome, while Yahoo also began offering PGP encryption for its email service.

What is an asymmetric algorithm?

Asymmetric algorithms (public key algorithms) use different keys for encryption and decryption, and the decryption key cannot (practically) be derived from the encryption key. Asymmetric algorithms are important because they can be used for transmitting encryption keys or other data securely even when the parties have no opportunity to agree on a secret key in private.
Types of asymmetric algorithms (public key algorithms):
• RSA
• Diffie-Hellman
• Digital Signature Algorithm (DSA)
• ElGamal
• ECDSA
• XTR

Asymmetric algorithms examples:

RSA Asymmetric algorithm
Rivest-Shamir-Adleman is the most commonly used asymmetric algorithm (public key algorithm). It can be used both for encryption and for digital signatures. The security of RSA is generally considered equivalent to factoring, although this has not been proved.
RSA computation occurs with integers modulo n = p*q, for two large secret primes p, q. To encrypt a message m, it is exponentiated with a small public exponent e, giving the ciphertext c = m^e (mod n). For decryption, the recipient of the ciphertext computes the multiplicative inverse d = e^(-1) (mod (p-1)*(q-1)) (we require that e is selected suitably for it to exist) and obtains c^d = m^(e*d) = m (mod n). The private key consists of n, p, q, e, d (where p and q can be omitted); the public key contains only n and e. The problem for the attacker is that computing the inverse d from e is assumed to be no easier than factoring n.
The key size should be greater than 1024 bits for a reasonable level of security; keys of, say, 2048 bits should provide security for decades.
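
A toy worked example of the arithmetic above, in Python, with deliberately tiny primes (illustration only; as noted, real keys use 1024 bits at the very least):

    p, q = 61, 53             # the two secret primes (toy size)
    n = p * q                 # 3233, the public modulus
    phi = (p - 1) * (q - 1)   # 3120
    e = 17                    # public exponent, coprime to phi
    d = pow(e, -1, phi)       # 2753, the inverse e^-1 mod phi (Python 3.8+)
    m = 65                    # the message, encoded as an integer < n
    c = pow(m, e, n)          # encryption: c = m^e mod n, here 2790
    assert pow(c, d, n) == m  # decryption: c^d = m^(e*d) = m (mod n)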

Diffie-Hellman
Diffie-Hellman, invented in 1976, is the first published public-key algorithm, using discrete logarithms in a finite field. It allows two users to exchange a secret key over an insecure medium without any prior secrets.

Diffie-Hellman (DH) is a widely used key exchange algorithm. In many cryptographical protocols, two parties wish to begin communicating. However, let’s assume they do not initially possess any common secret and thus cannot use secret key cryptosystems. The key exchange by Diffie-Hellman protocol remedies this situation by allowing the construction of a common secret key over an insecure communication channel. It is based on a problem related to discrete logarithms, namely the Diffie-Hellman problem. This problem is considered hard, and it is in some instances as hard as the discrete logarithm problem.
The Diffie-Hellman protocol is generally considered to be secure when an appropriate mathematical group is used. In particular, the generator element used in the exponentiations should have a large period (i.e. order). Usually, Diffie-Hellman is not implemented on hardware.
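
Here is a toy sketch of the exchange over an unrealistically small group (real deployments use primes of 2048+ bits, or elliptic-curve groups):

    import secrets

    p, g = 23, 5                      # public group parameters (toy size)
    a = secrets.randbelow(p - 2) + 1  # Alice's secret exponent
    b = secrets.randbelow(p - 2) + 1  # Bob's secret exponent
    A = pow(g, a, p)                  # Alice sends A over the insecure channel
    B = pow(g, b, p)                  # Bob sends B back
    # Both sides derive the same shared secret, which was never transmitted.
    assert pow(B, a, p) == pow(A, b, p)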

Digital Signature Algorithm
The Digital Signature Algorithm (DSA) is a United States Federal Government standard (FIPS) for digital signatures. It was proposed by the National Institute of Standards and Technology (NIST) in August 1991 for use in their Digital Signature Standard (DSS), specified in FIPS 186 [1] and adopted in 1993. A minor revision was issued in 1996 as FIPS 186-1 [2], and the standard was expanded further in 2000 as FIPS 186-2 [3]. DSA is similar to the ElGamal signature algorithm. It is fairly efficient, though not as efficient as RSA for signature verification. The standard defines DSS to use the SHA-1 hash function exclusively to compute message digests.
The main problem with DSA is the fixed subgroup size (the order of the generator element), which limits the security to around only 80 bits. Hardware attacks can be menacing to some implementations of DSS. However, it is widely used and accepted as a good algorithm.

ElGamal
ElGamal is a public-key cipher: an asymmetric-key encryption algorithm for public-key cryptography based on the Diffie-Hellman key agreement. ElGamal is the predecessor of DSA.

ECDSA
Elliptic Curve DSA (ECDSA) is a variant of the Digital Signature Algorithm (DSA) which operates on elliptic curve groups. As with Elliptic Curve Cryptography in general, the bit size of the public key believed to be needed for ECDSA is about twice the size of the security level, in bits.

XTR
XTR is an algorithm for asymmetric encryption (public-key encryption). XTR is a novel method that makes use of traces to represent and calculate powers of elements of a subgroup of a finite field. It is based on the primitive underlying the very first public key cryptosystem, the Diffie-Hellman key agreement protocol.
From a security point of view, XTR security relies on the difficulty of solving discrete logarithm related problems in the multiplicative group of a finite field. Some advantages of XTR are its fast key generation (much faster than RSA), small key sizes (much smaller than RSA, comparable with ECC for current security settings), and speed (overall comparable with ECC for current security settings).
Symmetric and asymmetric algorithms
Symmetric algorithms encrypt and decrypt with the same key. The main advantages of symmetric algorithms are their security and high speed. Asymmetric algorithms encrypt and decrypt with different keys: data is encrypted with a public key and decrypted with a private key. Asymmetric algorithms (also known as public-key algorithms) need at least a 3,000-bit key to achieve the same level of security as a 128-bit symmetric algorithm. Asymmetric algorithms are incredibly slow, and it is impractical to use them to encrypt large amounts of data. Generally, symmetric algorithms are much faster to execute on a computer than asymmetric ones. In practice they are often used together, so that a public-key algorithm is used to encrypt a randomly generated encryption key, and the random key is used to encrypt the actual message using a symmetric algorithm. This is sometimes called hybrid encryption (the PGP sketch earlier shows exactly this pattern).



Dataprivacyasia: Antonio Ieranò at Asia's premier data protection, privacy and cybersecurity conference. Watch videos

Missed @AntonioIerano at Asia‘s premier #dataprotection, #privacy and #cybersecurity conference? Watch videos

— Data Privacy Asia (@dataprivacyasia) December 10, 2016
from http://twitter.com/dataprivacyasia




Unhappy employees are a cyber security concern


Have you ever considered the fact that the “best place to work” is something a security chap should take into serious consideration?

A lot of people keep thinking that security is all about one technology or another; most of those experts perfectly master one specific technology and think they hold the holy grail of security.

Since I am not such a big tech expert, I am allowed to think that security isn't in any specific technology, but in a systemic approach where technology covers just one part and is just a piece of a whole process.

One of the aspects so often forgotten when we talk about security is that most incidents in the security realm come from mistakes, honest mistakes.

A mistake can be due to several reasons:

  • unclear instructions (alas, we are still far from the KISS – Keep It Simple, Stupid – principle, aren't we?)
  • an unclear process (I have to do what?)
  • lack of knowledge
  • lack of attention (I have too much to do…)
  • lack of commitment (why should I care?)

Most often, a combination of all those points.

Uselessly complex processes, esoteric instructions and "believers-only" language are a normal part of security implementations.

Another big part is played by lack of understanding: knowledge is not just about the internal processes in place, but should extend to the basic security elements that too many people in the corporate environment (even at the highest levels) simply do not understand.

Concepts like social engineering, vulnerability and privilege escalation are just tapestry in the CEO's office, not really understood.

Given this underestimation of the basics of security, it is no surprise how little attention is paid to the difference between a satisfied employee and a pissed-off one.

Why an unhappy employee is a cyber security risk is strictly tied to the level of attention and commitment that a company's cyber security needs require. If you are unhappy, you will be less inclined to listen and understand, and if you add to this attitude the ridiculously complicated rules that companies sometimes put in place, the result is devastating.

I am not talking about the unhappy employee who willingly wants to damage the company, but about all those who do not care enough to take a proactive approach to security.

Security is, at its very base, all about attitude and behaviour. We can cover and patch elements through technology and processes, but the user will remain the key point of any security implementation.

It is no accident that social engineering, phishing and other techniques target users to breach a company.

Lack of knowledge (and therefore lack of training) and unhappiness are the perfect mix to lower an employee's attention level and hand the keys to an attacker, even if this is not the employee's intention.

Let us be clear here: there is no security technology at the moment that can guarantee 100% security, and there is not even a process that can guarantee that kind of security. We are still in the Neanderthal phase of cyber security, but now is the time to realize that without a holistic approach that takes into account all the components, people among them, we will lose the battle.

So CSOs, CISOs and all the security-concerned folks should become advocates of employee happiness and employee knowledge, for their own good.


Cryptography, keeping on the big lie

So Cryptography would be a National Security Issue?

I'm tired of being polite and politically correct when talking about encryption. Let us be clear and honest: all this crypto-war stuff is a pile of crap. Every time I hear someone claiming that we should not enforce strong cryptography, I wonder: do they have the slightest idea what they are talking about? Probably not, considering most of the objections against cryptography I have heard.

Listening to those "enlightened" minds, it seems that without cryptography the world would be a sort of heaven where intelligence agencies could solve any criminal case, and that cryptography is used only by those who want to act against the law and public safety. Well, maybe it would be worth it for them, and for us all, to do a reality check.

Encryption and weapons

Encryption has always been associated with military technology. The Wassenaar Arrangement (http://www.wassenaar.org/) states what should and should not be considered a "sensitive" or military technology, and encryption is in that agreement.

So for someone encryption is a weapon.

Encryption has always been used in war contexts, as well as around politically sensitive issues. Modern math aside, the tools and techniques used to hide a message or make it unintelligible are as old as war, and therefore as old as humanity.

It seems that the more advanced the technology, the stronger the urge to consider it a weapon. It is a long story: from traces in Egypt's Old Kingdom (1900 BC) to the Caesar cipher, history is full of more or less successful attempts at ciphers and cryptography (successful if nobody decrypts the message, of course).

But let us be clear: modern encryption, from Turing to Diffie-Hellman-Merkle, is basically math, and math is math. I am sorry, but considering math a military weapon is like considering a hammer a weapon. Can a hammer be used to kill someone? Yes, and directly. Can math be used for the same purpose? Wait, no… unless the math book is really heavy.

Alas, nowadays the math can be implemented in technology, and therefore it is accessible even to those who do not have a cryptography degree. But technically speaking, since math is math, anyone could develop a mathematical model to implement cryptography; would this make him or her a weapon maker? Actually, for some people, yes (see the whole PGP affair).

Apparently the issue here is the democratization of encryption as something everyone can have access to (bad and good guys alike), more or less like knives and hammers and (in some countries) weapons.

Modern technology allows us to implement strong encryption environments, but at the same time it raises the level of "unwanted" decryption capabilities: the faster our computers get, the deeper encryption needs to go (longer keys, better algorithms…) to remain effective. But this is the world we live in.

It is beyond doubt that encryption can be used in war-like scenarios and that it can protect communications and sensitive data, but at the same time it is clear that those are implementations of something that is in the public domain (alas, a big defect of science). You can block the export of those technologies, but you can't prevent a good mathematician from designing a decent algorithm that supersedes your limitation, and some decent coder from implementing it.

So a limitation, per se, is not so smart in the end, unless you think you are the only one able to do those things.

Encryption and criminality

If encryption can be used to protect valuable military information and communications, it can also be used by criminals. No question about it. But again, we are talking about something that is in the public domain (math, you know), and encryption, cryptography and communication masquerading have been out there since… ever.

Targeting one tool would just shift criminals to another tool. Once you make it possible to decrypt the internal iPhone infrastructure, do you think criminals would still rely on it (if they ever did)?

Most communication is passed in the clear: talking, writing or sending videos. But at the same time those communications can contain hidden messages even without encryption. As in a baseball game, when informal signals pass between pitcher and catcher about the next pitch, hiding the content of a message by disguising it as another is common practice. This does not require encryption and can be just as effective.

Actually, this is the most used vehicle of communication when you want to send a "secret" message or store information. Encryption is just one of the tools criminals can use.

Encryption and intelligence

So it seems that, without encryption, intelligence work is not possible? This is quite a curious statement, mostly because it comes, mostly, from the same people who declare they collect only "metadata".

So basically they do mass surveillance (legal or not in other countries) to collect only "metadata", yet the same metadata are useless against terrorism and criminality? That does not make any sense to me.

It is as if the good old intelligence work of former times were now useless and we relied only on decrypting messages.

So let us be clear on this. Metadata can give us a lot of information about a communication transaction and, sometimes, it is all you need, if you are doing your intelligence work with intelligence (nice joke, isn't it? lol).

If you have two suspects, and they start to exchange encrypted messages, well, you have a good reason to make your surveillance stronger.

But what if you do not have suspects? Well, the answer seems to be: decrypt everyone's messages and look inside the content to find out whether it is terrorism related.

Is this effective? Maybe; I do not question it. Is it respectful of privacy? No, it is not. It would be like preventing crime by bringing everyone in front of a court, every citizen; maybe in the end you would even find some criminals, but most would be innocent people dragged in front of a court.

So the whole point here is that, without intelligence, opening a Pandora's box of bad encryption (like the export-grade restrictions that are still harming our digital world) is questionable, to say the least.

Can this make law enforcement and defence agencies' work harder? Yes and no. If this crypto-war is waged to cover inefficiencies in the intelligence capabilities of those agencies, that is for sure a problem.

Unless the point is to substantiate that only mass surveillance can save us all. But it is funny: mass murderers post their statements on Facebook (in the clear) and we do not notice, yet at the same time we keep talking about encryption?

Is it just me who sees an odd situation, or…?

Encryption and the internet

We all know what HTTPS is, or at least we should. We all know what TLS/SSL is, or at least we should. We all know what PKI is, or at least we should.

Internet technology relies heavily on encryption, since encryption is one of the basic pillars of security, authentication, authorization and non-repudiation technologies. Without encryption, none of those things could be effective on the internet, where there is no direct, visible contact between the counterparts.

A system is only as secure as its weakest component; therefore, weakening encryption damages the whole internet.

Let's be clear again: encryption is not the only answer. When we make a VPN (HTTPS, SSL, TLS, IPsec…) we are fairly sure that what we put into one end of the transmission pipe is what will arrive at the other end. But encryption can do little about the content of the transmission itself: if we put manure into one side of the pipe, we will receive manure on the other side. This is why encryption is just one of the technologies that need to be implemented.

But I do not think anyone doubts that without encryption most, if not all, of the achievements of the modern internet economy would not have been possible. Or would you like to pass your credit card data in plain text? (Well, actually, that is what you do when someone swipes your card through a card reader, but that is another story.)

Encryption and privacy

One of the most important values of encryption, these days, is preserving privacy and intellectual property. With the expanding exposure of our lives to the digital world, and the promise of the IoT (Internet of Terror… sorry, my mistake, Internet of Things), encryption is becoming, day by day, the tool that preserves our privacy.

Basically, we once counted on the privacy of our walls, and we still do. But our world has expanded dramatically, and will expand far more in the future.

Being entitled to some privacy is a right, and in some places (such as the EU) it is considered one of the fundamental human rights. Alas, in the digital world only encryption can take over the job of our walls. Weakening encryption means building your home with transparent walls; maybe you like it, maybe not. But I wonder why this glass-house concept has never been presented as a mandatory security tool by enforcement agencies.

That would make it easier to look for fugitives, stolen merchandise, drugs and so on…

Encryption and “backdoors”

"This is only for this phone"… yeah, right…

"I am sorry, I swear I'll never do it again…": how many times have parents heard those words from their kids? We do not believe them, of course; we know they will do it again until the lesson is learned.

It seems that the same approach does not work with grown-ups. They do not learn, even in front of evidence.

The point that some people seem not to understand is that there is not only one owner of knowledge out there. I tried to explain before that modern encryption is based on math, and math is public domain stuff. This basically means that anyone with enough knowledge can work to build, or harm, encryption systems.

When you plan to put in a "backdoor" (or, better, to weaken the way a key is generated in order to make it guessable) to access some data, it is just a matter of time before someone else finds the weakness. Only an idiot can think he or she is the sole owner of that kind of technology.

Cryptologists and security experts worldwide think the same; recent examples of vulnerabilities related to "export grade" encryption crap prove this point, but it seems this is not yet clear to some.

As with climate change (and, why not, creationism), political beliefs are incredibly blind to simple facts: it will not work.

It is not that security experts and cryptologists do not care about security, or do not care about terrorism and criminality. On the contrary, they care a lot. But they are forced to take a vision that is not shortsighted by contingency. If you do it today, someone else will do it tomorrow; it is as simple as that. There is no way to stop researchers from looking for vulnerabilities; they can be good or bad, trustworthy or not, but they will do it, whether you like it or not.

Encryption and trust

But the question of encryption is far deeper and more complicated. There is a problem every time you make a system weaker: you lose trust and create a precedent.

As in the San Bernardino case, there is no way to guarantee that someone else will not ask to access another phone, and another, and so on.

Besides this, it is clear that once you do it for one phone, you will be forced to do it for other phones. And then there are the cloud and the IoT waiting for those requests.

We should face two issues:

  1. Encryption is used to preserve data confidentiality, integrity and transmission. How can you trust a system that is openly weak?
  2. How can we trust the controller?

I tried to express my view on point one clearly above: if you weaken a part, you weaken it all. Basically, it makes the whole system untrustworthy, and since we stated that trust is paramount for security, weakening it will simply shift usage to other tools. It would not be a problem for terrorists to use self-made encryption tools, which might even make the message look like plain text…

But I would like to focus on the second point.

Can I, as a European, trust a system that can be penetrated by US intelligence without my knowledge? I am not talking just from a personal perspective, but also from a governmental one.

The answer is obviously no, even if we are allies. The reason lies in documents and facts that show how even allies have their skeletons. Snowden (and some other reports before him, actually) just made public something we were all aware of, but too focused on denial to take a position on.

We live in an interconnected world, and we cannot think that what we do is without consequences on a global scale. Sure, we can choose not to care, or not to talk about it, but the consequences will hit us whether we like it or not.

Once a nation asks for weakened encryption for "security" reasons, there is no guarantee it will not use it for other purposes as well. This means that export-grade restrictions, now that the world cares about and is aware of the problem, or "backdoors" and similar things, will raise a similar answer from other countries. It is quite amusing to notice that what is a "security matter" for one country can be perceived as a violation by another. Of course we are the good ones, God is with us (jeez, this reminds me of something, maybe in another language), therefore they are the bad guys, aren't they? So we can be trusted and they can't… or maybe we cannot trust anyone, and so we should consider encryption a defence tool against everyone?

I know a balance between privacy and security is hard to find, but if trust is undermined you simply will not have more security, because the bad guys always know how to protect their stuff.

Encryption and business

So, would you buy, or trust for that matter, something with a Clipper chip in it? Seriously? If you do not care about security and privacy, probably yes; if you care, obviously not.

So vendors, technology and service providers would have to make a double offer: with weakened security or without. Maybe offering a steep discount for the weakened-security version of the product. I can imagine the motto:

"Be insecure for your security"


The IoT Files: The need for cryptography


One of the main arguments that any IoT discussion should touch on is cryptography. There is undisputed consensus that cryptography is a mandatory requirement to preserve security and privacy in the IoT world, but we are far from a general consensus on how to proceed.

The need for cryptography in IoT comes from two main aspects:

The first need is clear: encryption is a mandatory requirement when we want to implement any form of authentication and non-repudiation. Encryption is widely used even when we don't know we are using it; PKI and signing certificates are just a couple of examples.

Whenever we transmit something, encryption comes in handy to make sure what we transmit is neither seen by third parties nor tampered with.

Whenever we store something, encryption comes in handy to control access to those data, even at a local level.
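
As a minimal sketch of the data-at-rest case, here is symmetric encryption with the Fernet recipe from Python's third-party cryptography package; on a real IoT device the hard part, storing and managing the key, is deliberately not shown.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()  # the device must protect this key somehow
    reading = b'{"sensor": "temp", "value": 21.5}'

    token = Fernet(key).encrypt(reading)          # what actually gets stored
    assert Fernet(key).decrypt(token) == reading  # recovered with the same key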

Regarding data privacy, there is an even stronger call for encryption, and for a wide use of it. As a system, IoT allows a multitude of devices to exchange data that can become sensitive and private. Without a clear understanding of this point there can be misinterpretation. In IoT the amount of data and metadata will be far bigger than the already impressive amount of data we release into the wild nowadays. So basically a more cautious approach to data privacy will need to be embedded into the very essence of IoT, and therefore encryption will be a mandatory requirement.

But encryption is not an easy area, and I am not talking about implementation (which can be easily achieved) but about the need for and use of this technology.

A little check on the current status

Cryptography is not only a technical or business argument (cost vs performance vs security) but, mainly, a political issue.

The history of cryptography has been dogged by constant attempts to block, or control, the use of good secure cryptography tools in the civil environment. It is no mystery that nowadays we have a lot of discussion about cryptography and backdoors (although the term “backdoor” is misleading and misused most of the time).

The USA, as an example, has a long history of fighting against civil cryptographic tools, both in the past (maybe someone remembers the PGP affair) and in current events (think of the Apple case as a clear example).

Every time we lower the level of security for some reason, we have to expect that sooner or later someone will leverage it and use it for purposes not intended by the regulator. Recent history is full of such examples; some of the actions performed against cryptographic tools are on the news every day. We tend to call them vulnerabilities (SSL/TLS vulnerabilities like FREAK…) but let us be clear about what they actually are: the consequences of export-grade restrictions on cryptography.

There are a lot of laws and regulations related to the use, import and export of cryptography; here are some examples:

This section gives a very brief description of the cryptographic policies in twelve countries. We emphasize that the laws and regulations are continuously changing, and the information given here is not necessarily complete or accurate. For example, export regulations in several countries are likely to change in the near future in accordance with the new U.S. policy. Moreover, some countries might have different policies for tangible and intangible products; intangible products are products that can be downloaded from the Internet. Please consult with export agencies or legal firms with multi-national experience in order to comply with all applicable regulations.

Australia

The Australian government has been criticized for its lack of coordination in establishing a policy concerning export, import, and domestic use of cryptographic products. Recent clarifications state that there are no restrictions on import and domestic use, but that export is controlled by the Department of Defense in accordance with the Wassenaar Arrangement.

Brazil

While there are no restrictions of any kind today, there are proposals for a new law requiring users to register their products. Brazil is not part of the Wassenaar Arrangement.

Canada

There are no restrictions on import and domestic use of encryption products in Canada today. The Canadian export policy is in accordance with the policies of countries such as the United States, the United Kingdom, and Australia in the sense that Canada’s Communications Security Establishment (CSE) cooperates with the corresponding authorities in those countries.

China

China is one of the countries with the strongest restrictions on cryptography; a license is required for export, import, or domestic use of any cryptography product. There are several restrictions on export regulations, and China is not participating in the Wassenaar Arrangement.

The European Union

The European Union strongly supports the legal use of cryptography and is at the forefront of counteracting restrictions on cryptography as well as key escrow and recovery schemes. While this policy is heavily encouraged by Germany, there are a variety of more restrictive policies among the other member states.

France

France used to have strong restrictions on import and domestic use of encryption products, but the most substantial restrictions were abolished in early 1999. Export regulations are pursuant to the Wassenaar Arrangement and controlled by Service Central de la Sécurité des Systèmes d’Information (SCSSI).

Germany

There are no restrictions on the import or use of any encryption software or hardware. Furthermore, the restrictions on export regulations were removed in June 1999.

Italy

While unhindered use of cryptography is supported by the Italian authorities, there have been proposals for cryptography controls. There are no import restrictions, but export is controlled in accordance with the Wassenaar Arrangement by the Ministry of Foreign Trade.

United Kingdom

The policy of the United Kingdom is similar to that of Italy, but with even more outspoken proposals for new domestic cryptography controls. Export is controlled by the Department of Trade and Industry.

Israel

Domestic use, export, and import of cryptographic products are tightly controlled in Israel. There have been proposals for slight relaxations of the regulations, but only for cryptographic products used for authentication purposes.

Japan

There are no restrictions on the import or use of encryption products. Export is controlled in accordance with the Wassenaar Arrangement by the Security Export Control Division of the Ministry of International Trade and Industry.

Russia

The Russian policy is similar to the policies of China and Israel with licenses required for import and domestic use of encryption products. Unlike those countries, however, Russia is a participant of the Wassenaar Arrangement. Export of cryptographic products from Russia generally requires a license.

South Africa

There are no restrictions on the domestic use of cryptography, but import of cryptographic products requires a valid permit from the Armaments Control Division. Export is controlled by the Department of Defense Armaments Development and Protection. South Africa does not participate in the Wassenaar Arrangement.

 

In the table below, 75 countries have been divided into five categories according to their cryptographic policies as of 1999. Category 1 includes countries with a policy allowing for unrestricted use of cryptography, while category 5 consists of countries where cryptography is tightly controlled. The table and most other facts in this answer are collected from [EPIC99], which includes extensive lists of references.

 

1 Canada, Chile, Croatia, Cyprus, Dominica, Estonia, Germany, Iceland, Indonesia, Ireland, Kuwait, Kyrgyzstan, Latvia, Lebanon, Lithuania, Mexico, Morocco, Papua New Guinea, Philippines, Slovenia, Sri Lanka, Switzerland, Tanzania, Tonga, Uganda, United Arab Emirates.
2 Argentina, Armenia, Australia, Austria, Belgium, Brazil, Bulgaria, Czech Republic, Denmark, Finland, France, Greece, Hungary, Italy, Japan, Kenya, South Korea, Luxembourg, Netherlands, New Zealand, Norway, Poland, Portugal, Romania, South Africa, Sweden, Taiwan, Turkey, Ukraine, Uruguay.
3 Hong Kong, Malaysia, Slovakia, Spain, United Kingdom, United States.
4 India, Israel, Saudi Arabia.
5 Belarus, China, Kazakhstan, Mongolia, Pakistan, Russia, Singapore, Tunisia, Venezuela, Vietnam.

NOTE: WHAT IS THE WASSENAAR ARRANGEMENT?

The Wassenaar Arrangement (WA) was founded in 1996 by a group of 33 countries including the United States, Russia, Japan, Australia, and the members of the European Union. Its purpose is to control exports of conventional weapons and sensitive dual-use technology, which includes cryptographic products; “dual-use” means that a product can be used for both commercial and military purposes. The Wassenaar Arrangement controls do not apply to so-called intangible products, which include downloads from the Internet.

WA is the successor of the former Coordinating Committee on Multilateral Export Controls (COCOM), which placed export restrictions on communist countries. It should be emphasized that WA is not a treaty or a law; the WA control lists are merely guidelines and recommendations, and each participating state may adjust its export policy through new regulations. Indeed, there are substantial differences between the export regulation policies of the participating countries.

As of the latest revision in December 1999, WA controls encryption and key management products where the security is based on one or several of the following:

A symmetric algorithm with a key size exceeding 56 bits.

Factorization of an integer of size exceeding 512 bits.

Computation of discrete logarithms in a multiplicative group of a field of size in excess of 512 bits.

Computation of discrete logarithms in a group that is not part of a field, where the size of the group exceeds 112 bits.

Other products, including products based on single-DES, are decontrolled. For more information on the Wassenaar Arrangement, see http://www.wassenaar.org/.

Why IoT needs cryptography and where?

IoT, as a general concept, refers to a multitude of objects that can access the Internet.

The need to access the Internet is related to several aspects: the need to exchange data, receive commands, export outputs…

Of course there are different needs, and different grades of privacy and security are required according to the nature of the object we are talking about: it is not the same thing to talk about a car infotainment system, an autonomous driving system or a GPS, just as it is different to talk about a refrigerator or a SCADA controller in a nuclear plant.

But, no matter what the device is and its role, some assumptions are common to all IoT objects:

  • They have to deal with sensors
  • They have to deal with data
  • They have security and privacy implications
  • They have to store data
  • They have to transmit data
  • They have to receive data

The first point is important in the encryption discussion because sensors can retrieve information that can give an expert eye indications about a lot of things outside the realm of the IoT object itself.

Data are of course the main reason to implement encryption.

Security and privacy implications are the obvious case study for encryption.

The last three points are where encryption should, at least, be implemented.

One of the common mistakes in IoT security considerations is to focus on a specific aspect or device and not see the big picture.

Looking at a specific device is good for implementation, but not good for understanding security and data privacy issues. What can seem trivial in one object assumes a different role in a context, and IoT is all about context.

So the idea is that even if some data can seem harmless, they can assume a different value once merged with other data.

Cryptography’s role, in this context, is to prevent those data from being used for unauthorized and unwanted activities. But cryptography is also one of the basic tools needed to ensure data integrity and non-repudiation.

Cryptography, of course, is not the panacea for every problem, but it is one of the tools to be used when we transmit and store data in order to preserve and safeguard information.

Data transmission

When we have to transmit or receive data, no matter if commands, processed outputs or raw data, we should be confident that our data:

  • Come from a trusted and authorized source
  • Have not been manipulated during transport (data injection, data forgery…)
  • Are protected from unauthorized access (data sniffing…)
  • Are consistent with the requests

Encryption can play its role mainly in the second point, although encryption is also used for authentication and authorization aspects.

Encrypting a transmission allows the data to pass from point A to point B without a third party being able to read it, preventing data exfiltration. And since the key provides a basic level of authentication, data encryption can also provide some defence against the injection of unwanted data.
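As an illustration, here is a minimal sketch of authenticated encryption for a message in transit, using AES-GCM from the third-party Python cryptography package. The sensor name and payload are made up for the example, and how the shared key gets to both ends is exactly the key exchange issue discussed below:

    # Requires: pip install cryptography
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)   # shared between the two ends
    aesgcm = AESGCM(key)

    sensor_id = b"sensor-42"                    # metadata: authenticated, not encrypted
    nonce = os.urandom(12)                      # must never repeat for the same key
    ciphertext = aesgcm.encrypt(nonce, b"temperature=21.5", sensor_id)

    # Any tampering with ciphertext or metadata raises InvalidTag on decrypt.
    plaintext = aesgcm.decrypt(nonce, ciphertext, sensor_id)

The associated data (the sensor identifier here) travels in clear but is covered by the authentication tag, so tampering with either the ciphertext or the metadata makes decryption fail.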

The downside of encryption is related to two aspects: the solidity of the encryption and the key exchange.

Those aspects are not trivial: a 40-bit symmetric encryption key can be easily forced by modern computer systems (see, as an example, the “Bar Mitzvah” attack on the SSL/TLS protocols); therefore 40-bit encryption (see the FREAK lesson) is a clear security hole.
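A back-of-the-envelope sketch makes the key-length point concrete; the search rate below is an assumption for illustration, not a benchmark of any real attacker:

    # Average brute-force time: half the keyspace divided by the search rate.
    # The 1e9 keys/second figure is purely illustrative.
    def brute_force_seconds(key_bits: int, keys_per_second: float = 1e9) -> float:
        return (2 ** key_bits) / 2 / keys_per_second

    for bits in (40, 56, 128):
        print(f"{bits}-bit key: ~{brute_force_seconds(bits):.2e} s on average")

At that assumed rate a 40-bit key falls in roughly ten minutes, while a 128-bit key remains out of reach of any conceivable hardware.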

On the other hand, even a longer encryption key is useless if the key itself is discovered.

Processor time and resources

The longer the key, the more the encryption will take in terms of time and resources. Encryption chipsets are usually the answer to this aspect, while they can do little about key exchange.

The arguments against a wide use of long encryption keys (256-bit) are, in reality, more related to political or cost constraints than to technical ones. And even costs are just partially a problem: scaling up production would make those chips inexpensive.

Of course software encryption is a more economical (but maybe less secure) way to address the question in IoT.

The whole point is to understand how much we can invest in an IoT device in terms of resources.

Another point to take care of is the overhead that encryption adds to network packets. Usually an encryption protocol adds some overhead to the transmission due to bigger packets (although the use of compression can reduce it) and due to the key exchange process, which can require several round trips.
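The per-packet cost is easy to measure. This sketch uses the same assumed AES-GCM setup as above (third-party cryptography package) to show the fixed overhead on a small payload:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)                 # must travel with the packet
    payload = b"x" * 64                    # a small sensor reading
    ciphertext = AESGCM(key).encrypt(nonce, payload, None)

    # AES-GCM appends a 16-byte authentication tag to the ciphertext,
    # so the on-the-wire overhead here is 12 (nonce) + 16 (tag) = 28 bytes.
    print(len(nonce) + len(ciphertext) - len(payload))

Twenty-eight bytes is negligible on a LAN but can matter on constrained radio links where every byte of a tiny sensor reading counts.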

The key exchange issue

The other issue is the key exchange: to use encryption (symmetric or asymmetric) you need to exchange keys with your communication partner.

The key can be:

  • Static
  • Dynamic

A static key is easy to implement and can be hardcoded into the solution. The problem with static keys is that they can be good for storage purposes but not for data transmission: once the key has been discovered, all the security is gone.

Dynamic keys are a more secure solution, and a lot of protocols rely on dynamic keys for data exchange (take SSL/TLS as an example); yet implementations need to be careful in order to avoid the same level of problems discovered in the aforementioned protocols.

One problem is related to how keys are created: a weak protocol can create predictable keys that can be easily guessed, and this is one of the typical requirements of export-grade encryption.

Also, relying on a PKI infrastructure is not, per se, a secure solution: PKI keys can be stolen and/or forged.
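For the dynamic case, here is a minimal sketch of an ephemeral key agreement with X25519 plus HKDF, again using the third-party Python cryptography package. It deliberately omits the authentication of the public keys that real protocols such as TLS add on top, which is precisely where the PKI caveat above comes in:

    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives import hashes

    # Each side generates a fresh key pair per session (forward secrecy).
    alice_priv = X25519PrivateKey.generate()
    bob_priv = X25519PrivateKey.generate()

    # Only public keys cross the wire; private keys never leave the device.
    alice_shared = alice_priv.exchange(bob_priv.public_key())
    bob_shared = bob_priv.exchange(alice_priv.public_key())
    assert alice_shared == bob_shared

    # Derive the actual session key from the raw shared secret.
    session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                       salt=None, info=b"iot-session").derive(alice_shared)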

Data storage

Data should be preserved while we transmit them, but also when we store them.

It seems trivial, but data storage is not as simple as it seems in IoT. We can have different kinds of data: permanent, semi-permanent and volatile.

Let us assume that volatile data are those used in the moment and then destroyed; we should focus on the permanent or semi-permanent ones.

Again this is a generalization, and specific implementations can differ, but generally speaking permanently stored data needs, in the first instance, a storage area.

This area can be local or remote (the cloud), according to the data’s needs.

Apparently the more secure solution would be storing data locally on the device. This is a simplistic approach, since the security of the data stored in a device is strictly related to how secure access to the device is, which is not a given.

If the device is not able to set up a proper authentication and authorization mechanism for internal resources (this is a far more extensive need than locking the door against outside visitors), data stored locally need to be protected from external intrusion.

Encryption is, of course, one of the sound technologies to implement here. As with data transfer, the same key-length arguments discussed before apply. Another important aspect is the ability of the system to wipe physical data moved from the storage area, in order to prevent sophisticated data exfiltration techniques.
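A minimal sketch of local data-at-rest protection follows, assuming a per-device provisioning secret (an assumption of this example, not something the article prescribes): the key is derived with scrypt from the Python standard library and used with AES-GCM from the third-party cryptography package:

    import os, hashlib
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    device_secret = b"per-device provisioning secret"   # hypothetical
    salt = os.urandom(16)                               # stored alongside the data
    key = hashlib.scrypt(device_secret, salt=salt, n=2**14, r=8, p=1, dklen=32)

    nonce = os.urandom(12)
    record = AESGCM(key).encrypt(nonce, b"stored measurement log", None)
    # Persist salt + nonce + record; without the device secret the blob is opaque.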

Again, the problem here is how to deal with the key used to encrypt and decrypt data. This is the scenario we saw in the Apple vs. FBI San Bernardino case, to refer to current events.

What IoT needs

From a security standpoint there is no doubt that a strong encryption approach is necessary for IoT; there are no real justifications, from a technical or economic point of view, against this implementation.

The problem comes from the political approach to encryption. Encryption lives in a dual-identity status as both a civil technology and a military one. Recent geopolitical issues (cyber terrorism and terrorism) have fueled the discussion against encryption, potentially harming future implementations with “backdoor”-style designs (insecurity by design).

Without a common agreement on encryption we can face two different scenarios:

One scenario sees a short-key-length implementation, with practically no security advance besides marketing statements.

Another scenario sees an IoT divided into regions where encryption is or is not allowed, making it impossible for you to travel to specific countries because of the technology implemented in your cardiac stimulator (I assume you could leave your phone and watch at home and use an allowed device).

Of course, neither is what IoT is claimed to be.


The IoT Files: The need for cryptography was originally published on The Puchi Herald Magazine