DTLS and OpenSSL: quick setup for server and client

To quickly set up either a DTLS server or a client, the “s_server” and “s_client” utilities can be used.

On the server, run “s_server”, provide it the certificate and the private key and specify the port:

$ openssl s_server -cert cert.pem -key pk.pem -dtls1 -accept 4444
Enter pass phrase for pk.pem:
Using default temp DH parameters
Using default temp ECDH parameters

On the client, run “s_client” and you should see something akin to the following:

$ openssl s_client -dtls1 -connect xx.xx.xx.xx:4444 -debug
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-SHA
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
Protocol : DTLSv1
Session-ID: ....
Master-Key: .....
Key-Arg : None
PSK identity: None
PSK identity hint: None
SRP username: None
TLS session ticket:

Posted in Technical | Tagged , , , | Leave a comment

CVE-2014-3513 – OPENSSL_NO_SRTP – is it compiled into your version of OpenSSL?

If you want to confirm whether your copy of OpenSSL, compiled months ago (with the options specified at that time long forgotten), provides SRTP support, one way to do that is to use “objdump” on Linux. If the command below lists SRTP-related symbols, then the library was not compiled with the “OPENSSL_NO_SRTP” option.


It seems that SRTP support is compiled in and enabled by default.

The command to use:

$ objdump -f libssl.so -x | grep -i SRTP

SSL 3.0 and POODLE (CVE-2014-3566)

We have a new vulnerability, well explained here. The easiest mitigation is to remove support for SSL 3.0 from the web server, which is in itself a trivial thing to do, be it Apache or Nginx. However, there might be clients that support SSL 3.0 exclusively and none of the TLS versions.

As of now I see that Chrome, Firefox 33 and the Google Web Server (the server that powers its sites) support the TLS fallback SCSV extension.

Once support for TLS Fallback Signaling Cipher Suite Value (SCSV) is available in OpenSSL then the web servers would support it as well.



OpenSSL Heartbeat vulnerability – Heartbleed and Java, BouncyCastle, How to write a program to check

To validate whether a particular web server runs on a version susceptible to “heartbleed”, one could use the plethora of free tests on the web, such as the ones linked from the Wikipedia article on the subject. I am the author of the heartbleed test for Symantec SSL at “https://ssltools.websecurity.symantec.com/checker/views/certCheck.jsp”. Note: that codebase is completely different and follows a completely different approach from what I am going to release to the general public: sample code that demonstrates heartbeat requests with BouncyCastle.

However, if the server to be verified is not accessible from the outside, you would need to write your own tool or download one. There are quite a few in Python and Go, and even one that details the changes to be made to OpenSSL s_client so it can be used to discover the vulnerability. Some of them work only some of the time, and when they do not, one needs to understand why.

The following sections briefly explain the OpenSSL vulnerability, the fix, and how to write a tester of your own.

The vulnerability

An improper heartbeat (HB) request can lead to a vulnerable web server leaking the contents of its memory. This content could be a secret key, its password and so on.

While testing my HB tester program, I noticed attacks on my external-facing web server at a rate of 2 every 10 minutes. There are a lot of attacks going on at this time.

The HB request

The request is in the following format:

HB request => ContentType (1 byte) : TLS Version (2 bytes: major and minor) : Record Size (2 bytes) : Encrypted or not Encrypted bytes

An example in hex: 18 0303 00cc, and then the encrypted or unencrypted HB message follows.


Encrypted or not Encrypted bytes => HB message Type : Payload Size : Payload : Padding

An example in hex for an unencrypted HB request message: 01 00cf, and the actual payload and padding follow.
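The two layers described above can be sketched in Java. This is a minimal, illustrative builder (the class and method names are my own); passing a declared payload length larger than the actual payload produces exactly the kind of malformed request discussed in the next section:

```java
public class HeartbeatRecord {
    // Build an unencrypted TLS heartbeat record:
    //   record layer:  ContentType (0x18) : version (2 bytes) : record size (2 bytes)
    //   HB message:    type (0x01 = request) : declared payload size (2 bytes) : payload : padding
    // declaredLen may exceed actualPayload.length to form a malformed request.
    public static byte[] build(int declaredLen, byte[] actualPayload, byte[] padding) {
        int msgLen = 1 + 2 + actualPayload.length + padding.length;
        byte[] rec = new byte[5 + msgLen];
        rec[0] = 0x18;                      // ContentType: heartbeat
        rec[1] = 0x03; rec[2] = 0x03;       // TLS version 0303 (TLS 1.2)
        rec[3] = (byte) (msgLen >> 8);      // record size, 2 bytes
        rec[4] = (byte) msgLen;
        rec[5] = 0x01;                      // HB message type: request
        rec[6] = (byte) (declaredLen >> 8); // declared payload size, 2 bytes
        rec[7] = (byte) declaredLen;
        System.arraycopy(actualPayload, 0, rec, 8, actualPayload.length);
        System.arraycopy(padding, 0, rec, 8 + actualPayload.length, padding.length);
        return rec;
    }
}
```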

An improper HB request

An improper, attack-vector request carries an actual payload smaller than the size it declares in the “Payload Size” field. As simple as that.

An improper HB request to a vulnerable OpenSSL installation results in it returning an HB response. The same request to a patched / not vulnerable OpenSSL (or any other web server that is not susceptible to HB) results in a TLS alert.

The vulnerability in a little more detail

The malformed message causes an affected OpenSSL version to return a payload of the size specified in the HB request, irrespective of the real payload in the request. The affected OpenSSL version does not validate this aspect, so the response payload is read from memory well beyond the request, leaking memory contents.

 The patch

The patch is a test of the payload length specified in the request against the actual record length.

if (1 + 2 + payload + 16 > s->s3->rrec.length) return 0;
/* silently discard per RFC 6520 sec. 4 */

If the 1 byte that specifies the HB message type (request, in this case), plus the 2 bytes that specify the payload length, plus the specified payload size, plus the minimum size of the padding (16 bytes), is GREATER than the record length, then no HB response is returned.
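Mirroring that check in Java (an illustrative sketch; the test values come from the hex examples above, where the record size is 0xcc = 204 and the declared payload length is 0xcf = 207):

```java
public class HeartbeatCheck {
    // 1 type byte + 2 length bytes + declared payload + 16 bytes minimum padding
    // must fit within the record; otherwise the request is silently discarded
    // (RFC 6520, section 4).
    public static boolean shouldDiscard(int declaredPayloadLen, int recordLen) {
        return 1 + 2 + declaredPayloadLen + 16 > recordLen;
    }
}
```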

Design of a program to test for this vulnerability using Java

  1. Create a Java Socket to the web server
  2. Get a sample of a TLS ClientHello from tcpdump (use s_client to send an HB request and capture the packets to get the sample; make sure it has the Heartbeat extension set up to allow heartbeats).
  3. Write the TLS ClientHello bytes to the socket
  4. Read the ServerHello response bytes till the end
  5. Send in the malformed HB request (constructed as described above)
  6. Check the response: if it is not a TLS alert, then the web server is vulnerable. The server could also reset the connection or time out, and we can assume, in those cases, that the server is not vulnerable. However, there is a small chance of a false negative, especially in the case of a connection timeout: the server would be certified as not vulnerable when the timeout was actually due to a connection issue. Note that to mitigate the heartbleed issue, network administrators have deployed firewall rules that time out a heartbeat request, valid or not.
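The steps above can be sketched as follows. This is a minimal, illustrative skeleton: the host, port and pre-captured ClientHello bytes are placeholders, and drainServerHello (step 4) is left as an exercise; only the response classification is concrete:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class HeartbleedProbe {
    // TLS record-layer content types
    static final int ALERT = 21, HEARTBEAT = 24;

    // Step 6: classify the first response record's content type.
    static String classify(int contentType) {
        if (contentType == HEARTBEAT) return "VULNERABLE";
        if (contentType == ALERT) return "NOT_VULNERABLE";
        return "INCONCLUSIVE"; // reset/timeout: likely patched, possibly a network issue
    }

    // Steps 1-6 as a sketch; clientHello and malformedHb are pre-built byte arrays.
    static String probe(String host, int port, byte[] clientHello, byte[] malformedHb)
            throws IOException {
        try (Socket s = new Socket(host, port)) {            // step 1
            s.setSoTimeout(5000);
            OutputStream out = s.getOutputStream();
            InputStream in = s.getInputStream();
            out.write(clientHello);                          // step 3
            drainServerHello(in);                            // step 4
            out.write(malformedHb);                          // step 5
            return classify(in.read());                      // step 6
        } catch (SocketTimeoutException e) {
            return "INCONCLUSIVE";
        }
    }

    static void drainServerHello(InputStream in) throws IOException {
        /* parse records until ServerHelloDone; omitted for brevity */
    }
}
```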

There are a huge number of types of web servers out there. The above steps would, in all probability, work in properly diagnosing this vulnerability in Apache and Nginx on Linux, but would fail with IBM HTTP Server or IIS. In such a scenario, one would parse the complete ServerHello, check whether it is an extended ServerHello and, if it is, check for the presence of the heartbeat extension.

Also note that the SSL 3.0 specification does not allow an extended ClientHello or ServerHello, so the suggestion is to use a TLS 1.0 ClientHello in this case.

The other alternative approach is to use BouncyCastle and send heartbeat requests after the TLS session has been established: check out TLSProtocolHandler. If I have time, I will post that piece of code for your perusal. As of now, I have experimented with BouncyCastle and have been successful in:

  • Establishing a TLS session
  • Sending an encrypted “valid” TLS heartbeat request and receiving an encrypted heartbeat response.
  • Sending an encrypted “invalid” TLS heartbeat request and receiving an encrypted heartbeat response if the server is vulnerable to heartbleed. Otherwise, we receive an alert, a connection reset or a socket read timeout, which points to a patched server or one unaffected by heartbleed.

NB: It seems to me that OpenSSL only allows a valid heartbeat request after the TLS session has been established. I have tested this (you can test it with s_client or your own tool, perhaps using BouncyCastle) by sending a valid heartbeat request after establishing a TLS session: I established a valid TLS session, sent an encrypted heartbeat using Java and BouncyCastle, and was able to elicit a heartbeat response. I have not cleaned up the code; once I do, I will post it. So, empirically, it seems that even in broken OpenSSL versions, a valid heartbeat request right after ServerHelloDone is disallowed. That would be why a heartbeat response is not forthcoming for a valid heartbeat request sent before the TLS handshake is complete.

Happy testing.





White Paper on RSA versus ECC Certificate Performance Analysis in SSL / TLS

The white paper that I was the lead on, which led to an award at Symantec, is available to the public; you can get it here.

Not only does it explicate the empirical performance of SSL / TLS with these two types of certificates, it also provides an in-depth overview of the protocol.


SOAP and JAX-WS, RPC versus Document Web Services

JAX-WS and RPC versus Document Web Services

I have had this buried on this web site for years and am publishing it on the blog as well.

This article takes a journey that ends with a clear and cogent elucidation of the differences between the various SOAP styles for web services: RPC Literal versus Document Literal versus Document Wrapped. We also talk about the WS-I Basic Profile, which web services need to be compliant with in order to achieve interoperability with consumers on a different platform, technology stack, etc.

The article assumes familiarity with XML, WSDL, SOAP as well as Java (upwards of Java 5 including annotations). You can run any of the examples with JDK 1.6 without downloading any extensions or any other libraries.

With the advent of JDK 1.6, JAX-WS and JAXB support is intrinsically available without the need to download any new libraries since Metro is part of the JDK release now.

Without digressing, let's come down to the differences between the RPC and Document styles with respect to the Java codebase, the WSDL, and the SOAP requests and responses. For the purpose of this illustration, we will create an example with a Java interface annotated with JAX-WS annotations to turn it into a web service, then generate the artifacts and detail the WSDL and the SOAP requests and responses so generated.

Note: the coverage extends to styles that are mandated by the WS-I BP 1.1 (Basic Profile for interoperability of web services).

We will use the Bottom-Up approach, wherein the Java interface is coded first and then the WSDL is generated off it.

Java classes are used for the purpose of illustrating the differences, and the WSDL and schema, as well as the SOAP requests and responses, are displayed to demonstrate the differences between the following SOAP styles:

  1. RPC Literal (Wrapped)
  2. Document Literal
  3. Document Wrapped

The following Java classes are used. They are listed in their entirety (except the package and import statements, for brevity), and any differences introduced for different SOAP styles are highlighted in the relevant sections.

  1. MyServiceIF => this is the web service interface
  2. MyServiceImpl => this is the implementation of the MyServiceIF.
  3. HolderClass1 => this is the singular argument in the exposed web service operation
    • HolderClass2 => this is one of the instance variables of HolderClass1, besides a string and an integer.
  4. EndPointPublisher => as the name suggests, this publishes the web service and automatically generates the artifacts such as the WSDL.

Once the java codebase, WSDL, Schema, SOAP Request and Response are outlined for each of the SOAP styles, thereafter a section explaining the various differences is provided.

RPC Literal Wrapped

RPC-Literal is always wrapped (never BARE). This is a listing of the Java classes mentioned earlier.

Java Listing





Listing 1: The Java codebase.

WSDL and Schema

The WSDL generated for RPC Literal is as follows:


The schema that this WSDL refers to is:


SOAP Request

RPC-Lit SOAP Request

SOAP Response


RPC-Lit SOAP Response


Document Literal (BARE)

The java codebase remains the same except for the following:

  1. The SOAP Binding for the MyServiceIF is updated to specify Document as the style:@SOAPBinding(style=Style.DOCUMENT, use=Use.LITERAL, parameterStyle=ParameterStyle.BARE)
  2. The WebParam annotation now specifies a partName as well. This is to elucidate where the partName would be translated to in the WSDL that would be created.
  3. WS-I BP 1.1 specifies that there should be only one child element in the SOAP body. Since this is a Document Literal (BARE) service, there is no element (such as the name of the operation) encapsulating all the parameters (such as class1 and intArg), so a method with more than one parameter would not be WS-I BP 1.1 compliant. JAX-WS therefore does not allow it and spews out this error:
    Exception in thread “main” com.sun.xml.internal.ws.model.RuntimeModelerException: runtime modeler error: SEI server.MyServiceImpl has method getHolderClass annotated as BARE but it has more than one parameter bound to body. This is invalid. Please annotate the method with annotation: @SOAPBinding(parameterStyle=SOAPBinding.ParameterStyle.WRAPPED)
    To overcome this issue and continue to demonstrate this style, we remove one of the arguments from the method.

Java Listing


@SOAPBinding(style=Style.DOCUMENT, use=Use.LITERAL, parameterStyle=ParameterStyle.BARE)
public interface MyServiceIF {

    HolderClass1 getHolderClass(@WebParam(name="holderClass1Param", partName="holderClass1Param2") HolderClass1 class1);
}


WSDL and Schema

The WSDL so generated for this style is:

Doc Literal WSDL

And the schema that it refers to is:

Doc Literal Schema

SOAP Request

<soapenv:Envelope xmlns:soapenv=”http://schemas.xmlsoap.org/soap/envelope/” xmlns:ser=”http://server/”>













SOAP Response

<S:Envelope xmlns:S=”http://schemas.xmlsoap.org/soap/envelope/”>


<ns2:getHolderResponse xmlns:ns2=”http://server/”>










Document Literal Wrapped

The java codebase remains the same except for the following:

  1. The SOAP Binding for MyServiceIF is updated to make Wrapped the parameter style. The parameterStyle attribute in the SOAPBinding annotation is removed, which implies the service is wrapped, “WRAPPED” being the default for that attribute:
    @SOAPBinding(style=Style.DOCUMENT, use=Use.LITERAL)

Java Listing


WSDL and Schema

The WSDL so generated for this style is:

Doc Wrapped WSDL

And the schema that it refers to is:

Doc Wrapped Schema

SOAP Request

Doc Wrapped SOAP Request

SOAP Response

Doc Wrapped SOAP Response

Differences between the Styles

Request Message

  • RPC Literal (Wrapped): The operation name appears immediately after the soap:body; it is specified by the binding:operation element in the binding section of the WSDL. The name attribute of the message:part follows immediately; it is not qualified by a namespace. Thereafter, the names of the elements in the types section of the WSDL are specified.
  • Document Literal: The operation name is not specified in the request. The value specified by the element attribute of message:part is the first element following the soap:body; it is qualified by a namespace. Note that this value of the element attribute is actually the value of the name attribute of the schema element in the types section. Thereafter, it is similar to RPC Literal in the way the names of the elements in the types section of the WSDL are specified.
  • Document Wrapped: It is similar to the “Document Literal Bare” style with one exception => the value of the “element” attribute in the message:part is defined to be the name of the operation. Therefore the name of the operation is part of the request and appears immediately after the soap:body. Thereafter, it is similar to RPC Literal.

WS-I BP 1.1 Compliance

  • RPC Literal (Wrapped): It is WS-I BP 1.1 compliant even though there are many parts in the input message. This is because the first element after the soap:body is the name of the operation, and that encapsulates them all.
  • Document Literal: Since it can have multiple parts immediately following the soap:body, it is not WS-I BP 1.1 compliant. To make it compliant, a wrapper needs to be defined, which implies the web method can only have one argument. You could circumvent this requirement by defining the arguments to be part of the SOAP header instead of the body.
  • Document Wrapped: It is WS-I BP 1.1 compliant.

WSDL

  • RPC Literal (Wrapped): There can be many parts in the input message. The parts are always specified with a “type” attribute.
  • Document Literal: There can be many parts in the input message. The parts are always specified with an “element” attribute.
  • Document Wrapped: There is only one part in the input message. The part is always specified by an “element” attribute. This part is the entire message payload and is completely defined in the types section.



Cassandra version 1.2 and Amazon EC2 MultiRegion replication and RandomPartitioner

This post explicates the configuration and deployment of a Cassandra v1.2 cluster across 2 Amazon EC2 regions: one EC2 instance in Oregon and the other in Virginia. Note that the instances are in a single cluster that spans multiple Amazon regions. The “cassandra-stress” utility (bundled with Cassandra) is used to test the insertion of 1M records of 2KB each in one region, subsequently read in the other region.

Configuration of a 2 node cluster – one node in each region

One can extend the cluster to as many nodes as required based on the steps outlined herein for creating a 2 node cluster. Please note that these steps are an enabler for creating a multi-region cluster of ‘X’ nodes where ‘X’ is, of course, greater than 2. :-) You would not want to have a 2 node cluster – much less 2 nodes spread across 2 regions.

  1. Download and unzip / untar the cassandra 1.2 binary.
  2. cd conf and open up cassandra.yaml for editing:

    cluster_name: 'GK Cluster' [Update the cluster name]
    num_tokens: [Keep this commented]
    initial_token: 0 [set this to 0 for the first node]
    partitioner: RandomPartitioner [Replace the default with this]
    data_file_directories: /fs/fs1/cassandra/data [See below for details]
    commitlog_directory: /fs/fs2/cassandra/commitlog [See below for details]
    saved_caches_directory: /fs/fs2/cassandra/saved_caches [See below for details]
    seeds: "X.X.X.X,Y.Y.Y.Y" [comma separate public IPs of EC2 instances - one for each region]
    listen_address: pri.pri.pri.pri [Private IP of this instance]
    broadcast_address: pub.pub.pub.pub [Public IP of this instance]
    rpc_address: [Replace with this]
    endpoint_snitch: Ec2MultiRegionSnitch [Replace with this snitch]

    The data_file and commit_log directories should be on two different disks. If you are using the new hi1.4xlarge instance to host Cassandra nodes, then there are 2 TB of local SSD storage; these 2 volumes need to be formatted (to ext4) and mounted. Thereafter, one of them can be used for the commit log and the other for the data files. The initial_token is to be calculated using the “tools/bin/token-generator” tool. In our case, we have one node in each region. Please note that if we had multiple nodes in each region, then each region should be partitioned as if it were its own distinct ring.
  3. Repeat the preceding configuration step on the other node in the other region.
  4. Start up both the nodes.
  5. Check the status of the ring on node tool:
    $ bin/nodetool ring
    Datacenter: us-east
    Replicas: 0
    Address         Rack        Status State   Load            Owns                Token                                       
    X.X.X.X         1a          Up     Normal  71.18 KB        50.00%              0                                           
    Datacenter: us-west-2
    Replicas: 0
    Address         Rack        Status State   Load            Owns                Token                                       
    Y.Y.Y.Y         2a          Up     Normal  43.18 KB        50.00%              169417178424467235000914166253263322299

This concludes the setup of the cluster spread across two Amazon regions.
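The initial_token values mentioned in step 2 can also be reasoned about directly. With the RandomPartitioner, tokens fall in the range 0 to 2^127 − 1, and evenly spacing N nodes assigns node i the token i · 2^127 / N. A quick sketch (illustrative; the actual token-generator tool adds per-datacenter offsets for multi-region rings, so its output may differ):

```java
import java.math.BigInteger;

public class TokenCalc {
    // RandomPartitioner token range is [0, 2^127). Evenly space N nodes:
    // token(i) = i * 2^127 / N. Multi-DC setups typically add a small
    // per-datacenter offset so the rings do not collide.
    public static BigInteger token(int i, int n) {
        return BigInteger.valueOf(2).pow(127)
                .multiply(BigInteger.valueOf(i))
                .divide(BigInteger.valueOf(n));
    }
}
```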

Replication across regions

To demonstrate the replication of data from one region to the other, we would need to define a KeySpace with a RF (Replication Factor) of 2. Thus, there would be 2 replicas for each column family in it – one in each region.

We can utilize the “cassandra-stress” utility, which is configurable to create KeySpaces, specify snitches, create a test payload of a given size and so on. The following two steps delineate its usage and demonstrate replication across regions.

Step 1:

In one node, we would run “cassandra-stress” to write to the cluster.

$ tools/bin/cassandra-stress -S 2048 -c 1 -e ONE  -n 1000000 -r -R NetworkTopologyStrategy --strategy-properties='us-east:1,us-west-2:1' -i 3
Created keyspaces. Sleeping 1s for propagation.

Here we write a million rows of 2048 bytes each with a consistency level of “ONE”. We also specify that the KeySpace to be created (if the KeySpace that cassandra-stress utilizes does not exist, it creates it) should have replication across regions: “us-east:1” and “us-west-2:1”. The one million rows are created in 59 seconds.

To check the number of keys inserted, run the following:

$ bin/nodetool cfstats | more
 Column Family: Standard1
                SSTable count: 2
                Space used (live): 2017340163
                Space used (total): 2017452629
                Number of Keys (estimate): 951424

The number of keys is close to one million – please note that this is an estimate.

Step 2:

At the other node, we read from the cluster.

$ tools/bin/cassandra-stress -o read -n 1000000

We see that one million keys are read in 50 seconds.

To check the number of keys on this node:

$ bin/nodetool cfstats | more
 Column Family: Standard1
                SSTable count: 2
                Space used (live): 2041994209
                Space used (total): 2042106846
                Number of Keys (estimate): 963072

Status of the ring after the write and read operations:

$ bin/nodetool ring
Note: Ownership information does not include topology; for complete information, specify a keyspace

Datacenter: us-east
Address         Rack        Status State   Load            Owns                Token                                       

X.X.X.X         1a          Up     Normal  1.88 GB         50.00%              0                                           

Datacenter: us-west-2
Address         Rack        Status State   Load            Owns                Token                                       

Y.Y.Y.Y         2a          Up     Normal  1.9 GB          50.00%              169417178424467235000914166253263322299

OpenSSL verify a certificate chain (chain verification and validation) using the “verify” command

In addition to the verification of the chain through the “s_client” command demonstrated earlier in the series, one can also use the “verify” command to do the same. It is easier in the case when the certificate chain is not already installed on a web server (in that case we can use the verify option with the “s_client” command) or when it is a chain for client certificates.

In the following example, we have an end-entity client certificate (PEM encoded) in 1.pem and the intermediate certificate in 2.pem. The root self-signed CA certificate is in 3.pem. We are verifying the end-entity certificate (1.pem) against the intermediate CA certificate (2.pem).

$ openssl verify -verbose -purpose sslclient -CAfile 2.pem 1.pem
1.pem: C = US, CN = XXX, O = YYY
error 20 at 0 depth lookup:unable to get local issuer certificate

To delve deeper into the failure, we need to add the “issuer_checks” option to display all the checks that take place. We notice that the intermediate certificate does not have the “Certificate Signing” (keyCertSign) bit set, so the verification fails. We need to ask for an intermediate CA certificate with the right key usage bits. Please see “keyCertSign” in RFC 5280.

$ openssl verify -verbose -issuer_checks -purpose sslclient -CAfile 2.pem 1.pem
1.pem: C = US, CN = XXX, O = YYY
error 29 at 0 depth lookup:subject issuer mismatch
error 32 at 0 depth lookup:key usage does not include certificate signing
error 20 at 0 depth lookup:unable to get local issuer certificate
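The same keyCertSign requirement can be checked programmatically. In Java, X509Certificate.getKeyUsage() returns the keyUsage bits as a boolean array indexed per RFC 5280, with keyCertSign at index 5; the helper class below is a hypothetical sketch:

```java
public class KeyCertSignCheck {
    // keyUsage bits as returned by java.security.cert.X509Certificate.getKeyUsage():
    // index 5 is keyCertSign (RFC 5280). A null array means the certificate carries
    // no keyUsage extension, i.e. no usage restriction.
    public static boolean canSignCertificates(boolean[] keyUsage) {
        return keyUsage == null || (keyUsage.length > 5 && keyUsage[5]);
    }
}
```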



X509 certificate and keyUsage

The keyUsage extension, as delineated in RFC 5280, specifies the purpose of the key (public key) contained in the certificate.

For instance:

  1. “keyEncipherment” implies that the public key is used to encrypt private or secret keys.
  2. “digitalSignature” implies that the public key can be used to validate digital signatures.
  3. “keyAgreement” implies that the public key is used for key agreement, as in the DH case. The key agreement algorithm could be ECDH (Elliptic Curve DH), where the public key of the end-entity certificate is an ECDH public key. The certificate could be signed by any normal CA, for example with its ECDSA or RSA private keys. So in the case of an ECC certificate, or any certificate containing an ECC public key, one would find the same ECC public key being utilized for key agreement, as in the ECDH (not ECDHE) case. Note that ECDHE does not require this keyUsage bit to be set.

For the other bits in the keyUsage extension, please see the RFC.
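As a quick reference, the JDK exposes these bits through X509Certificate.getKeyUsage() as a boolean array; the index-to-name mapping below follows the bit order defined in RFC 5280 (the helper class itself is hypothetical, for illustration):

```java
public class KeyUsageBits {
    // Bit names in the order defined by RFC 5280 and used by
    // java.security.cert.X509Certificate.getKeyUsage().
    private static final String[] NAMES = {
        "digitalSignature", "nonRepudiation", "keyEncipherment",
        "dataEncipherment", "keyAgreement", "keyCertSign",
        "cRLSign", "encipherOnly", "decipherOnly"
    };

    public static String name(int bit) {
        return NAMES[bit];
    }
}
```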



What is the encoding of the SSL certificates on the wire and how is the certificate chain configured?

It is DER, and it follows the RFC for TLS v1.2. I opened up Wireshark, exported the raw bytes of one of the certificates in the chain transmitted by the server in the SSL / TLS “Certificate” message, decoded it, and validated the DER encoding. This was on an HTTPS connection to an Apache web server.

The other question that web server administrators and writers of server certificate verification code need answered is the order of the certificates in the certificate chain sent back by the web server. The RFC provides details on that as well: the sender’s certificate must come first, followed by the certificate that certifies it, and so on.
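That ordering rule can be expressed as a simple invariant: each certificate's issuer must be the subject of the next certificate in the chain. A sketch, modeled on (subject, issuer) name pairs rather than real X509Certificate objects for illustration:

```java
import java.util.List;

public class ChainOrder {
    // Per the TLS RFC, the sender's certificate comes first and each following
    // certificate must certify the one preceding it: issuer(cert[i]) must equal
    // subject(cert[i+1]). Each chain entry here is {subjectName, issuerName}.
    public static boolean isProperlyOrdered(List<String[]> chain) {
        for (int i = 0; i + 1 < chain.size(); i++) {
            String issuerOfCurrent = chain.get(i)[1];
            String subjectOfNext = chain.get(i + 1)[0];
            if (!issuerOfCurrent.equals(subjectOfNext)) return false;
        }
        return true;
    }
}
```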
