White Paper on RSA versus ECC Certificate Performance Analysis in SSL / TLS

The white paper that I led at Symantec, and that earned an award there, is available to the public; you can get it here.

Not only does it present the empirical performance of SSL / TLS with these two types of certificates, it also provides an in-depth overview of the protocol.


SOAP and JAX-WS, RPC versus Document Web Services


I have had this buried on this web site for years and am publishing it on the blog as well.

This article works its way toward a clear explanation of the differences between the various SOAP styles for web services: RPC Literal, Document Literal, and Document Wrapped. We also talk about the WS-I Basic Profile, which web services need to comply with in order to achieve interoperability with consumers on a different platform, technology stack, and so on.

The article assumes familiarity with XML, WSDL, and SOAP, as well as Java (Java 5 and up, including annotations). You can run any of the examples with JDK 1.6 without downloading any extensions or other libraries.

With the advent of JDK 1.6, JAX-WS and JAXB support is intrinsically available without the need to download any new libraries, since the JAX-WS reference implementation (from the Metro stack) is now part of the JDK release.

Without digressing further, let's get down to the differences between the RPC and Document styles with respect to the Java codebase, the WSDL, and the SOAP requests and responses. For the purpose of this illustration, we will create an example with a Java interface, annotate it with JAX-WS annotations to turn it into a web service, generate the artifacts, and then walk through the WSDL and the SOAP requests and responses so generated.

Note: the coverage extends to styles that are mandated by the WS-I BP 1.1 (Basic Profile for interoperability of web services).

We will use the bottom-up approach, wherein the Java interface is coded first and the WSDL is then generated from it.

A common set of Java classes is used to illustrate the differences, and the WSDL and schema as well as the SOAP requests and responses are displayed to demonstrate the differences between the following SOAP styles:

  1. RPC Literal (Wrapped)
  2. Document Literal
  3. Document Wrapped

The following Java classes are used. They are listed in their entirety (minus the package and import statements, for brevity), and any differences introduced for a particular SOAP style are highlighted in the relevant section.

  1. MyServiceIF => this is the web service interface
  2. MyServiceImpl => this is the implementation of MyServiceIF
  3. HolderClass1 => this is the single argument of the exposed web service operation
    • HolderClass2 => this is one of the instance variables of HolderClass1, besides a String and an int
  4. EndPointPublisher => as the name suggests, this publishes the web service and automatically generates artifacts such as the WSDL

Once the Java codebase, WSDL, schema, and SOAP request and response have been outlined for each SOAP style, a section explaining the various differences follows.


RPC Literal Wrapped

RPC-Literal is always wrapped (never BARE). What follows is a listing of the Java classes mentioned earlier.

Java Listing

MyServiceIF, MyServiceImpl, HolderClass1, HolderClass2, EndPointPublisher

Listing 1: The Java codebase (embedded as images in the original post; a sketch follows).
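
The listings above were embedded as images in the original post. A minimal sketch consistent with the classes as described – member names follow the SOAP messages shown later in the article, while the package name and method bodies are assumptions – might look like this:

@WebService
@SOAPBinding(style=Style.RPC, use=Use.LITERAL) // RPC is implicitly WRAPPED
public interface MyServiceIF {

    @WebMethod(operationName="getHolder")
    HolderClass1 getHolderClass(@WebParam(name="holderClass1Param") HolderClass1 class1,
                                @WebParam(name="intArg") int intArg);
}

@WebService(endpointInterface="server.MyServiceIF")
public class MyServiceImpl implements MyServiceIF {

    public HolderClass1 getHolderClass(HolderClass1 class1, int intArg) {
        // echo a populated instance back so the response payload is non-trivial
        return class1;
    }
}

public class HolderClass1 {
    private HolderClass2 holder2; // nested complex type
    private int i;
    private String name;
    // getters and setters elided
}

public class HolderClass2 {
    private int i;
    private String name;
    // getters and setters elided
}

public class EndPointPublisher {

    public static void main(String[] args) {
        // publishes the service; the WSDL is generated at <address>?wsdl
        javax.xml.ws.Endpoint.publish("http://localhost:8080/myservice", new MyServiceImpl());
    }
}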

WSDL and Schema

The WSDL generated for RPC Literal is as follows:

RPC-Lit_WSDL (embedded as an image in the original post)

The schema that this WSDL refers to is:

RPC-Lit_Schema (embedded as an image in the original post)

SOAP Request

RPC-Lit SOAP Request (embedded as an image in the original post)

SOAP Response

RPC-Lit SOAP Response (embedded as an image in the original post; a sketch of both messages follows)
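
The captures above were embedded as images in the original post. Based on the structure summarized in the comparison section at the end (the qualified operation element first, then unqualified part names), an RPC-Literal exchange for this service would look roughly like this – the values are placeholders:

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ser="http://server/">
    <soapenv:Header/>
    <soapenv:Body>
        <ser:getHolder>
            <holderClass1Param>
                <holder2>
                    <i>2</i>
                    <name>?</name>
                </holder2>
                <i>1</i>
                <name>?</name>
            </holderClass1Param>
            <intArg>1</intArg>
        </ser:getHolder>
    </soapenv:Body>
</soapenv:Envelope>

And the response (the RPC style names the output part "return" by default):

<S:Envelope xmlns:S="http://schemas.xmlsoap.org/soap/envelope/">
    <S:Body>
        <ns2:getHolderResponse xmlns:ns2="http://server/">
            <return>
                <holder2>
                    <i>6</i>
                    <name>name_holderClass2</name>
                </holder2>
                <i>2</i>
                <name>name_holderClass1</name>
            </return>
        </ns2:getHolderResponse>
    </S:Body>
</S:Envelope>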

 


Document Literal (BARE)

The Java codebase remains the same except for the following:

  1. The SOAP binding for MyServiceIF is updated to specify Document as the style: @SOAPBinding(style=Style.DOCUMENT, use=Use.LITERAL, parameterStyle=ParameterStyle.BARE)
  2. The WebParam annotation now specifies a partName as well, to show where the partName ends up in the generated WSDL.
  3. WS-I BP 1.1 specifies that there should be only one child element of the soap:Body. Since this is a Document Literal (BARE) service, there is no element (such as the operation name) encapsulating all the parameters (class1 and intArg), so a method with more than one parameter bound to the body would not be WS-I BP 1.1 compliant. JAX-WS therefore disallows it and fails with this error:
    Exception in thread "main" com.sun.xml.internal.ws.model.RuntimeModelerException: runtime modeler error: SEI server.MyServiceImpl has method getHolderClass annotated as BARE but it has more than one parameter bound to body. This is invalid. Please annotate the method with annotation: @SOAPBinding(parameterStyle=SOAPBinding.ParameterStyle.WRAPPED)
    To overcome this issue and still demonstrate the style, we remove one of the method's arguments.

Java Listing

@WebService
@SOAPBinding(style=Style.DOCUMENT, use=Use.LITERAL, parameterStyle=ParameterStyle.BARE)
public interface MyServiceIF {

    @WebMethod(operationName="getHolder")
    HolderClass1 getHolderClass(@WebParam(name="holderClass1Param", partName="holderClass1Param2") HolderClass1 class1);

}

 

WSDL and Schema

The WSDL so generated for this style is:

Doc Literal WSDL (embedded as an image in the original post)

And the schema that it refers to is:

Doc Literal Schema (embedded as an image in the original post)

SOAP Request

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ser="http://server/">
    <soapenv:Header/>
    <soapenv:Body>
        <ser:holderClass1Param>
            <holder2>
                <i>2</i>
                <name>?</name>
            </holder2>
            <i>1</i>
            <name>?</name>
        </ser:holderClass1Param>
    </soapenv:Body>
</soapenv:Envelope>

SOAP Response

<S:Envelope xmlns:S="http://schemas.xmlsoap.org/soap/envelope/">
    <S:Body>
        <ns2:getHolderResponse xmlns:ns2="http://server/">
            <holder2>
                <i>6</i>
                <name>name_holderClass2</name>
            </holder2>
            <i>2</i>
            <name>name_holderClass1</name>
        </ns2:getHolderResponse>
    </S:Body>
</S:Envelope>


Document Literal Wrapped

The Java codebase remains the same except for the following:

  1. The SOAP binding for MyServiceIF is updated to make the parameter style Wrapped: the parameterStyle attribute is removed from the SOAPBinding annotation, and since WRAPPED is the default for that attribute, the service is wrapped. The binding becomes: @SOAPBinding(style=Style.DOCUMENT, use=Use.LITERAL)

Java Listing

Doc-Wrapped_MyServiceIF (embedded as an image in the original post; a sketch follows)
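
The listing was embedded as an image in the original post; given the changes described above, the interface is essentially:

@WebService
@SOAPBinding(style=Style.DOCUMENT, use=Use.LITERAL) // parameterStyle defaults to WRAPPED
public interface MyServiceIF {

    // the second parameter can come back, since the generated wrapper element
    // keeps the soap:Body down to a single child
    @WebMethod(operationName="getHolder")
    HolderClass1 getHolderClass(@WebParam(name="holderClass1Param") HolderClass1 class1,
                                @WebParam(name="intArg") int intArg);
}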

WSDL and Schema

The WSDL so generated for this style is:

Doc Wrapped WSDL (embedded as an image in the original post)

And the schema that it refers to is:

Doc Wrapped Schema (embedded as an image in the original post)

SOAP Request

Doc Wrapped SOAP Request (embedded as an image in the original post)

SOAP Response

Doc Wrapped SOAP Response (embedded as an image in the original post; a sketch of the request follows)
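
The captures were embedded as images in the original post. Per the comparison section below, the wrapped request places the qualified operation wrapper element right after the soap:Body, so it would look roughly like this (values are placeholders):

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ser="http://server/">
    <soapenv:Header/>
    <soapenv:Body>
        <ser:getHolder>
            <holderClass1Param>
                <holder2>
                    <i>2</i>
                    <name>?</name>
                </holder2>
                <i>1</i>
                <name>?</name>
            </holderClass1Param>
            <intArg>1</intArg>
        </ser:getHolder>
    </soapenv:Body>
</soapenv:Envelope>

Note how close this is, on the wire, to the RPC Literal request; the real differences show up in the WSDL, as the next section spells out.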


Differences between the Styles

Request Message

  • RPC Literal (Wrapped): The operation name appears immediately after the soap:body (the operation name is specified by the binding:operation element in the binding section of the WSDL). The name attribute of the message:part follows immediately; it is not qualified by a namespace. Thereafter the names of the elements from the types section of the WSDL appear.
  • Document Literal: The operation name is not specified in the request. The element immediately following the soap:body is the value specified by the "element" attribute of the message:part, and it is qualified by a namespace. (This value of the element attribute is actually the value of the name attribute of the schema element in the types section.) Thereafter it is similar to RPC Literal in the way the names of the elements from the types section appear.
  • Document Wrapped: Similar to the Document Literal (BARE) style, with one exception: the value of the "element" attribute in the message:part is defined to be the name of the operation. The operation name is therefore part of the request and appears immediately after the soap:body. Thereafter it is similar to RPC Literal.

WS-I BP 1.1 Compliance

  • RPC Literal (Wrapped): Compliant even though there are many parts in the input message, because the first element after the soap:body is the operation name, which encapsulates them all.
  • Document Literal: Since multiple parts can immediately follow the soap:body, it is not compliant. To make it compliant, a wrapper needs to be defined, which implies that the web method can only have one argument bound to the body. (One could circumvent this restriction by binding the other arguments to the SOAP header instead of the body.)
  • Document Wrapped: Compliant.

WSDL

  • RPC Literal (Wrapped): There can be many parts in the input message. The parts are always specified with a "type" attribute.
  • Document Literal: There can be many parts in the input message. The parts are always specified with an "element" attribute.
  • Document Wrapped: There is only one part in the input message, always specified with an "element" attribute. This part is the entire message payload and is completely defined in the types section.

 


Cassandra version 1.2 and Amazon EC2 MultiRegion replication and RandomPartitioner

This post covers the configuration and deployment of a Cassandra v1.2 cluster across two Amazon EC2 regions – one EC2 instance in Oregon and the other in Virginia, both members of a single cluster that spans the regions. The "cassandra-stress" utility (bundled with Cassandra) will be used to insert 1M records of 2KB each in one region and subsequently read them back in the other region.

Configuration of a 2 node cluster – one node in each region

The steps outlined here create a 2-node cluster, and one can extend the cluster to as many nodes as required. Please treat these steps as an enabler for creating a multi-region cluster of 'X' nodes where 'X' is, of course, greater than 2. :-) You would not want to run a 2 node cluster – much less 2 nodes spread across 2 regions.

  1. Download and unzip / untar the cassandra 1.2 binary.
  2. cd conf and open up cassandra.yaml for editing:

    cluster_name: 'GK Cluster' [Update the cluster name]
    num_tokens: [Keep this commented]
    initial_token: 0 [set this to 0 for the first node]
    partitioner: RandomPartitioner [Replace the default with this]
    data_file_directories: /fs/fs1/cassandra/data [See below for details]
    commitlog_directory: /fs/fs2/cassandra/commitlog [See below for details]
    saved_caches_directory: /fs/fs2/cassandra/saved_caches [See below for details]
    seeds: "X.X.X.X,Y.Y.Y.Y" [comma separate public IPs of EC2 instances - one for each region]
    listen_address: pri.pri.pri.pri [Private IP of this instance]
    broadcast_address: pub.pub.pub.pub [Public IP of this instance]
    rpc_address: 0.0.0.0 [Replace with this]
    endpoint_snitch: Ec2MultiRegionSnitch [Replace with this snitch]

    The data_file_directories and commitlog_directory should be on two different disks. If you are using the new hi1.4xlarge instance to host Cassandra nodes, it comes with 2 TB of local SSD storage across two volumes; format them (ext4) and mount them, then use one for the commit log and the other for the data files. The initial_token is calculated using the "tools/bin/token-generator" tool. In our case we have one node in each region; note that if we had multiple nodes in each region, each region should be partitioned as if it were its own distinct ring. (A sketch of the token calculation appears after the ring status below.)
  3. Repeat the preceding configuration step on the other node in the other region.
  4. Start up both the nodes.
  5. Check the status of the ring with nodetool:
    $ bin/nodetool ring
    
    Datacenter: us-east
    ==========
    Replicas: 0
    
    Address         Rack        Status State   Load            Owns                Token                                       
    
    X.X.X.X         1a          Up     Normal  71.18 KB        50.00%              0                                           
    
    Datacenter: us-west-2
    ==========
    Replicas: 0
    
    Address         Rack        Status State   Load            Owns                Token                                       
    
    Y.Y.Y.Y         2a          Up     Normal  43.18 KB        50.00%              169417178424467235000914166253263322299

This concludes the setup of the cluster spread across two Amazon regions.
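
For reference, token-generator essentially divides the RandomPartitioner range (0 to 2^127) evenly among the nodes of each datacenter, treating each datacenter as its own ring and offsetting the datacenters so that no two nodes end up with the same token. A rough Java sketch of that calculation (the offset scheme here is illustrative, not the tool's exact algorithm):

import java.math.BigInteger;

public class TokenSketch {

    // RandomPartitioner tokens range over 0 .. 2^127
    private static final BigInteger RANGE = BigInteger.ONE.shiftLeft(127);

    // i-th token for a datacenter of nodeCount nodes; dcIndex adds a small
    // offset so that nodes in different datacenters never share a token
    static BigInteger token(int i, int nodeCount, int dcIndex) {
        return RANGE.multiply(BigInteger.valueOf(i))
                    .divide(BigInteger.valueOf(nodeCount))
                    .add(BigInteger.valueOf(dcIndex));
    }

    public static void main(String[] args) {
        System.out.println(token(0, 1, 0)); // the us-east node
        System.out.println(token(0, 1, 1)); // the us-west-2 node
    }
}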

Replication across regions

To demonstrate the replication of data from one region to the other, we need to define a KeySpace with an RF (Replication Factor) of 2. Thus, there would be 2 replicas for each column family in it – one in each region.

We can utilize the "cassandra-stress" utility, which can be configured to create KeySpaces, specify replication strategies, create a test payload of a given size, and so on. The following two steps delineate its usage and demonstrate replication across regions.

Step 1:

On one node, we run "cassandra-stress" to write to the cluster.

$ tools/bin/cassandra-stress -S 2048 -c 1 -e ONE  -n 1000000 -r -R NetworkTopologyStrategy --strategy-properties='us-east:1,us-west-2:1' -i 3
Created keyspaces. Sleeping 1s for propagation.
total,interval_op_rate,interval_key_rate,latency/95th/99th,elapsed_time
20673,6891,6891,1.2,6.1,114.1,3
70052,16459,16459,0.5,2.7,214.4,6
128452,19466,19466,0.4,2.2,115.8,9
178211,16586,16586,0.4,1.9,115.8,12
226423,16070,16070,0.4,1.8,99.0,15
282094,18557,18557,0.4,1.8,99.0,18
325845,14583,14583,0.4,1.7,99.0,21
387316,20490,20490,0.4,1.5,99.0,24
445825,19503,19503,0.4,1.3,99.0,27
497455,17210,17210,0.4,1.3,43.1,31
551568,18037,18037,0.4,1.2,43.1,34
606531,18321,18321,0.4,1.2,53.5,37
662429,18632,18632,0.4,1.2,96.5,40
716008,17859,17859,0.4,1.1,96.5,43
775517,19836,19836,0.4,1.1,96.5,46
831456,18646,18646,0.4,1.1,96.5,49
875923,14822,14822,0.4,1.1,96.5,52
925837,16638,16638,0.4,1.1,96.5,55
986341,20168,20168,0.4,1.1,96.5,59
1000000,4553,4553,0.4,1.0,96.5,59
END

Here we write a million rows of size 2048 bytes with a consistency level of ONE. We also specify that the KeySpace to be created (if the KeySpace that cassandra-stress utilizes does not exist, it creates it) should have replication across regions – "us-east:1" and "us-west-2:1". The one million rows are created in 59 seconds.

To check the number of keys inserted, run the following:

$ bin/nodetool cfstats | more
 Column Family: Standard1
                SSTable count: 2
                Space used (live): 2017340163
                Space used (total): 2017452629
                Number of Keys (estimate): 951424

The number of keys is close to one million – please note that this is an estimate.

Step 2:

On the other node, we read from the cluster.

$ tools/bin/cassandra-stress -o read -n 1000000
total,interval_op_rate,interval_key_rate,latency/95th/99th,elapsed_time
140813,14081,14081,0.7,3.7,33.6,10
351872,21105,21105,0.5,2.4,36.7,20
567648,21577,21577,0.5,1.8,37.3,30
786035,21838,21838,0.5,1.7,37.3,40
1000000,21396,21396,0.5,1.3,37.3,50
END

We see that one million keys are read in 50 seconds.

To check the number of keys on this node:

$ bin/nodetool cfstats | more
 Column Family: Standard1
                SSTable count: 2
                Space used (live): 2041994209
                Space used (total): 2042106846
                Number of Keys (estimate): 963072

Status of the ring after the write and read operations:

$ bin/nodetool ring
Note: Ownership information does not include topology; for complete information, specify a keyspace

Datacenter: us-east
==========
Address         Rack        Status State   Load            Owns                Token                                       

X.X.X.X         1a          Up     Normal  1.88 GB         50.00%              0                                           

Datacenter: us-west-2
==========
Address         Rack        Status State   Load            Owns                Token                                       

Y.Y.Y.Y         2a          Up     Normal  1.9 GB          50.00%              169417178424467235000914166253263322299

OpenSSL verify a certificate chain (chain verification and validation) using the “verify” command

In addition to verifying the chain through the "s_client" command demonstrated earlier in the series, one can also use the "verify" command to do the same. It is easier when the certificate chain is not already installed on a web server (in that case we can use the verify option of the "s_client" command) or when it is a chain for client certificates.

In the following example, we have an end-entity client certificate (PEM encoded) in 1.pem and the intermediate certificate in 2.pem. The root self-signed CA certificate is in 3.pem. We are verifying the end-entity certificate (1.pem) with the intermediate CA certificate (2.pem).


$ openssl verify -verbose -purpose sslclient -CAfile 2.pem 1.pem
1.pem: C = US, CN = XXX, O = YYY
error 20 at 0 depth lookup:unable to get local issuer certificate

To delve deeper into the failure, we add the "issuer_checks" option to display all the checks that are taking place. We then notice that the intermediate certificate does not have the "Certificate Signing" (keyCertSign) bit set, which is why the verification fails; we need to request an intermediate CA certificate with the right key usage bits. Please see "keyCertSign" in RFC 5280.

$ openssl verify -verbose -issuer_checks -purpose sslclient -CAfile 2.pem 1.pem
1.pem: C = US, CN = XXX, O = YYY
error 29 at 0 depth lookup:subject issuer mismatch
...
error 32 at 0 depth lookup:key usage does not include certificate signing
.....
....
error 20 at 0 depth lookup:unable to get local issuer certificate
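
To do the same check programmatically – say, in code that verifies client certificates – a minimal Java sketch using the standard CertPathValidator API, with the same file names as above, might be:

import java.io.FileInputStream;
import java.security.cert.CertPath;
import java.security.cert.CertPathValidator;
import java.security.cert.CertificateFactory;
import java.security.cert.PKIXParameters;
import java.security.cert.TrustAnchor;
import java.security.cert.X509Certificate;
import java.util.Arrays;
import java.util.Collections;

public class VerifyChain {

    public static void main(String[] args) throws Exception {
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        X509Certificate endEntity    = (X509Certificate) cf.generateCertificate(new FileInputStream("1.pem"));
        X509Certificate intermediate = (X509Certificate) cf.generateCertificate(new FileInputStream("2.pem"));
        X509Certificate root         = (X509Certificate) cf.generateCertificate(new FileInputStream("3.pem"));

        // the self-signed root is the trust anchor; PKIX validation fails if the
        // intermediate lacks the keyCertSign usage bit, mirroring the OpenSSL error above
        PKIXParameters params = new PKIXParameters(Collections.singleton(new TrustAnchor(root, null)));
        params.setRevocationEnabled(false); // no CRL / OCSP checks in this illustration

        CertPath path = cf.generateCertPath(Arrays.asList(endEntity, intermediate));
        CertPathValidator.getInstance("PKIX").validate(path, params);
        System.out.println("chain validated");
    }
}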

 


X509 certificate and keyUsage

The keyUsage extension, as delineated in RFC 5280, specifies the purpose of the (public) key contained in the certificate.

For instance:

  1. "keyEncipherment" implies that the public key is used to encrypt private or secret keys.
  2. "digitalSignature" implies that the public key can be used to validate digital signatures.
  3. "keyAgreement" implies that the public key is used for key agreement, as in the DH case. The key agreement algorithm could be ECDH (Elliptic Curve DH), where the public key of the end-entity certificate is an ECDH public key. The certificate could be signed by any normal CA – for example with its ECDSA or RSA private key. So in the case of an ECC certificate, or any certificate containing an ECC public key, one would find the same ECC public key being utilized for key agreement in the ECDH (not ECDHE) case. Note that ECDHE does not require this keyUsage bit to be set.

For the other bits in the keyUsage extension, please see the RFC.
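
As an aside, the standard Java API exposes the extension as a boolean array whose indices follow the RFC's bit order, which makes the bits above easy to inspect. A small sketch:

import java.io.FileInputStream;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;

public class ShowKeyUsage {

    public static void main(String[] args) throws Exception {
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        X509Certificate cert = (X509Certificate) cf.generateCertificate(new FileInputStream(args[0]));

        // RFC 5280 bit order: digitalSignature(0), nonRepudiation(1), keyEncipherment(2),
        // dataEncipherment(3), keyAgreement(4), keyCertSign(5), cRLSign(6), ...
        boolean[] ku = cert.getKeyUsage(); // null when the extension is absent
        if (ku == null) {
            System.out.println("no keyUsage extension");
            return;
        }
        System.out.println("digitalSignature: " + ku[0]);
        System.out.println("keyEncipherment : " + ku[2]);
        System.out.println("keyAgreement    : " + ku[4]);
        System.out.println("keyCertSign     : " + ku[5]);
    }
}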

 


What is the encoding of the SSL certificates on the wire and how is the certificate chain configured?

It is DER, per the TLS v1.2 RFC. I opened WireShark, exported the raw bytes of one of the certificates in the chain transmitted by the server in the SSL / TLS "Certificate" message, decoded it, and validated the DER encoding. This was on an HTTPS connection to an Apache web server.

The other question that web server administrators and writers of certificate verification code need answered is the order of the certificates in the chain sent back by the web server. The RFC provides details on that as well: the sender's certificate must come first, followed by the certificate that certifies it, and so on.
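
A quick way to check the order a given server actually sends, short of exporting bytes from WireShark, is to print the peer chain from a JSSE connection; the JSSE documentation states that getPeerCertificates() returns the peer's own certificate first, followed by its issuers:

import java.security.cert.Certificate;
import java.security.cert.X509Certificate;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class ShowChainOrder {

    public static void main(String[] args) throws Exception {
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        SSLSocket socket = (SSLSocket) factory.createSocket(args[0], 443);
        socket.startHandshake();

        // end-entity certificate first, then the certificate that certifies it, and so on
        for (Certificate c : socket.getSession().getPeerCertificates()) {
            System.out.println(((X509Certificate) c).getSubjectX500Principal());
        }
        socket.close();
    }
}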


Nginx 1.2.x and install Elliptic Curve Cryptography (ECC) support – installation on Linux (./configure options and build for SSL / TLS support and enable HTTPS)

As of this writing, and as far as I know, the pre-compiled nginx binaries for various platforms (RedHat / CentOS or other Linux variants) do not come with ECC support, so you would not be able to utilize ECC-based certificates (ECDHE key exchange or ECDSA authentication). The solution is to compile the Nginx source against an OpenSSL version that has ECC support, such as OpenSSL 1.0.1c or 1.0.1e. As of this writing 1.0.1c has a vulnerability (please see the OpenSSL web site for details), so 1.0.1e is the recommended version.

Compared with compiling the Apache HTTPD web server, I found building Nginx simpler: one only needs to point at the OpenSSL source, and the Nginx build process takes care of building and linking to it.

After downloading Nginx source, run the following to check the options for configure:

./configure --help

This lists all the options: which modules to enable or disable, the locations of dependencies such as OpenSSL if they are not in the obvious places, and so on.

Since we have downloaded the OpenSSL source (1.0.1x) that provides the ECC support into a separate folder, we need to point configure at it:

./configure --prefix=/app/installs/nginx --with-http_ssl_module --with-openssl=/app/source/openssl/openssl-1.0.1c

This means nginx will be installed at "/app/installs/nginx" with the SSL / TLS module enabled, and the location where the OpenSSL source was extracted is specified as well.

Thereafter run the following commands:

make

Switch to root if not already so and

make install

Uncomment the HTTPS / SSL section of the Nginx configuration file, specify the certificates, and you are all set.
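
For reference, the stock configuration file carries a commented-out HTTPS server block; uncommented, and with the directives pointed at an ECC certificate, it looks roughly like this (the paths and the cipher string are placeholders of my choosing):

server {
    listen               443;
    server_name          localhost;

    ssl                  on;
    ssl_certificate      /app/installs/nginx/conf/server-ecc.pem;  # certificate plus chain
    ssl_certificate_key  /app/installs/nginx/conf/server-ecc.key;

    ssl_session_cache    shared:SSL:1m;
    ssl_session_timeout  5m;

    # restrict to ECC-based suites to confirm the new OpenSSL build is being exercised
    ssl_ciphers          ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA;
    ssl_prefer_server_ciphers  on;

    location / {
        root   html;
        index  index.html index.htm;
    }
}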

To check the options for the nginx command line:

nginx -h

To start nginx:

nginx

If you get errors about PCRE at the configure stage or later (error messages replicated below): if you have previously installed PCRE, update the LD_LIBRARY_PATH environment variable to include its library. If you do not have it installed, you can either build it from source (there is a section on this blog on installing PCRE; nginx can also build it for you from the source tree) or install the PCRE development libraries. Both approaches are outlined below.

Error Message 1 (at configure time):


./configure: error: the HTTP rewrite module requires the PCRE library.
You can either disable the module by using --without-http_rewrite_module
option, or install the PCRE library into the system, or build the PCRE library
statically from the source with nginx by using --with-pcre= option.

Error Message 2 (later at run time):


nginx: error while loading shared libraries: libpcre.so.1: cannot open shared object file: No such file or directory

Solution 1:

While configuring nginx, one can specify the location of the PCRE source (8.31 is the version that I used; it can be downloaded from the PCRE website) at the configure step:

./configure ..... ..... ... --with-pcre=/app/source/pcre/pcre-8.31

And repeat the "make, make install" steps as outlined earlier.

Solution 2:

Alternatively if PCRE is already installed then simply point to it (you would need the development libraries):

$ yum search pcre
=========================== Matched: pcre ===========================
opensips-regex.x86_64 : RegExp via PCRE library
pcre.i386 : Perl-compatible regular expression library
pcre.x86_64 : Perl-compatible regular expression library
pcre-devel.i386 : Development files for pcre
pcre-devel.x86_64 : Development files for pcre

Then install the development version:

$ yum install pcre-devel

Time to reconfigure and install Nginx:


$ ./configure ...... [same arguments as above]

A successful "./configure" would have something akin to this output:

$ ./configure .........
checking for OS
+ Linux 2.6.18-128.1.6.el5 x86_64
checking for C compiler ... found
+ using GNU C compiler
+ gcc version: 4.1.2 20080704 (Red Hat 4.1.2-44)
checking for gcc -pipe switch ... found
checking for gcc builtin atomic operations ... found
checking for C99 variadic macros ... found
checking for gcc variadic macros ... found
checking for unistd.h ... found
checking for inttypes.h ... found
checking for limits.h ... found
checking for sys/filio.h ... not found
checking for sys/param.h ... found
checking for sys/mount.h ... found
checking for sys/statvfs.h ... found
checking for crypt.h ... found
checking for Linux specific features
checking for epoll ... found
checking for sendfile() ... found
checking for sendfile64() ... found
checking for sys/prctl.h ... found
checking for prctl(PR_SET_DUMPABLE) ... found
checking for sched_setaffinity() ... found
checking for crypt_r() ... found
checking for sys/vfs.h ... found
checking for nobody group ... found
checking for poll() ... found
checking for /dev/poll ... not found
checking for kqueue ... not found
checking for crypt() ... not found
checking for crypt() in libcrypt ... found
checking for F_READAHEAD ... not found
checking for posix_fadvise() ... found
checking for O_DIRECT ... found
checking for F_NOCACHE ... not found
checking for directio() ... not found
checking for statfs() ... found
checking for statvfs() ... found
checking for dlopen() ... not found
checking for dlopen() in libdl ... found
checking for sched_yield() ... found
checking for SO_SETFIB ... not found
checking for SO_ACCEPTFILTER ... not found
checking for TCP_DEFER_ACCEPT ... found
checking for TCP_KEEPIDLE, TCP_KEEPINTVL, TCP_KEEPCNT ... found
checking for TCP_INFO ... not found
checking for accept4() ... not found
checking for int size ... 4 bytes
checking for long size ... 8 bytes
checking for long long size ... 8 bytes
checking for void * size ... 8 bytes
checking for uint64_t ... found
checking for sig_atomic_t ... found
checking for sig_atomic_t size ... 4 bytes
checking for socklen_t ... found
checking for in_addr_t ... found
checking for in_port_t ... found
checking for rlim_t ... found
checking for uintptr_t ... uintptr_t found
checking for system byte ordering ... little endian
checking for size_t size ... 8 bytes
checking for off_t size ... 8 bytes
checking for time_t size ... 8 bytes
checking for setproctitle() ... not found
checking for pread() ... found
checking for pwrite() ... found
checking for sys_nerr ... found
checking for localtime_r() ... found
checking for posix_memalign() ... found
checking for memalign() ... found
checking for mmap(MAP_ANON|MAP_SHARED) ... found
checking for mmap("/dev/zero", MAP_SHARED) ... found
checking for System V shared memory ... found
checking for POSIX semaphores ... not found
checking for POSIX semaphores in libpthread ... found
checking for struct msghdr.msg_control ... found
checking for ioctl(FIONBIO) ... found
checking for struct tm.tm_gmtoff ... found
checking for struct dirent.d_namlen ... not found
checking for struct dirent.d_type ... found
checking for sysconf(_SC_NPROCESSORS_ONLN) ... found
checking for openat(), fstatat() ... found
checking for getaddrinfo() ... found
checking for PCRE library ... found
checking for PCRE JIT support ... not found
checking for OpenSSL library ... found
checking for zlib library ... found
creating objs/Makefile

Configuration summary
+ using system PCRE library
+ using system OpenSSL library [or the source location]
+ md5: using OpenSSL library
+ sha1: using OpenSSL library
+ using system zlib library

nginx path prefix: "/app/....."
nginx binary file: "/app/...."
nginx configuration prefix: "/app/....."
nginx configuration file: "/app/....."
nginx pid file: "/app/.../nginx.pid"
nginx error log file: "/app/..../logs/error.log"
nginx http access log file: "/app/...../logs/access.log"
nginx http client request body temporary files: "client_body_temp"
nginx http proxy temporary files: "proxy_temp"
nginx http fastcgi temporary files: "fastcgi_temp"
nginx http uwsgi temporary files: "uwsgi_temp"
nginx http scgi temporary files: "scgi_temp"

And then proceed with the make and make install.


JMeter (Java) and DNS and SSL and CRL and OCSP

While utilizing JMeter for some load testing of a web service over HTTPS, I wanted to confirm the external invocations being made by the program for OCSP, CRL, and so on. The easiest way is to use the "strace" command to display the network system calls:

strace -f -s 1024 -e trace=network ./jmeter.sh

[pid  7361] connect(86, {sa_family=AF_INET6, sin6_port=htons(443), inet_pton(AF_INET6, "::ffff:10.0.0.xx", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = 0
[pid  7361] getsockname(86, {sa_family=AF_INET6, sin6_port=htons(35606), inet_pton(AF_INET6, "::ffff:10.0.0.xx", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, [28]) = 0
[pid  7361] socket(PF_INET, SOCK_DGRAM|SOCK_NONBLOCK, IPPROTO_IP) = 87
[pid  7361] connect(87, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("10.0.0.xx")}, 16) = 0
[pid  7361] sendto(87, "\226q\1\0\0\1\0\0\0\0\0\0\00274\0010\0010\00210\7in-addr\4arpa\0\0\f\0\1", 40, MSG_NOSIGNAL, NULL, 0) = 40

The snippet above shows a DNS call to port 53 of the name server. There are no OCSP calls being made either; by default all of that is disabled. To allow OCSP calls and CRL checking, one needs to set the appropriate system properties. Please see: https://blogs.oracle.com/xuelei/entry/enable_ocsp_checking

A snippet to enable OCSP and CRL checking:

// params is an instance of java.security.cert.PKIXParameters
// enable revocation checking for PKIX path validation
params.setRevocationEnabled(true);
// enable OCSP checking (java.security.Security)
Security.setProperty("ocsp.enable", "true");
// enable CRL checking via the CRL Distribution Points extension
System.setProperty("com.sun.security.enableCRLDP", "true");


RFC 5077: TLS Session Resumption without Server-Side State

If you view the output of ssldump and see evidence of SSL session resumption even though a session cache is not configured on the server, you might be perplexed. I was, and after a little investigation was able to attribute it to the implementation of RFC 5077.

Essentially, the client sends an empty SessionTicket extension to the server (in the ClientHello message), and the server responds with an empty one (in the ServerHello message) if it supports such resumption. Later, after the computation of the master secret, the server encrypts it along with other session state, such as the cipher suite, into a "SessionTicket" and returns it in the "NewSessionTicket" message to the client right before the ChangeCipherSpec message.

A screen shot of the "NewSessionTicket" message / packet from the server to the client, captured in WireShark, was included as an image in the original post.


OpenSSL's s_time command: a simple and short tutorial

A succinct tutorial on s_time and the interpretation of its results

One can install OpenSSL and do a quick check of the performance of a remote server: the s_time invocation will attempt to make as many connections as possible within a specified period of time. The default period is 30 seconds, but one can override that with the "-time" option. With s_time, we can get the number of connections per second for full handshakes as well as resumed handshakes. For details on what a "handshake" involves, one could refer to other texts on the web, such as the Wikipedia page on "Secure Sockets Layer", which has a succinct explanation of the different flavors of handshakes, including "resumed" handshakes; see the references section below for the link.

The key facet I would like to emphasize is that this command does not hit the server with concurrent connections; it is sequential, and it measures how many connections complete, and the time they consume, within the specified period (default 30 seconds). For instance, we infer from the run below that for "new" connections, the total number of connections made was 107 and the total time expended in those connections was 1.20 seconds (CPU user time). The test was run for around 30 seconds.

openssl s_time  -cipher 'RSA' -connect host:443 -CAfile chain.pem -www /

Collecting connection statistics for 30 seconds
ttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttt

107 connections in 1.20s; 89.17 connections/user sec, bytes read 44298
107 connections in 31 real seconds, 414 bytes read per connection

Now timing with session id reuse.
starting
trrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr

126 connections in 0.07s; 1800.00 connections/user sec, bytes read 52164
126 connections in 31 real seconds, 414 bytes read per connection

From the snippet above, one can also see that in the "reuse" (session resumption) case the number of connections has increased to 126, consuming only 0.07 seconds of CPU user time, which extrapolates to 126 / 0.07 ≈ 1800 connections per user second. Note that for the rest of the 31 seconds the program was busy in network I/O and the like.

Also note that if an SSL session cache is not set up on the server, s_time will display the same results as for "new" connections. This command does not support RFC 5077 (TLS session resumption without server-side state).

References

  • http://en.wikipedia.org/wiki/Secure_Sockets_Layer [Provides information on SSL / TLS handshakes]
  • http://tools.ietf.org/html/rfc5077 [RFC on TLS Session Resumption without Server-Side State]