Friday, February 1, 2013



The SOAPAction field of the HTTP header indicates the intent of a SOAP HTTP request. The value of this field is a URI; there are no requirements on its format or resolvability. An HTTP client that sends a SOAP 1.1 request has to include this header field.


The presence of the SOAPAction field of the HTTP header can be used by firewalls to filter SOAP requests.
A field value that is an empty string ("") means that the intent of the request is given by the HTTP request URI itself (examples line 3).
No field value at all (examples line 4) means that no hint about the intent of the message is provided.
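A minimal sketch of setting this header with java.net.HttpURLConnection (the endpoint and action URIs here are made-up placeholders, not a real service):

```java
import java.net.HttpURLConnection;
import java.net.URL;

class SoapActionDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint and action URI, for illustration only
        URL endpoint = new URL("http://example.com/StockQuote");
        HttpURLConnection conn = (HttpURLConnection) endpoint.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
        // The quoted URI hints at the intent of the request; firewalls can filter on it
        conn.setRequestProperty("SOAPAction", "\"http://example.com/GetLastTradePrice\"");
        // An empty string ("") would mean: derive the intent from the request URI itself
        System.out.println(conn.getRequestProperty("SOAPAction"));
        // Note: nothing is sent here; conn.connect() is never called
    }
}
```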


Saturday, January 26, 2013

Enums: why can't we override equals

In the java.lang.Enum class, the equals method looks like this:

 public final boolean equals(Object other) {
     return this == other;
 }

Overriding it with anything else would not make sense, hence it is final. For enums, equals() and == are the same thing.
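A quick demonstration (the Color enum is just an example of my own) that the two comparisons always agree, since each enum constant is a singleton:

```java
class EnumEqualsDemo {
    enum Color { RED, GREEN }

    public static void main(String[] args) {
        Color a = Color.RED;
        Color b = Color.valueOf("RED"); // returns the same singleton instance
        System.out.println(a == b);      // true
        System.out.println(a.equals(b)); // true
    }
}
```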

Java Generic < ? > vs < T >

Using "?" is the same as "any", whereas "T" means "a specific type". So, compare these interfaces:

public interface StrictClass<T> {
    T doFunction(Class<T> clazz); // must return exactly the caller's T
}

public interface EasyClass {
    Object doFunction(Class<?> clazz); // accepts any Class, promises no specific type
}

(Note: class is a reserved word, so the parameter is renamed clazz, and a wildcard cannot be used as a return type, so EasyClass returns Object.)

When to use which one

There are also use cases for choosing <T> over <?> (or vice versa) that apply when you don't add a type parameter to the class enclosing the method. For example, consider the difference between

public boolean add(List<T> j) {
    boolean t = true;
    for (T b : j) {
        if (b instanceof JLabel) {
            t = t && labels.add((JLabel) b);
        }
    }
    return t;
}

and

public boolean add(List<?> j) {
    boolean t = true;
    for (Object b : j) {
        if (b instanceof JLabel) {
            t = t && labels.add((JLabel) b);
        }
    }
    return t;
}

The first method will actually not compile UNLESS you add an appropriate type parameter to the enclosing class, whereas the second method WILL compile regardless of whether the enclosing class declares a type parameter.

Using Type Tokens to Retrieve Generic Parameters

Interesting read
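The gist of the type-token trick, as I understand it, is that while generic arguments are erased from objects, a subclass records its superclass's actual type arguments in its class metadata, where reflection can still read them. A minimal sketch (the TypeToken class here is my own reconstruction of the pattern, not a library class):

```java
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.List;

// "Super type token": subclassing captures the generic argument in the
// class file, so reflection can recover it despite erasure.
abstract class TypeToken<T> {
    private final Type type;

    protected TypeToken() {
        // getGenericSuperclass() preserves the actual type argument
        ParameterizedType superclass =
                (ParameterizedType) getClass().getGenericSuperclass();
        this.type = superclass.getActualTypeArguments()[0];
    }

    public Type getType() { return type; }
}

class TypeTokenDemo {
    public static void main(String[] args) {
        // The anonymous subclass records List<String> in its metadata
        Type t = new TypeToken<List<String>>() {}.getType();
        System.out.println(t); // java.util.List<java.lang.String>
    }
}
```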

Type Erasure: Why the information is dropped at compile time

We all know that, at compile time, the compiler has full type information available, but this information is intentionally dropped when the bytecode is generated, in a process known as type erasure.

So, to put it in different words, generics are a feature offered by javac :)

One needs to realize that the concept of type erasure derives from the need for compatibility with previous versions of Java:
  • Source compatibility (Nice to have...)
  • Binary compatibility (Must have!)
  • Migration compatibility
    • Existing programs must continue to work
    • Existing libraries must be able to use generic types
    • Must have!
This was done this way due to compatibility concerns. The intention of the language designers was to provide full source code compatibility and full binary compatibility between versions of the platform. Had it been implemented differently, you would have had to recompile your legacy applications when migrating to newer versions of the platform. The way it was done, all method signatures are preserved (source compatibility) and you don't need to recompile anything (binary compatibility).
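Erasure is easy to observe directly: parameterized lists share a single runtime class, and raw-type references to them still compile, which is exactly what keeps pre-generics code working.

```java
import java.util.ArrayList;
import java.util.List;

class ErasureDemo {
    public static void main(String[] args) {
        List<String> strings = new ArrayList<>();
        List<Integer> ints = new ArrayList<>();
        // After erasure, both are plain ArrayList at runtime:
        System.out.println(strings.getClass() == ints.getClass()); // true
        // A raw-type reference still compiles (with an unchecked warning),
        // which is what migration compatibility requires:
        List raw = strings;
        System.out.println(raw.getClass().getName()); // java.util.ArrayList
    }
}
```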

Sunday, January 13, 2013

Hibernate Query vs Criteria Performance

For some reason, HQL seems to be faster than Criteria.

If we write a query like

select count(*) from R r where r.ISREPLACEDBY = 0 and r.STATUS='OK' and r.A = ? and r.C in (select distinct RC from CX cx where cx.FROMDATE >= ? and cx.FROMDATE <= ?)

using both HQL and the Criteria API, then the HQL version will run much faster.

It seems that the Criteria API generates new variable names each time a prepared statement is executed. The database (in our case, DB2) then calculates a new query execution plan on every execution. HQL, on the other hand, uses the same variable names each time, allowing the database to reuse its query execution plans.

Another issue I noticed when using HQL: in one place I had written HQL like

from employee emp where

and in another place, HQL like

from employee e where

Although I was using a query-level cache, the results were still not being cached, because the key for the query cache is the query string itself: two textually different queries never share a cache entry.
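A toy stand-in for that cache behavior (this is my own sketch, not Hibernate's actual cache implementation; the real key also includes bound parameter values):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Models only the query-string part of a query cache key.
class QueryCacheSketch {
    public static void main(String[] args) {
        Map<String, List<Long>> queryCache = new HashMap<>();

        // First variant of the query populates the cache
        queryCache.put("from employee emp where emp.id = :id", List.of(42L));

        // A textually different alias is a different key: cache miss
        System.out.println(
            queryCache.containsKey("from employee e where e.id = :id"));   // false
        System.out.println(
            queryCache.containsKey("from employee emp where emp.id = :id")); // true
    }
}
```

The fix in my case was simply to use one consistent query string everywhere.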

Sunday, December 23, 2012

Hadoop: Was the job really successful?

An accurate determination of success is critical. 

The check for success primarily involves ensuring that the number of records output is roughly the same as the number of records input. Hadoop jobs are generally dealing with bulk real world data, which is never 100% clean, so a small error rate is generally acceptable.

It is good practice to wrap your map and reduce methods in a try block that catches Throwable and reports on what was caught.

Each call on the reporter object or the output collector provides a heartbeat to the framework:

    reporter.incrCounter("Input", "total records", 1);
    reporter.incrCounter("Input", "parsed records", 1);
    reporter.incrCounter("Input", "number format", 1);
    reporter.incrCounter("Input", "Exception", 1);
    // better to use enums to avoid spelling mistakes or trailing spaces
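A sketch of the enum-based alternative (the enum name and constants are my own; the old org.apache.hadoop.mapred Reporter API has an incrCounter(Enum<?>, long) overload that accepts it directly):

```java
// Using an enum as the counter key avoids typos in the group/name strings.
// With the old mapred API this would be used as:
//   reporter.incrCounter(RecordCounter.PARSED_RECORDS, 1);
enum RecordCounter {
    TOTAL_RECORDS, PARSED_RECORDS, NUMBER_FORMAT, EXCEPTION
}
```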

if (format != 0) {
    logger.warn("There were " + format + " keys that were not "
            + "transformable to long values");
}

/* Check to see if we had any unexpected exceptions. This usually indicates
 * some significant problem, either with the machine running the task that
 * had the exception, or the map or reduce function code. Log an error for
 * each type of exception with the count. */
if (exceptions > 0) {
    Counters.Group exceptionGroup = jobCounters.getGroup(
            TransformKeysToLongMapper.EXCEPTIONS);
    for (Counters.Counter counter : exceptionGroup) {
        logger.error("There were " + counter.getCounter()
                + " exceptions of type " + counter.getDisplayName());
    }
}

if (total == parsed) {
    logger.info("The job completed successfully.");
    System.exit(0);
}

// We had some failures in handling the input records. Did enough records
// process for this to be a successful job? Is 90% good enough?
if (total * .9 <= parsed) {
    logger.warn("The job completed with some errors, "
            + (total - parsed) + " out of " + total);
    System.exit(0);
}

logger.error("The job did not complete successfully, too many errors "
        + "processing the input, only " + parsed + " of " + total
        + " records completed");
System.exit(1);