Saturday, July 15, 2017

SoftReference vs WeakReference


In Java, ordered from strongest to weakest, there are four kinds of references: Strong, Soft, Weak and Phantom.
  • A Strong reference is a normal reference that protects the referred object from collection by GC, i.e. it is never garbage collected while the reference exists.
  • A Soft reference is eligible for collection by the garbage collector, but probably won't be collected until its memory is needed for another use, i.e. it is garbage collected before an OutOfMemoryError is thrown.
  • A Weak reference does not protect the referenced object from collection by GC, i.e. it is garbage collected as soon as no Strong or Soft references remain.
  • A Phantom reference refers to an object that has already been finalized, but whose allocated memory has not yet been reclaimed.
Analogy: Assume the JVM is a kingdom, an Object is the king of the kingdom, and GC is an attacker of the kingdom who tries to kill the king (the object).
  1. When the King is Strong, GC cannot kill him.
  2. When the King is Soft, GC attacks him, but the King rules the kingdom under protection until memory is actually needed.
  3. When the King is Weak, GC attacks him and the King rules the kingdom without protection; he can be killed at any moment.
  4. When the King is Phantom, GC has already killed him, but the king is still reachable through his soul.

Weak references are collected eagerly. If the GC finds that an object is weakly reachable (reachable only through weak references), it will clear the weak references to that object immediately.

SoftReferences on the other hand are good for caching external, recreatable resources as the GC typically delays clearing them.
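As an illustration (not from the original post), here is a minimal sketch using only the standard java.lang.ref classes that creates each kind of reference; what get() returns after System.gc() depends on the JVM and on current memory pressure:

import java.lang.ref.PhantomReference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.SoftReference;
import java.lang.ref.WeakReference;

public class ReferenceKinds {

    public static void main(String[] args) {
        StringBuilder strong = new StringBuilder("payload");          // strong reference

        SoftReference<StringBuilder> soft = new SoftReference<StringBuilder>(strong);
        WeakReference<StringBuilder> weak = new WeakReference<StringBuilder>(strong);

        // Phantom references require a ReferenceQueue, and their get() always returns null.
        ReferenceQueue<StringBuilder> queue = new ReferenceQueue<StringBuilder>();
        PhantomReference<StringBuilder> phantom =
                new PhantomReference<StringBuilder>(strong, queue);

        strong = null;   // drop the only strong reference
        System.gc();     // request (not force) a collection

        System.out.println("soft    -> " + soft.get());    // usually still the object
        System.out.println("weak    -> " + weak.get());    // usually null after GC
        System.out.println("phantom -> " + phantom.get()); // always null by design
    }
}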

Principle : a weak reference is tied to garbage collection. Normally, an object that has one or more references to it is not eligible for garbage collection.

This principle does not apply to weak references. If an object is reachable from other objects only through weak references, it is ready for garbage collection.

Let's look at the example below: we have a HashMap whose key references an object.


import java.util.HashMap;

public class Test {

    public static void main(String args[]) {
        // Employee and EmployeeVal are assumed to be simple value classes.
        HashMap<Employee, EmployeeVal> aMap =
                new HashMap<Employee, EmployeeVal>();

        Employee emp = new Employee("Vinoth");
        EmployeeVal val = new EmployeeVal("Programmer");

        aMap.put(emp, val);

        emp = null;          // drop our strong reference; the map still holds one

        System.gc();
        System.out.println("Size of Map: " + aMap.size());
    }
}

Now, during the execution of the program we have set emp = null. Holding the key in the Map makes no sense here, as the variable that referred to it is now null. Yet in this situation the object is not garbage collected, because the HashMap itself still holds a strong reference to the key.

WeakHashMap
WeakHashMap is one where the entries (key-to-value mappings) will be removed when it is no longer possible to retrieve them from the Map.

Let me show the same example with a WeakHashMap.

import java.util.WeakHashMap;

public class Test {

    public static void main(String args[]) {
        WeakHashMap<Employee, EmployeeVal> aMap =
                new WeakHashMap<Employee, EmployeeVal>();

        Employee emp = new Employee("Vinoth");
        EmployeeVal val = new EmployeeVal("Programmer");

        aMap.put(emp, val);

        emp = null;          // the key is now only weakly reachable

        System.gc();
        int count = 0;
        while (0 != aMap.size()) {
            ++count;
            System.gc();     // keep requesting GC until the entry is cleared
        }
        System.out.println("Took " + count
                + " calls to System.gc() to result in weakHashMap size of : "
                + aMap.size());
    }
}

Output : Took 20 calls to System.gc() to result in weakHashMap size of : 0.

WeakHashMap holds only weak references to its keys, not strong references like other Map implementations. Even when you use a WeakHashMap, there are situations you have to watch for in which the key or the value is still strongly referenced elsewhere. This can be avoided by wrapping the object in a WeakReference.



import java.lang.ref.WeakReference;
import java.util.HashMap;

public class Test {

    public static void main(String args[]) {
        HashMap<Employee, EmployeeVal> map =
                new HashMap<Employee, EmployeeVal>();
        WeakReference<HashMap<Employee, EmployeeVal>> aMap =
                new WeakReference<HashMap<Employee, EmployeeVal>>(map);

        map = null;          // the HashMap is now only weakly reachable

        HashMap<Employee, EmployeeVal> strongRef;
        while (null != (strongRef = aMap.get())) {
            strongRef.put(new Employee("Vinoth"),
                    new EmployeeVal("Programmer"));
            System.out.println("Size of aMap " + strongRef.size());
            strongRef = null;   // drop the temporary strong reference before requesting GC
            System.gc();
        }
        System.out.println("Its garbage collected");
    }
}

Soft References

Soft Reference is slightly stronger than a weak reference. A soft reference allows garbage collection, but begs the garbage collector to clear it only if there is no other option.

The garbage collector does not aggressively collect softly reachable objects the way it does with weakly reachable ones -- instead it only collects softly reachable objects if it really "needs" the memory. Soft references are a way of saying to the garbage collector, "As long as memory isn't too tight, I'd like to keep this object around. But if memory gets really tight, go ahead and collect it and I'll deal with that." The garbage collector is required to clear all soft references before it can throw an OutOfMemoryError.
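As a sketch of that idea (not from the original post; the ImageCache and loadImage names are made up), a soft-reference-based cache might look like this:

import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;

// Hypothetical cache: entries live as long as memory is plentiful, but the GC
// is free to clear them before an OutOfMemoryError would otherwise be thrown.
public class ImageCache {

    private final Map<String, SoftReference<byte[]>> cache =
            new HashMap<String, SoftReference<byte[]>>();

    public byte[] get(String key) {
        SoftReference<byte[]> ref = cache.get(key);
        byte[] value = (ref == null) ? null : ref.get();
        if (value == null) {                       // never cached, or cleared by the GC
            value = loadImage(key);                // recreate the external resource
            cache.put(key, new SoftReference<byte[]>(value));
        }
        return value;
    }

    private byte[] loadImage(String key) {
        return new byte[1024];                     // placeholder for the expensive load
    }
}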

Spring Transaction Management

When do you want to use programmatic transaction management?
Programmatic transaction management is usually a good idea only if you have a small number of transactional operations. 

On the other hand, if your application has numerous transactional operations, declarative transaction management is usually worthwhile.
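For reference, here is a minimal sketch of the programmatic style using Spring's TransactionTemplate (the CheckoutService name and the commented-out business call are placeholders for this example):

import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.TransactionStatus;
import org.springframework.transaction.support.TransactionCallbackWithoutResult;
import org.springframework.transaction.support.TransactionTemplate;

public class CheckoutService {

    private final TransactionTemplate txTemplate;

    public CheckoutService(PlatformTransactionManager txManager) {
        this.txTemplate = new TransactionTemplate(txManager);
    }

    public void checkout(final String item, final String user) {
        txTemplate.execute(new TransactionCallbackWithoutResult() {
            @Override
            protected void doInTransactionWithoutResult(TransactionStatus status) {
                // bookShop.purchase(item, user);   // business logic runs inside the transaction
                // status.setRollbackOnly();        // roll back explicitly on failure if needed
            }
        });
    }
}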

Transaction management can be applied at the class level or the method level.
You may apply the @Transactional annotation at the method level or the class level. When applying this annotation to a class, all of the public methods within this class will be defined as transactional. Although you can apply @Transactional to interfaces or method declarations in an interface, it's not recommended because it may not work properly with class-based proxies (i.e., CGLIB proxies).
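A small sketch of the class-level form (BookShopService is a made-up name; method-level settings still override the class-level ones):

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
@Transactional                       // every public method of this class is transactional
public class BookShopService {

    public void purchase(String isbn, String username) {
        // runs in a transaction defined by the class-level annotation
    }

    @Transactional(readOnly = true)  // overrides the class-level settings for this method
    public int countInStock(String isbn) {
        return 0;
    }
}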

Transaction Propagation

When a transactional method is called by another method, it is necessary to specify how the transaction should be propagated. For example, the method may continue to run within the existing transaction, or it may start a new transaction and run within its own transaction.

Solution : A transaction's propagation behavior can be specified by the propagation transaction attribute. Spring defines seven propagation behaviors.


@Transactional
public void checkout(List<String> items, String user) {
    for (String item : items) {
        bookShop.purchase(item, user);
    }
}

Propagation : REQUIRED (default)



With REQUIRED (the default), if the purchase() method is called from within an existing transaction, it joins and runs within that transaction. However, if it is called by a non-transactional method and there's no existing transaction in progress, it will start a new transaction and run within its own transaction.

Propagation : REQUIRES_NEW 
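With REQUIRES_NEW, purchase() always starts its own transaction, suspending the caller's transaction if one is in progress. A sketch of how it might be declared (the BookShop class name is assumed from the checkout() example above):

import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

public class BookShop {

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void purchase(String item, String user) {
        // Runs in a fresh transaction, so a rollback here does not undo
        // work already done in the outer checkout() transaction.
    }
}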



Isolation Level Problem : When multiple transactions of the same application or different applications are operating concurrently on the same dataset, many unexpected problems may arise. You must specify how you expect your transactions to be isolated from one another.

Solution :
The problems caused by concurrent transactions can be categorized into four types:
  • Dirty read: For two transactions T1 and T2, T1 reads a field that has been updated by T2 but not yet committed. Later, if T2 rolls back, the field read by T1 will be temporary and invalid.
  • Nonrepeatable read: For two transactions T1 and T2, T1 reads a field and then T2 updates the field. Later, if T1 reads the same field again, the value will be different.
  • Phantom read: For two transactions T1 and T2, T1 reads some rows from a table and then T2 inserts new rows into the table. Later, if T1 reads the same table again, there will be additional rows.
  • Lost updates: For two transactions T1 and T2, both select the same row for update and, based on the row's state, update it. One update then overwrites the other, even though the second transaction to commit should have waited until the first one committed before performing its selection.
Levels  
  • READ_UNCOMMITTED is the lowest isolation level; it allows a transaction to read uncommitted changes made by other transactions, so dirty reads are possible.
  • READ_COMMITTED only lets a transaction read changes that other transactions have already committed, which prevents dirty reads.
  • REPEATABLE_READ additionally prevents nonrepeatable reads, and SERIALIZABLE, the highest level, prevents phantom reads as well (a sketch of setting the level follows below).
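The sketch below (ProductService and updateStock are made-up names) simply sets the level with the isolation attribute of @Transactional:

import org.springframework.transaction.annotation.Isolation;
import org.springframework.transaction.annotation.Transactional;

public class ProductService {

    // READ_COMMITTED prevents dirty reads; nonrepeatable and phantom reads remain possible.
    @Transactional(isolation = Isolation.READ_COMMITTED)
    public void updateStock(String isbn, int delta) {
        // update the row inside the transaction
    }
}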

Setting the Rollback Transaction Attribute
By default, only unchecked exceptions (i.e., of type RuntimeException and Error) will cause a transaction to roll back, while checked exceptions will not. Sometimes, you may wish to break this rule and set your own exceptions for rolling back.

Solution
The exceptions that cause a transaction to roll back or not can be specified by the rollback transaction attribute. Any exceptions not explicitly specified in this attribute will be handled by the default rollback rule (i.e., rolling back for unchecked exceptions and not rolling back for checked exceptions)


@Transactional(propagation = Propagation.REQUIRES_NEW,
               rollbackFor = IOException.class,
               noRollbackFor = ArithmeticException.class)
public void purchase(String isbn, String username) throws Exception {
    throw new ArithmeticException();   // listed in noRollbackFor, so the transaction is not rolled back
    // throw new IOException();        // listed in rollbackFor, so the transaction would be rolled back
}

Ques : Will the transaction be rolled back automatically?


@Transactional
public void operation() throws Exception {
    entityManager.persist(new User("Dima"));
    throw new Exception();   // checked exception
}


Ans : No.

Although the EJB container's default behavior automatically rolls back the transaction on a system exception (usually a runtime exception), EJB CMT does not roll back the transaction automatically on an application exception (that is, a checked exception other than java.rmi.RemoteException). While the Spring default behavior for declarative transaction management follows the EJB convention (rollback is automatic only on unchecked exceptions), it is often useful to customize this behavior.

Okaaay… So, if you expect that a checked exception will be thrown in your code, you had better use rollbackFor in this way:


@Transactional(rollbackFor = Exception.class)
public void operation() throws Exception {
    entityManager.persist(new User("Dima"));
    throw new Exception();   // now causes a rollback
}

Ques : Will a rollback happen if there is an exception in the below method?

@Transactional
public Result doStuff() {
    Result res = null;
    try {
        // do stuff
    } catch (Exception e) {
        // the exception is swallowed here and never reaches the transaction proxy
    }
    return res;
}


If there is an exception in the method doStuff, the transaction isn't rolled back: the exception is caught inside the method, so it never propagates to the transactional proxy that would trigger the rollback.

declarative approach



@Transactional(rollbackFor={MyException1.class, MyException2.class, ....})
public Result doStuff(){
   ...
}
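Alternatively (a sketch, not from the original post), the transaction can be marked for rollback programmatically inside the catch block using Spring's TransactionAspectSupport; Result is the same placeholder type as in the question above:

import org.springframework.transaction.annotation.Transactional;
import org.springframework.transaction.interceptor.TransactionAspectSupport;

public class StuffService {

    @Transactional
    public Result doStuff() {
        Result res = null;
        try {
            // do stuff
        } catch (Exception e) {
            // swallow the exception, but still mark the current transaction for rollback
            TransactionAspectSupport.currentTransactionStatus().setRollbackOnly();
        }
        return res;
    }
}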











Core Java : BrushUp

Point 1 : Always override hashCode() when you override equals()
There is a contract between equals() and hashCode(): equals() must be reflexive, symmetric, transitive, consistent, and return false for null, and two objects that are equal must return the same hash code. For example:
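A minimal sketch, reusing the Employee class name from the examples above (the real class may look different):

import java.util.Objects;

public final class Employee {

    private final String name;

    public Employee(String name) {
        this.name = name;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Employee)) return false;
        return Objects.equals(name, ((Employee) o).name);
    }

    @Override
    public int hashCode() {
        return Objects.hash(name);   // equal objects must produce equal hash codes
    }
}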

Point 2 : minimize accessibility of classes and members

Point 3: In public classes, use accessor methods, not public fields

Point 4: Minimize mutability

Point 5: Favor composition over inheritance
  1. One reason for favoring Composition over Inheritance in Java is the fact that Java doesn't support multiple inheritance.
  2. Composition offers better testability of a class than Inheritance. If one class is composed of another class, you can easily create a Mock Object representing the composed class for the sake of testing (see the sketch after this list). Inheritance doesn't provide this luxury: in order to test a derived class, you also need its superclass.
  3. Though both Composition and Inheritance allow you to reuse code, one of the disadvantages of Inheritance is that it breaks encapsulation. If a subclass depends on superclass behavior for its operation, it suddenly becomes fragile: when the behavior of the superclass changes, functionality in the subclass may break without any change on its part.
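A minimal sketch of point 2 (all names here are made up for illustration): the composed dependency can be replaced by a stub in a test.

// Order is composed of a PriceCalculator instead of extending one,
// so a test can hand it a stub implementation.
interface PriceCalculator {
    double price(String item);
}

class Order {

    private final PriceCalculator calculator;   // composed, not inherited

    Order(PriceCalculator calculator) {
        this.calculator = calculator;
    }

    double total(String item, int quantity) {
        return calculator.price(item) * quantity;
    }
}

class OrderTest {
    public static void main(String[] args) {
        Order order = new Order(item -> 10.0);        // lambda acts as the stub calculator
        System.out.println(order.total("book", 3));   // prints 30.0
    }
}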

Item 6 : Prefer interface over Abstract Class
1) can implement more than one interface
2) abstract class may contain state (data members) and/or implementation (methods)

http://codeofdoom.com/wordpress/2009/02/12/learn-this-when-to-use-an-abstract-class-and-an-interface/
  1. *) Abstract classes allow for a default function definition: when you want to modify common code, every subclass picks up the new code.
  2. *) It is a situation of "Is-A" vs "Can-Do-This". Objects that extend an Abstract class "Is-A" the base class. Objects that implement an interface "Can-Do-This" (see the sketch below).
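A tiny sketch of both points (names are made up): the abstract class supplies default code that every "Is-A" subclass inherits, while the interface only states a capability.

abstract class Animal {
    void breathe() {                          // default code shared by every subclass
        System.out.println("breathing");
    }
    abstract void makeSound();
}

interface Swimmer {                           // "Can-Do-This" capability
    void swim();
}

// A Dog "Is-A" Animal and "Can-Do" swimming.
class Dog extends Animal implements Swimmer {
    @Override
    void makeSound() { System.out.println("woof"); }

    @Override
    public void swim() { System.out.println("paddling"); }
}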

Item 7 : Why use Enums
  1. If you use enums instead of integers (or String codes), you increase compile-time checking and avoid errors from passing in invalid constants, and you document which values are legal to use. For example, if a method takes an int it will accept any integer, but if you have 5 document types you can make an enum of exactly those 5 types (as sketched below).
  2. Enums can be used directly in switch statements.
  3. An enum can be used to implement a singleton.
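A short sketch of points 1 and 2 (DocumentType and its constants are invented for the example):

// Only these five document types can ever be passed in, and the compiler enforces it,
// unlike an int code which would accept any integer.
public enum DocumentType {
    INVOICE, RECEIPT, CONTRACT, REPORT, MEMO;

    public String describe() {
        switch (this) {                       // enums work directly in switch statements
            case INVOICE:  return "billing document";
            case CONTRACT: return "legal document";
            default:       return "general document";
        }
    }
}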

Item 8 : Marker Interfaces are used to define types
What purpose does the Cloneable interface serve?

When JVM sees a clone() method being invoked on an object, it first verifies if the underlying class has implemented the 'Cloneable' interface or not. If not, then it throws the exception CloneNotSupportedException.


// Conceptual pseudocode for what Object.clone() does internally:
protected Object clone() throws CloneNotSupportedException {
    if (this instanceof Cloneable)
        return nativeCloneImpl();          // native field-by-field copy
    else
        throw new CloneNotSupportedException();
}


Item 9 : Beware the performance of string concatenation

The string concatenation operator (+) is a convenient way to combine a few strings into one. It is fine for generating a single line of output or for constructing the string representation of a small, fixed-size object, but it does not scale. Using the string concatenation operator repeatedly to concatenate n strings requires time quadratic in n. It is an unfortunate consequence of the fact that strings are immutable (Item 15). When two strings are concatenated, the contents of both are copied.

For example, consider the following method that constructs a string representation of a billing statement by repeatedly concatenating a line for each item:

// Inappropriate use of string concatenation - Performs horribly!

public String statement() {
    String result = "";
    for (int i = 0; i < numItems(); i++)
        result += lineForItem(i);  // String concatenation
    return result;
}

This method performs abysmally if the number of items is large. To achieve acceptable performance, use a StringBuilder in place of a String to store the statement under construction. (The StringBuilder class, added in release 1.5, is an unsynchronized replacement for StringBuffer, which is now obsolete.)


public String statement() {
    StringBuilder b = new StringBuilder(numItems() * LINE_WIDTH);
    for (int i = 0; i < numItems(); i++)
        b.append(lineForItem(i));
    return b.toString();
}

The difference in performance is dramatic

Item 10 : Java Parameter
https://stackoverflow.com/questions/3108182/using-parameter-that-implements-multiple-interfaces-pre-generics

Item 11 : difference between T & ?
  • ? is a wildcard and means any subclass of ONEITEMInterface, including ONEITEMInterface itself.
  • T is a specific implementation of ONEITEMInterface in this case.
  • Since ? is a wildcard, there is no relation between the ? in your class declaration and the ? in your method declaration, hence it won't compile. Just List<?> getONEITEM(); will compile, though.

The first scenario means the entire class can handle exactly one type of Bar per instance.


interface Foo<T extends Bar> {
     List<T> get();
}

The second scenario allows each instance to operate on any subtype of Bar


interface Foo {
     List<? extends Bar> get();
}