Archive for June, 2012


Testing of Java Mail API

Posted by eugene as Java

In this article I want to show one simple way to test code which uses the Java Mail API.

The difficulties in testing such code are as follows:

  • message sending is implemented via the static javax.mail.Transport.send method, which is difficult to mock
  • a mail server through which the messages will be sent is required for testing
  • you have to check the messages in a mail client in order to compare what has been sent with what has been received
  • if several developers launch the tests simultaneously, duplicates will reach the server; to avoid conflicts the tests have to be complicated so that a unique identifier is generated, by which the messages can then be found

You can certainly mock javax.mail.Transport.send; there are lots of mock frameworks which allow you to do this. But there is a better and easier way!

The thing is that the Java Mail API developers made it possible to plug in custom service providers via the META-INF/javamail.providers file. I won’t get into the technical details because I do not know them :)

Fortunately, there are people in the world who have already thought about this and written such a mock provider: mock-javamail. To use it, it’s enough to add mock-javamail.jar to the classpath, and that’s all.

This provider acts as an in-memory mail server: all sent messages are stored in mailboxes from which they can be extracted. A mailbox is associated with each unique email address and is created automatically when a message is sent to that address.

Access to a mailbox is granted via the Mailbox class. This class stores the received messages in an ordinary collection and allows you to add and remove messages. In addition, it has two static factory methods which return the mailbox for a given mail address (if there is no such mailbox, it will be created). Here’s an example of the call:

Mailbox mailbox = Mailbox.get("");

Another static method which, in my opinion, deserves attention is Mailbox.clearAll(). This method removes all the mailboxes. It is useful to call it at the beginning of a test so that data left over from previous tests does not interfere. An example of use:

Mailbox.clearAll();

Below I’ll give an example of using this provider:

<project xmlns="" xmlns:xsi="" xsi:schemaLocation="">
	<!-- … project coordinates and other dependencies … -->
	<dependencies>
		<dependency>
			<groupId>org.jvnet.mock-javamail</groupId>
			<artifactId>mock-javamail</artifactId>
			<!-- … version … -->
		</dependency>
	</dependencies>
</project>
package ru.nikisoft.article.mockjavamail;

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.fail;

import java.io.IOException;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

import javax.mail.Address;
import javax.mail.Message;
import javax.mail.Message.RecipientType;
import javax.mail.MessagingException;

import org.junit.Test;
import org.jvnet.mock_javamail.Mailbox;

public class MailSenderTest {

	@Test
	public void testSend() throws MessagingException, IOException {
		Mailbox mailbox = Mailbox.get("");
		MailSender mailSender = new MailSender("", "");
		mailSender.send("", "Subject: testSend", "Body: testSend");
		assertEquals(1, mailbox.getNewMessageCount());
		assertFromEquals(mailbox.get(0), "");
		assertToEquals(mailbox.get(0), "");
		assertSubjectEquals(mailbox.get(0), "Subject: testSend");
		assertBodyEquals(mailbox.get(0), "Body: testSend");
	}

	private void assertFromEquals(Message message, String... expectedFrom) throws MessagingException {
		Set<String> expectedFromSet = new HashSet<String>(Arrays.asList(expectedFrom));
		for (Address actualFrom : message.getFrom())
			expectedFromSet.remove(actualFrom.toString());
		if (expectedFromSet.size() > 0)
			fail("From should contain these addresses: " + expectedFromSet);
	}

	private void assertToEquals(Message message, String... expectedTo) throws MessagingException {
		Set<String> expectedToSet = new HashSet<String>(Arrays.asList(expectedTo));
		for (Address actualTo : message.getRecipients(RecipientType.TO))
			expectedToSet.remove(actualTo.toString());
		if (expectedToSet.size() > 0)
			fail("To should contain these addresses: " + expectedToSet);
	}

	private void assertSubjectEquals(Message message, String expectedSubject) throws MessagingException {
		assertEquals(expectedSubject, message.getSubject());
	}

	private void assertBodyEquals(Message message, String expectedBody) throws IOException, MessagingException {
		String contentType = message.getContentType();
		if (contentType.contains("text/plain"))
			assertEquals(expectedBody, (String) message.getContent());
		else
			fail("Unsupported content-type: " + contentType);
	}
}
package ru.nikisoft.article.mockjavamail;

import java.util.Properties;

import javax.mail.Message.RecipientType;
import javax.mail.MessagingException;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;

public class MailSender {

	private Properties properties;

	public MailSender(String host, String from) {
		properties = new Properties();
		properties.put("mail.transport.protocol", "smtp");
		properties.put("mail.smtp.host", host);
		properties.put("mail.from", from);
	}

	public void send(String to, String subject, String body) throws MessagingException {
		Session session = Session.getDefaultInstance(properties);
		MimeMessage message = new MimeMessage(session);
		message.setFrom(new InternetAddress(properties.getProperty("mail.from")));
		message.setRecipients(RecipientType.TO, to);
		message.setSubject(subject);
		message.setText(body);
		Transport.send(message);
	}
}

Oracle. Character set

Posted by eugene as Oracle

To get the current database character set, you can use the following query:

SELECT value$ FROM sys.props$ WHERE name = 'NLS_CHARACTERSET';

The character set value is returned in the value$ column of the row whose name is NLS_CHARACTERSET.


Arrays.asList() and remove()

Posted by eugene as Java

You’ve created a list with the help of:

List<String> test = Arrays.asList("a", "b", "c");

You did some work and then decided to remove an element from the collection, say the first one:

test.remove(0);
And then suddenly:

Exception in thread "main" java.lang.UnsupportedOperationException
    at java.util.AbstractList.remove(

Why did this happen?
Arrays.asList() returns a collection inherited from AbstractList and filled with the input data.
It returns an ArrayList, though not java.util.ArrayList but the one which is:

private static class ArrayList<E> extends AbstractList<E>

declared in the Arrays class.
It inherits from AbstractList and does not override add, remove, etc. This means that calling these methods throws the following:

public E remove(int index) {
    throw new UnsupportedOperationException();
}

In other words, an unmodifiable collection.

We could achieve the same by using the following:

List<String> unmodifiable = Collections.unmodifiableList(test);

which also wraps the collection into a non-modifiable one, returning the following when you try to amend it:

public boolean remove(Object o) {
    throw new UnsupportedOperationException();
}
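If you actually need a mutable list, a common workaround (not from the original post, just a sketch) is to copy the fixed-size list into a real java.util.ArrayList:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class MutableCopyExample {
    // Copies the fixed-size list returned by Arrays.asList
    // into a real, modifiable java.util.ArrayList.
    public static List<String> mutableCopy(String... items) {
        return new ArrayList<String>(Arrays.asList(items));
    }

    public static void main(String[] args) {
        List<String> test = mutableCopy("a", "b", "c");
        test.remove(0); // no UnsupportedOperationException this time
        System.out.println(test); // prints [b, c]
    }
}
```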

Simplifying work with JPA by means of Spring Data JPA

Posted by eugene as Spring Data


It’s been several years since JPA appeared. Working through the EntityManager is pleasant: you write a nice API while the details of database access stay hidden. At the same time, a common problem is duplicated implementation, where one and the same code gradually migrates from one DAO to another or, at best, is moved into an abstract base DAO. Spring Data solves this problem radically: with it you write only API-level interfaces, and the common implementation is created automatically using AOP.

History of Spring Data

Despite the fact that the project has only recently reached version 1.0, it has quite a rich history: it was previously developed as the Hades project.

Declaring DAO-Interface

So, first we need to declare a DAO interface with methods for working with the entity.

public interface UserRepository extends CrudRepository<User, Long> {
}

This code is sufficient for a normal DAO with the CRUD methods:

  • save – saves or updates the given entity
  • findOne – finds the entity by its primary key
  • findAll – returns the collection of all entities
  • count – returns the number of entities
  • delete – removes the entity
  • exists – checks whether there is an entity with the given primary key

The full list of methods declared in CrudRepository can be found in its javadoc. If we do not need all the methods, we can extend the Repository interface instead and declare in our interface only those CrudRepository methods that are needed.

Support for sorting and paging

Very often the required functionality is the ability to return only part of the entities from the database, for example, to implement paging in the UI. Spring Data is good here too and gives us the opportunity to add this functionality to our DAO. To do this, simply add the following method declaration to our DAO interface:

Page<User> findAll(Pageable pageable);

The Pageable interface encapsulates information about the number of the requested page, the page size, and the required sorting.
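A call might then look like this (a sketch; PageRequest is the standard Pageable implementation shipped with Spring Data, and the "login" sort field is an assumed example):

```java
// request the first page (index 0) of 20 users, sorted by login ascending
Page<User> page = userRepository.findAll(
        new PageRequest(0, 20, new Sort(Sort.Direction.ASC, "login")));
List<User> users = page.getContent();
```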

Looking for the data

As a rule, the conventional CRUD methods are not the end point, and additional methods are often required which return only those entities that satisfy given conditions. In my opinion, Spring Data greatly simplifies life in this area. For example, suppose we need methods to find a user by login name and by e-mail address:

User findByLogin(String login);
User findByEmail(String email);

It’s that simple. If we need more complicated search conditions, that is supported too. Spring Data supports the following operators:

• Between
• IsNotNull
• NotNull
• IsNull
• Null
• LessThan
• LessThanEqual
• GreaterThan
• GreaterThanEqual
• NotLike
• Like
• NotIn
• In
• Near
• Within
• Regex
• Exists
• IsTrue
• True
• IsFalse
• False
• Not
Such an impressive list opens plenty of space for imagination, so you can create arbitrarily complex queries. If you want the search results to contain more than one entity, declare the method as findAllByBlahBlah.
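For instance, a sketch of derived query methods combining some of these operators (the User fields age and email are assumed examples; the method names are parsed by Spring Data):

```java
public interface UserRepository extends CrudRepository<User, Long> {
    List<User> findAllByAgeBetween(int from, int to);
    List<User> findAllByLoginLike(String loginPattern);
    List<User> findAllByEmailIsNull();
}
```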

Support for Spring MVC

This part is based on the official documentation. Imagine that you are developing a web application using Spring MVC and need to load an entity from the database using a parameter of the HTTP request. It may look as follows:

public class UserController {

	private final UserRepository userRepository;

	public UserController(UserRepository userRepository) {
		this.userRepository = userRepository;
	}

	public String showUserForm(@PathVariable("id") Long id, Model model) {
		// Do null check for id
		User user = userRepository.findOne(id);
		// Do null check for user
		// Populate model
		return "user";
	}
}

First, you declare a dependency on the DAO, and second, you always call the findOne() method to load the entity. Fortunately, Spring allows us to convert string values from the HTTP request into any desired type using either a PropertyEditor or the ConversionService. If you are using Spring 3.0 or above, you need to add the following configuration:

<mvc:annotation-driven conversion-service="conversionService" />
<bean id="conversionService" class="">
	<property name="converters">
		<bean class="">
			<constructor-arg ref="conversionService" />
		</bean>
	</property>
</bean>

If you are using an older version of Spring, you need the following configuration:

<bean class="….web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter">
	<property name="webBindingInitializer">
		<bean class="…">
			<property name="propertyEditorRegistrars">
				<bean class="" />
			</property>
		</bean>
	</property>
</bean>

After these changes in the configuration, the controller can be rewritten as follows:

public class UserController {

	public String showUserForm(@PathVariable("id") User user, Model model) {
		// Do null check for user
		// Populate model
		return "userForm";
	}
}

Note how the code is simplified and how we got rid of the duplication.


There is currently not much project documentation, but it does exist:

• Spring Data JPA – Reference Documentation
• Spring Data Commons – Reference Documentation
• Sources on GitHub


Spring Data greatly simplifies life when using JPA. I recommend using it in your projects.


The attributes of the servlet request in Spring MVC

Posted by eugene as Spring Framework

In the article about GET/POST parameters in Spring MVC servlets, it was shown how to access the parameters passed by the POST method. There was also an example of a class for parsing the query string. Using this class is convenient if you don’t want to pull in third-party libraries.

Let me remind you that to get the list of POST parameters it’s necessary to read the query string from the input stream and convert it into a set of parameters. In simple web applications, where request processing happens in one or two handlers/controllers, you can use this method without any tricks. But what if a request in the web application goes through several processors (an interceptor, a controller, a view), each of which needs access to the POST parameters? Obviously, reading the string from the input stream and parsing it into parameters in every one of them would take too much time in such an application. How can we read and convert only once? The ability to set arbitrary attributes on the request (HttpServletRequest) helps here.

The setAttribute(String name, Object value) and getAttribute(String name) methods are used for setting and getting attributes, and getAttributeNames() lists the attribute names. Attribute values are available during the entire life cycle of the request from the moment the attribute is set, and the request itself passes through all the processors. Note that attribute names are recommended to follow the same style as Java package names, for example, com.my_company.my_project.QueryString. The java.*, javax.*, and com.sun.* name masks are reserved.

Thus, we can read the query string and convert it into a parameter set just once, at the beginning of the request life cycle, store it as a request attribute, and then use it wherever necessary.

In my opinion, a good way to implement this is to use so-called interceptors, which can process the request at an early stage, before it even reaches the controller. Creating an interceptor is beyond the scope of this article; you can read about it, for example, here.

Here is a brief example of how to implement all of the above. QueryString is the class from the previous article.


	public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
		// Read the parameter string from the request body
		String postQS = "";
		try {
			do {
				String tmp = request.getReader().readLine();
				if (tmp != null && tmp.length() != 0) {
					postQS += tmp;
				} else {
					break;
				}
			} while (true);
		} catch (IOException e) {
			// ignore: postQS stays empty
		}
		// Transform the string into a set of parameters and set it as an attribute
		QueryString qs = new QueryString(postQS);
		request.setAttribute("com.my_company.my_project.QueryString", qs);
		return true;
	}

Next, we simply extract this attribute from the request and work with it.

Request processing after the interceptor:

	// Retrieve the attribute value
	QueryString qs = (QueryString) request.getAttribute("com.my_company.my_project.QueryString");
	// Get the 'some' parameter value
	String someValue = qs.getParameter("some");

A note about including JSTL in JSP

Posted by eugene as Web Applications

In some cases, when compiling a JSP, you can get the following message:

According to TLD or attribute directive in tag file, attribute value does not accept any expressions.

What is the problem and how to resolve it?

This error occurs because the application is treated as using the old specification (JSP 1.2 and JSTL 1.0), which does not understand EL (Expression Language) expressions. To make your application use the correct version of the specification, you need to do the following two things:

1. Refer to the correct servlet specification version in the deployment descriptor:

<web-app version="2.5"

2. Specify the correct JSTL taglib reference on the JSP page.
In JSP 1.2:

<%@ taglib uri='' prefix='c'%>

In JSP 2.0:

<%@ taglib uri='' prefix='c'%>

Comparison of specifications

Servlet version   2.3   2.4   2.5
JSP version       1.2   2.0   2.0
JSTL version      1.0   1.1   1.1
Tomcat version    4.x   5.x   6.x

Seven security settings in web.xml

Posted by eugene as Web Applications

There are a lot of articles about how to configure authentication and authorization in the web.xml deployment descriptor. Instead of talking once again about how to configure roles, protect web resources, and set up different types of authentication, let’s look at some common mistakes in the security settings of the web.xml file.

This article is a translation of the article Seven Security (Mis)Configurations in Java web.xml Files.

1. The Error Pages Are Not Configured

By default, Java web applications send detailed error messages directly to the browser. These messages reveal the server version and stack traces, and in some cases pieces of Java code are printed in the stack traces. This information is a real boon for hackers gathering information about a potential victim.

Fortunately, it’s very easy to configure web.xml so that user-specified pages are returned instead of the standard pages with a stack trace. With the configuration below, a “friendly” page will be shown when error code 500 occurs. Similarly, you can configure the display of error pages for other HTTP status codes (e.g., 404).
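A minimal sketch of such a configuration (the /error.jsp location is an assumed example):

```xml
<error-page>
    <!-- show a friendly page instead of the container's 500 page -->
    <error-code>500</error-code>
    <location>/error.jsp</location>
</error-page>
```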


In addition, it’s necessary to suppress the pages with stack traces in web.xml with the help of the exception-type element; the user-specified page should be displayed instead. As can be seen from the example, we specify the Throwable exception type. Throwable is the base class for all exceptions and errors in Java, so this configuration guarantees that none of the stack traces will reach the user.
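A minimal sketch (the /error.jsp location is an assumed example):

```xml
<error-page>
    <!-- Throwable covers all Java exceptions and errors -->
    <exception-type>java.lang.Throwable</exception-type>
    <location>/error.jsp</location>
</error-page>
```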


However, your code can display the stack-trace if you wrote something like this:

	try {
		String s = null;
		s.length(); // NullPointerException
	} catch (Exception e) {
		// don't do this!
		e.printStackTrace(new PrintWriter(out));
	}

Do not forget to use “correct” logging in addition to the proper setup of the web.xml deployment descriptor.

2. Authentication and Authorization Bypass

The following configuration shows how to set up web-based access control so that everything in the secure directory is available only to users with the admin role.
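A sketch of such a configuration (the /secure/* pattern and the admin role follow the description above):

```xml
<security-constraint>
    <web-resource-collection>
        <web-resource-name>secure</web-resource-name>
        <url-pattern>/secure/*</url-pattern>
        <!-- listing methods here is exactly the mistake discussed below -->
        <http-method>GET</http-method>
        <http-method>POST</http-method>
    </web-resource-collection>
    <auth-constraint>
        <role-name>admin</role-name>
    </auth-constraint>
</security-constraint>
```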


It is usually assumed that with such a configuration only GET and POST requests are allowed, but that is not true. By default, any HTTP method is allowed, and the configuration above does not restrict access to the resources via methods other than GET and POST. It only states that if the resource is accessed via GET or POST, the user must be authenticated. Arshan Dabirsiaghi describes this problem in his article and gives examples of using the HTTP HEAD method, or even completely bogus TEST or JUNK methods not listed in the web.xml configuration, to bypass authentication and authorization.

Fortunately, the solution to this problem is very simple. Just remove all the http-method elements from web.xml, then the settings will be properly applied to all requests.

3. SSL Is Not Set Up

SSL should be used everywhere confidential data is transmitted. Of course, you can set up SSL on the web server and stop there, but you can also configure SSL at the web application level, once the appropriate SSL keys are installed on the application server. It’s very simple.
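A sketch that forces SSL for the whole application (the url-pattern and resource name are assumed examples):

```xml
<security-constraint>
    <web-resource-collection>
        <web-resource-name>everything</web-resource-name>
        <url-pattern>/*</url-pattern>
    </web-resource-collection>
    <user-data-constraint>
        <!-- CONFIDENTIAL requires the request to arrive over SSL/TLS -->
        <transport-guarantee>CONFIDENTIAL</transport-guarantee>
    </user-data-constraint>
</security-constraint>
```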


4. Not Using the Secure Flag

Many sites use SSL for authentication and then switch to non-SSL mode, so all further communication with the site goes over an insecure protocol. This makes session cookies (such as JSESSIONID) vulnerable to session hijacking. To avoid this, you can create cookies with the secure flag, which ensures that the browser will never send them over a non-SSL connection. The flag is set as follows:
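A sketch using the Servlet 3.0 session-config element (one possible way to set the flag; older containers configure it elsewhere):

```xml
<session-config>
    <cookie-config>
        <!-- never send the session cookie over plain HTTP -->
        <secure>true</secure>
    </cookie-config>
</session-config>
```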


5. Not Using the HttpOnly Flag

Cookies can be created with the HttpOnly flag, which ensures that they cannot be accessed by client-side scripts. This helps protect against some of the most common XSS attacks. The flag is set as follows:
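A sketch using the Servlet 3.0 session-config element:

```xml
<session-config>
    <cookie-config>
        <!-- hide the session cookie from client-side scripts -->
        <http-only>true</http-only>
    </cookie-config>
</session-config>
```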


In Tomcat 5.5 and 6.x you could already enable HttpOnly for a particular application via the useHttpOnly attribute, which is located on the Context element in the server.xml file. This attribute is disabled by default in Tomcat 5.5 and 6.x, but in Tomcat 7 useHttpOnly defaults to true. Thus, in Tomcat 7, whatever flag you set in web.xml, your JSESSIONID will still be HttpOnly unless you change this behavior in the server.xml file.

6. Using of URL Parameters to Track the Session

The tracking-mode element of the Servlet 3.0 specification allows you to specify whether JSESSIONID should be stored in a cookie or passed as a URL parameter. If the session ID is stored in a URL parameter, it can end up saved in the browser history, on proxy servers, in logs, etc. This makes the application more vulnerable to session hijacking. Instead, the Servlet 3.0 specification lets us require that JSESSIONID be stored in a cookie. This is done using the following configuration:
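A sketch for a Servlet 3.0 web.xml:

```xml
<session-config>
    <tracking-mode>COOKIE</tracking-mode>
</session-config>
```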


7. The Timeout Is Not Set for the Session

Users like long-lived sessions because they are convenient. Hackers like long-lived sessions because they give more time to conduct attacks such as session hijacking or cross-site request forgery. The trade-off between convenience and security is up to you, but once you have decided on the session lifetime, you can configure it as follows:
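For example (the value is in minutes):

```xml
<session-config>
    <session-timeout>15</session-timeout>
</session-config>
```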


In this example, the session will live for 15 minutes of user inactivity. If session-timeout is not configured, the Servlet specification requires that a default chosen by the container vendor be used (30 minutes for Tomcat). If you specify a negative number or 0 as the session lifetime, the session will live “forever.” Such an approach is not recommended.

The idle timeout can also be configured using the setMaxInactiveInterval method of the HttpSession class. In contrast to the session-timeout element, this method takes the time value in seconds.
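A sketch of the programmatic equivalent (assuming a session obtained from the current request):

```java
HttpSession session = request.getSession();
// setMaxInactiveInterval takes seconds: 15 minutes is 900
session.setMaxInactiveInterval(15 * 60);
```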


Building and deploying secure applications requires looking at the system from different sides. The runtime settings are as important as the settings of the web application itself. If you want to share your own methods of securing Java web applications, please leave comments.


Overview of new features of Hibernate 3.5 and JPA 2.0

Posted by eugene as Hibernate

The Java Persistence API (JPA) 2.0, also known as JSR-317, was released recently (December 10, 2009), and until last week the only ORM which fully implemented the specification was EclipseLink. This is a great framework which, judging by reviews online, most likely works faster than Hibernate. However, last week Hibernate 3.5 appeared, and it fully implements the JPA 2.0 specification. In this article I will briefly tell you about the new possibilities of JPA 2.0 and Hibernate 3.5.

Here are a few major innovations:

  • orphanRemoval option;
  • ElementCollection annotation;
  • CollectionTable annotation.

Connecting Hibernate 3.5 to the project

We will connect Hibernate to the project, as always, with the help of Maven 2. To make it work, we need to add the JBoss repository:

	<name>JBoss Maven Repository</name>

Now you need to add all the necessary dependencies:


Here I also added the Apache Derby JDBC driver, since I use it for testing.
As a result, you should get a dependency tree like this:

I use Hibernate 3.5 in a JSF project, so you can also see the JSF dependencies in the picture. You can simply ignore them.

The Hibernate 3.5 documentation says that the Hibernate core is supplied together with Hibernate Annotations and the Entity Manager; however, after downloading the dependencies with Maven, I noticed that the classes responsible for annotations and the Entity Manager were absent from the core. After some dancing with a tambourine, it turned out that when pulling the dependencies from the JBoss Maven repository these packages really are missing from the core, but if you download the zip file from the JBoss site (more precisely, from SourceForge), the annotations and EJB packages are present in the core. That’s why I added the Hibernate Annotations and Hibernate Entity Manager dependencies to the POM.

As it turned out, this is because these projects now live in one SVN repository and share the same release cycle, hence the versions of all these libraries are identical. That’s why I moved the Hibernate library version into a property in pom.xml.

Standard Properties

Earlier JPA specifications (before 2.0) did not define any standardized names for the properties in persistence.xml, so each vendor of a JPA implementation had to invent its own property names. JPA 2.0 defines a small set of standard properties. Now, when you configure persistence.xml, you can use both the implementation provider’s property names (in this case, Hibernate’s) and the standardized ones.

They look like this:

1. javax.persistence.jdbc.driver (In Hibernate: hibernate.connection.driver_class)
2. javax.persistence.jdbc.user (In Hibernate: hibernate.connection.username)
3. javax.persistence.jdbc.password (In Hibernate: hibernate.connection.password)
4. javax.persistence.jdbc.url (In Hibernate: hibernate.connection.url)

I think there is nothing to explain here, as the property names speak for themselves.

Here is my persistence.xml:

<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="" xmlns:xsi="" xsi:schemaLocation="" version="2.0">
	<persistence-unit name="topcodeCorePersistenceUnit" transaction-type="RESOURCE_LOCAL">
		<properties>
			<property name="javax.persistence.jdbc.driver" value="org.apache.derby.jdbc.ClientDriver" />
			<property name="javax.persistence.jdbc.url" value="jdbc:derby://localhost:1527/scriptkiddie_db" />
			<property name="hibernate.dialect" value="org.hibernate.dialect.DerbyDialect" />
			<property name="javax.persistence.jdbc.user" value="root" />
			<property name="javax.persistence.jdbc.password" value="root" />
			<property name="hibernate.archive.autodetection" value="class" />
			<property name="hibernate.show_sql" value="true" />
			<property name="hibernate.format_sql" value="true" />
			<property name="hibernate.hbm2ddl.auto" value="create-drop" />
		</properties>
	</persistence-unit>
</persistence>

New mapping features and annotations

There are several new annotations in JPA 2.0.

Removing orphans using the orphanRemoval attribute

An orphan is an object whose parent object has been deleted. Previous versions of JPA had no equivalent of Hibernate’s DELETE_ORPHAN cascade type. In JPA 2.0 this behavior (removal of orphans) can be enabled using the orphanRemoval attribute of the OneToOne and OneToMany annotations. The specification defines two different scenarios of orphanRemoval behavior:

  • If the target entity is removed from the owning entity’s collection (one-to-many) or its reference is set to null, the target entity will be deleted from the database during flush.
  • If the parent entity is removed, the target entity will also be deleted. In other words, with orphanRemoval = true there is no point in also setting cascade = REMOVE, as cascade deletion already follows from the orphanRemoval = true rule.

Here is the example of annotation:

	@OneToMany(mappedBy = "customer", cascade = CascadeType.PERSIST, fetch = FetchType.LAZY, orphanRemoval = true)
	@BatchSize(size = 100)
	private Set<Order> orders = new HashSet<Order>();

Mapping a collection of elements using the ElementCollection annotation

Another innovation in JPA 2.0 is the equivalent of Hibernate’s CollectionOfElements annotation: the @ElementCollection annotation. With the help of this annotation you can map a collection of simple types or embeddable objects. Here is an example of a simple mapping:

@Entity
public class Customer {
	@ElementCollection
	private Collection<String> hobbies = new HashSet<String>();
}

Here a collection of strings is mapped as the hobbies attribute of the Customer entity. Since we did not specify any mapping parameters, the following will happen:

  • The table name will be “customer_hobbies”.
  • The table will consist of two columns: “customer_id”, the ID of the customer who has the hobby, and “hobbies”, the hobby value. A row will be created for each element of the collection.

By default, the column names for embedded data are generated from the attribute names of the embeddable class, or from the name of the collection (in our case, hobbies) for simple types. This can be changed by annotating the property with @AttributeOverride for embeddable class types or with @Column for simple types:

@Entity
public class Customer {
	@ElementCollection
	@Column(name = "HOBBY_DATA")
	private Collection<String> hobbies = new HashSet<String>();
}

To set the table name, you can use the new @CollectionTable annotation:

@Entity
public class Customer {
	@ElementCollection
	@CollectionTable(name = "HOBBIES", joinColumns = @JoinColumn(name = "CUSTID"))
	private Collection<String> hobbies = new HashSet<String>();
}

And the same for embedded types:

	@ElementCollection
	@CollectionTable(name = "CUST_ADDITIONAL_ADDRS")
	private List<Address> additionalAddresses = new ArrayList<Address>();

That’s all for today.


How to befriend Hibernate with Spring and to provide the transaction management via @nnotations

Posted by eugene as Hibernate, Spring Framework


A large and complex task has been completed at work, and I’d like to take a break before tackling the next one and share something with you, dear readers. Today’s post will be from the series “one for the kids.” Let’s talk about the Spring and Hibernate combination, the DAO layer, and declarative transaction management.

Spring Framework is quite a complicated and interesting thing. In particular, it includes the org.springframework.orm.hibernate3 package, which provides the integration between Spring Framework and Hibernate ORM.

Let’s create a simple console application (so as not to bother with defining servlets and other overhead) which writes something into the database.

Accordingly, first of all we’ll define the entity we will work with. Let’s call it casually: MyEntity.

The code essentially looks like this:

package com.scriptkiddieblog.example.entity;

import java.io.Serializable;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import org.hibernate.annotations.GenericGenerator;

@Entity
public class MyEntity implements Serializable {
	private static final long serialVersionUID = 382157955767771714L;

	@Id
	@Column(name = "uuid")
	@GeneratedValue(generator = "system-uuid")
	@GenericGenerator(name = "system-uuid", strategy = "uuid")
	private String id;

	@Column(name = "name")
	private String name;

	public MyEntity() {
	}

	public MyEntity(String id, String name) {
		this.id = id;
		this.name = name;
	}

	public String getId() {
		return id;
	}

	public void setId(String id) {
		this.id = id;
	}

	public String getName() {
		return name;
	}

	public void setName(String name) {
		this.name = name;
	}
}

I’ll remind you that the @Entity, @Id, etc. annotations belong to JPA and replace Hibernate mapping files.

We won’t work with the entity directly but through a DAO. Using DAOs is one of the established patterns of working with Spring Framework. Having defined a bean that implements the DAO, you can easily inject it into the beans that implement the business logic of the application, and thus completely separate the business logic from data manipulation. Our DAO is defined by the following interface:

package com.scriptkiddieblog.example.dao;

import com.scriptkiddieblog.example.entity.MyEntity;

public interface IEntityDao {

	public void save(MyEntity entity);
}

For the example we'll define a single method, save, which will persist the entity into the database. The DAO implementation is rather primitive:

package com.scriptkiddieblog.example.dao;

import org.springframework.orm.hibernate3.support.HibernateDaoSupport;

import com.scriptkiddieblog.example.entity.MyEntity;

public class EntityDao extends HibernateDaoSupport implements IEntityDao {

	public void save(MyEntity entity) {
		getHibernateTemplate().save(entity);
	}
}

We inherit from the HibernateDaoSupport class, which encapsulates the work with the Hibernate Session and SessionFactory and gives us a simple API for interacting with Hibernate. I recommend an article that explains how to properly organize the DAO layer in your application.

Now let's move on to the classes which will implement the business logic. In our case the business logic will be simple – we'll just save the entity.

The IMyEntityService interface:

package com.scriptkiddieblog.example.service;

import com.scriptkiddieblog.example.entity.MyEntity;

public interface IMyEntityService {

	public void saveEntity(MyEntity entity);
}
Its implementation, the MyEntityService class:
package com.scriptkiddieblog.example.service;

import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

import com.scriptkiddieblog.example.dao.IEntityDao;
import com.scriptkiddieblog.example.entity.MyEntity;

@Transactional(readOnly = true)
public class MyEntityService implements IMyEntityService {

	private IEntityDao dao;

	public void setDao(IEntityDao dao) {
		this.dao = dao;
	}

	@Transactional(readOnly = false, propagation = Propagation.REQUIRED)
	public void saveEntity(MyEntity entity) {
		dao.save(entity);
	}
}

This class is the most interesting part of our program. We need to wrap the saveEntity method in a transaction. The @Transactional annotation serves this purpose; it can be applied to a method or to a class. The behavior of the transaction is set by the parameters of this annotation. The main ones are readOnly, which indicates whether the transaction may modify the state of the database, and propagation, which sets the propagation strategy for the transaction (do not create a transaction, create a new one, join the existing one, etc.). In addition to these parameters you can specify a timeout, the isolation level, and the exception classes for which a rollback is or is not required.

More information about the parameters and their values can be found in the official SpringFramework guide.

Now we have to deal with the configuration of the Spring context, which will be stored in the applicationContext.xml file. We'll consider the file in small portions. First of all, let's create the skeleton of the file:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xmlns:aop="http://www.springframework.org/schema/aop"
	xmlns:tx="http://www.springframework.org/schema/tx"
	xsi:schemaLocation="
		http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
		http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop.xsd
		http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx.xsd">
	<!-- bean definitions go here -->
</beans>

Pay attention! It is important to declare all the namespaces and the paths to the schemas correctly, otherwise the configuration simply won't be parsed.

So first, let's add the necessary configuration files into the context – in this case, the file where we will store the settings for connecting to the DBMS. SpringFramework provides the org.springframework.beans.factory.config.PropertyPlaceholderConfigurer class for working with configuration files. The layout will be as follows:

	<bean id="propertyConfigurer" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
		<property name="location" value="" />
	</bean>

Then you should define the data source – the bridge between the DBMS and Hibernate. I prefer to use the wonderful Apache Commons DBCP library for this.

	<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
		<property name="driverClassName" value="${jdbc.driverClassName}" />
		<property name="url" value="${jdbc.url}" />
		<property name="username" value="${jdbc.username}" />
		<property name="password" value="${jdbc.password}" />
	</bean>
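The ${jdbc.*} placeholders are resolved from the properties file loaded by the PropertyPlaceholderConfigurer. A minimal sketch of such a file (the file name and all values are illustrative; these happen to be for an in-memory HSQLDB):

```properties
jdbc.driverClassName=org.hsqldb.jdbcDriver
jdbc.url=jdbc:hsqldb:mem:example
jdbc.username=sa
jdbc.password=
hibernate.dialect=org.hibernate.dialect.HSQLDialect
```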

Having defined the data source, it's time to describe the factory that will build Hibernate sessions.

There is the org.springframework.orm.hibernate3.LocalSessionFactoryBean class for this purpose. We'll describe this bean as follows:

	<bean id="sessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
		<property name="dataSource" ref="dataSource" />
		<property name="configLocation" value="classpath:/hibernate.cfg.xml" />
		<property name="configurationClass" value="org.hibernate.cfg.AnnotationConfiguration" />
		<property name="hibernateProperties">
			<props>
				<prop key="hibernate.dialect">${hibernate.dialect}</prop>
			</props>
		</property>
	</bean>

All Hibernate-specific settings will be stored in the hibernate.cfg.xml file, and the dialect in the properties file. Note that since we define the mapping with annotations, the org.hibernate.cfg.AnnotationConfiguration class should be used to process such a configuration.

We are connected to the database and the Hibernate session factory is created. It's time to tell the application that it needs to manage transactions declaratively. What does "manage transactions declaratively" mean? It means that we do not need to write the code that creates/commits/rolls back transactions and place it wherever needed. We only need to pass the HibernateTransactionManager class some rules for creating/completing transactions, and it will take care of the rest.

It's clear that all this works through AOP. A rule is a correspondence between a method and the type of transaction being created. It says that when we enter the method (just before the first statement of the method executes) a transaction must be created, and before leaving the method (after its last statement executes) the transaction must be committed. Additionally, you can describe which types of exceptions should cause the transaction to be rolled back.
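To make the mechanics less magical, here is a rough plain-Java sketch (not Spring's actual code; all names here are made up) of what the proxy generated around a @Transactional bean effectively does:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Proxy;

public class TxProxyDemo {

	// Stand-in for a real transaction manager (invented for this sketch).
	interface TxManager {
		void begin();
		void commit();
		void rollback();
	}

	// Wraps every call on the target in begin/commit and rolls back on an
	// exception - conceptually what the transactional proxy does around your method.
	@SuppressWarnings("unchecked")
	static <T> T transactional(Class<T> iface, T target, TxManager tx) {
		InvocationHandler handler = (proxy, method, args) -> {
			tx.begin();
			try {
				Object result = method.invoke(target, args);
				tx.commit();
				return result;
			} catch (InvocationTargetException e) {
				tx.rollback();
				throw e.getCause();
			}
		};
		return (T) Proxy.newProxyInstance(iface.getClassLoader(), new Class<?>[] { iface }, handler);
	}

	// A sample business interface, standing in for IMyEntityService.
	interface Service {
		void save(String data);
	}

	public static void main(String[] args) {
		StringBuilder log = new StringBuilder();
		TxManager tx = new TxManager() {
			public void begin() { log.append("begin;"); }
			public void commit() { log.append("commit;"); }
			public void rollback() { log.append("rollback;"); }
		};
		Service service = transactional(Service.class, data -> log.append("save;"), tx);
		service.save("entity");
		System.out.println(log); // begin;save;commit;
	}
}
```

The real interceptor also consults the annotation's parameters (propagation, readOnly, rollback rules) before deciding whether to open a new transaction or join an existing one, but the begin/commit/rollback bracketing above is the core idea.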

There are two basic ways of defining the rules: using the Spring AOP notation in the Spring XML configs, and using annotations in the Java code. Each approach has its advantages and disadvantages, but that's a topic for another article. We'll consider how to manage transactions using annotations.

For transaction management Spring provides the tx namespace, which defines, in particular, the tx:annotation-driven directive that enables the mechanism of transaction management via annotations. You can read about the parameters of this directive in section 9.5.6 of the documentation.

Let’s define the transaction manager as follows:

	<tx:annotation-driven transaction-manager="txManager" />

	<bean id="txManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager">
		<property name="sessionFactory" ref="sessionFactory" />
	</bean>

What remains to be determined are beans for the DAO layer and the business logic layer:

	<bean id="entityDAO" class="com.scriptkiddieblog.example.dao.EntityDao">
		<property name="sessionFactory" ref="sessionFactory" />
	</bean>

	<bean id="entityService" class="com.scriptkiddieblog.example.service.MyEntityService">
		<property name="dao" ref="entityDAO" />
	</bean>

Finally, I'll give an example of the Main class code which runs the application:

package com.scriptkiddieblog.example;

import org.springframework.context.support.ClassPathXmlApplicationContext;

import com.scriptkiddieblog.example.entity.MyEntity;
import com.scriptkiddieblog.example.service.IMyEntityService;

public class Main {

	public static void main(String[] args) {
		ClassPathXmlApplicationContext ctx = new ClassPathXmlApplicationContext("applicationContext.xml");
		IMyEntityService service = (IMyEntityService) ctx.getBean("entityService");
		MyEntity entity = new MyEntity();
		entity.setName("test entity");
		service.saveEntity(entity);
	}
}

The code is not complicated. First we load the application context, then we get the required bean from the context (in this case "entityService"). Then we use the bean for its intended purpose: by means of it, we save the entity to the database.

Actually, I think that configuration via annotations is simpler than XML, and it's even easier to read. In general, the wiring of the beans could also be configured with annotations – Spring has allowed this for quite a long time. On the topic of configuring Spring beans through annotations, please read the articles on habrahabr: this and this.

Now you know how to connect a DBMS and Hibernate to SpringFramework, to provide declarative transaction management, to describe the DAO layer and to connect the DAO to the business logic. In fact, we have created the skeleton of an application and can now extend its functionality indefinitely.

UPD 23.02.2011: Source code of the example (Maven-Project, GitHub).


About the construction of service-oriented architecture at enterprise

Posted by eugene as Software Design


From my point of view, one of the main tasks, if not the main one, when constructing a service-oriented architecture at an enterprise is to provide for the continuous development of the integrated system and its adaptation to the changing business of the customer company. Ideally, the development of the system should be carried out by the customer's own IT department, with minimal involvement of the specialists who built the system. That is, it should not be the case that, in order to connect a new business system, the customer enterprise has to invite consultants again, who will start by remodelling the overall integration of the existing systems and only then bolt on something new. Or will fail to…

In order to solve this task, it's necessary to ensure the following conditions:

  • The transition from point-to-point integration to integration through an enterprise service bus (ESB).
  • Making the integration independent of message format transformations.
  • Documentation in sufficient quantity and quality.
  • End-to-end monitoring of message transfer and processing.

The transition to integration through an enterprise service bus (ESB)

Point-to-point systems integration has one significant drawback ― low scalability. The number of possible connections between N systems is given by the formula N(N-1)/2. If you integrate three systems, the number of connections between them is also three; in this case point-to-point integration is justified. But with 10 systems the number of connections is 45, and this is already a fairly large value.
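The arithmetic can be sanity-checked with a throwaway helper (purely an illustration; the class and method names are mine):

```java
public class ConnectionCount {

	// Maximum number of point-to-point links between n fully interconnected systems.
	static int connections(int n) {
		return n * (n - 1) / 2;
	}

	public static void main(String[] args) {
		System.out.println(connections(3));  // 3
		System.out.println(connections(10)); // 45
	}
}
```

The quadratic growth is exactly why an ESB, with its N adapters instead of N(N-1)/2 links, scales better as systems are added.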

The problem is not only that there are many connections and they can get confusing, but that each of them must be supported both at the level of data transfer (one system may use HTTP as its protocol, another RMI, a third JMS, and the N-th Oracle AQ, while the source system may initially be unable to do anything except put a message into a database table) and at the level of data format (one receiver expects a 1C exchange file as XML, another an IDOC, a third some custom XML, and the N-th a SOAP message). With point-to-point integration the business applications themselves have to support all these protocols and data formats, which is usually expensive and sometimes impossible in principle.

However, when there are few systems, say three, the choice is not so clear-cut. There are three now, but most likely it's planned to connect new systems, and not just one. It is worth finding out about such customer plans in advance.

Making the integration independent of message format transformations

The transformation of messages from the format of the source system to the format of the receiving system can be encapsulated in the ESB. Actually, this is one of the main functions of an ESB. The question is how to do it correctly.

Often in the literature devoted to ESBs you can find the term "canonical format". Usually this term means a maximally generic message format that allows messages to be sent from any system to any other, regardless of the format in which the message entered the bus and of the message format of the recipient system. Developing both the canonical format and the transformations from/to it is quite a time-consuming task, so the desire to save money on this work is quite understandable. When can this be done?

First, if the main receivers of messages are systems from one manufacturer with a "native" exchange format (such as SAP IDOC), then you can use that "native" format as the canonical one. The transformation from this format will be carried out only in the adapters of the source systems.

Second, if the main message sources are systems from one manufacturer with a "native" exchange format, then likewise you can use that format as the canonical one. The transformation of the format will be carried out only in the adapter of the recipient system.

Third, if some systems are weakly connected or not connected at all to the others, and connected only with each other, then maybe it makes no sense to do a double transformation (first to the canonical format, and then to the format of the receiver system), especially if they are systems from one manufacturer with a single exchange format. However, keep in mind that if tomorrow you need to provide interaction of these systems with any other, you'll have to develop a transformation, in some cases a very non-trivial one.

In general, the question of whether to use a canonical format should be decided by what is easier: to transform to/from a certain intermediate format, or directly from the original messages to the required ones. The decision should be based on an analysis of the existing system landscape and on the project road map. Knowledge of the customer's plans for the further development of the system will also be useful. If a replacement of the source systems is expected in the future, you will either have to put the message transformations into the systems' adapters or introduce a canonical format after all.

Documentation in sufficient quantity and quality

Unfortunately, the amount of entropy in the universe keeps increasing. Over time, especially in a large integration with many dozens of endpoints, the system will become very difficult to understand. The rising complexity will lead to developers, whether creating new applications or supporting existing ones, building their own integration points and processes instead of using and adapting the existing ones. This effectively negates the role of SOA, and after some time SOA will have to be implemented anew against an updated view of the landscape.

To avoid such erosion of the implemented solutions, in addition to technical means (the use of various registries and repositories), a number of organizational measures are needed, such as compulsory, high-quality documentation of the landscape. The documentation should contain complete information on all types of services (utility services, application services, business services and service processes):

  • The name of the service;
  • The integration style (batch, real-time);
  • The invocation style – how the service/application is called (by a trigger, using Change Data Capture, etc.);
  • The message exchange pattern (Request-Reply, One-Way, Pub-Sub);
  • The business domain of the communication;
  • The message format (XML, CSV, SOAP, IDOC, etc.);
  • The exchange protocol (HTTP, MQ, RMI, etc.);
  • The service contract (WSDL, WADL, etc.);
  • Security (authentication, authorization, encryption, non-repudiation);
  • The level at which security is implemented (at the level of the communication channel, at the level of the message);
  • The security implementation (WS-Security, SSL, etc.).

The schemes of the business processes that involve the services should be documented separately: the order of the service calls and the rules for interpreting the information received as a result of their work.

But don't harbor illusions: the growth of entropy also applies to the documentation. For the service-oriented architecture to evolve harmoniously, it is necessary to apply a range of measures, both organizational and technical.

End-to-end monitoring of message transfer and processing

Another advantage of implementing a service-oriented architecture, for example in the form of an ESB, is the possibility of end-to-end monitoring of the transmission and processing of messages. End-to-end monitoring makes it possible to solve the following tasks:

  • Ensuring the SLA;
  • Collection and analysis of statistics on the use of services;
  • Control over the passage of each message through the bus;
  • Control over message processing in the business systems.

Ensuring the SLA

If problems arise in the integration solution, or on any other breach of the SLA, such as a sharp drop in the throughput of the integration bus, the end-to-end monitoring subsystem must immediately notify the system administrators. The administrators should be able to obtain information about the performance of each component of the integration solution and react appropriately: fix a broken connection to a business system, switch to a backup link, solve performance problems with the database and so on, depending on the situation.

Collection and analysis of statistics on the use of services

Maintaining statistics on the use of services is not only about monitoring each service call in order to detect unauthorized access or unexpected uses of the business systems by the company's business processes. Statistics should also be maintained in order to optimize the load on all of the enterprise's information systems. For example, if monitoring reveals that some service is not used, you should find out the root cause: is it unavailable for some reason, or was it gradually excluded from all the company's business processes without anyone noticing? It is possible that such a service should be taken out of operation, with the corresponding organizational and personnel decisions. Another option is to optimize the information infrastructure of the enterprise: for example, if the load on service A is much lower than estimated, this service can be moved to less expensive equipment, freeing the current hardware for other purposes.

Control over the passage of each message through the bus

Above all, this monitoring is required to find lost messages, as well as to answer the question: why do business systems A and B contain inconsistent data? The transfer of a message through the integration bus proceeds in several stages, such as registration, routing and delivery. If the bus is being adapted due to changing business needs, this adaptation typically happens in a "needed it done yesterday" mode. Working in such an environment, it is possible to forget something or do something wrong – in particular, to set up the routing incorrectly, resulting in messages getting stuck on the bus. To identify such problems and correct them in time, there must be a tool for monitoring the passage of each message through the bus. A report on the passage of messages must contain at least the following information:

  • The message ID in the source system;
  • The message type;
  • The source system;
  • The receiver system;
  • The message ID in the receiver system (if any);
  • The message status (which stage it is at, whether it has been transmitted to the receiver system, whether it has been processed);
  • Information about the error (if an error occurred during transmission or processing).
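As a sketch only (the class and field names are mine, mirroring the list above), such a report entry could be modeled as:

```java
public class MessageTraceRecord {

	// Illustrative set of stages a message passes through on the bus.
	public enum Status { REGISTERED, ROUTED, DELIVERED, PROCESSED, FAILED }

	private final String sourceMessageId;   // the message ID in the source system
	private final String messageType;
	private final String sourceSystem;
	private final String receiverSystem;
	private final String receiverMessageId; // null until the receiver assigns one
	private final Status status;            // which stage the message is at
	private final String errorInfo;         // null unless transfer or processing failed

	public MessageTraceRecord(String sourceMessageId, String messageType,
			String sourceSystem, String receiverSystem,
			String receiverMessageId, Status status, String errorInfo) {
		this.sourceMessageId = sourceMessageId;
		this.messageType = messageType;
		this.sourceSystem = sourceSystem;
		this.receiverSystem = receiverSystem;
		this.receiverMessageId = receiverMessageId;
		this.status = status;
		this.errorInfo = errorInfo;
	}

	public String getSourceMessageId() { return sourceMessageId; }
	public String getMessageType() { return messageType; }
	public String getSourceSystem() { return sourceSystem; }
	public String getReceiverSystem() { return receiverSystem; }
	public String getReceiverMessageId() { return receiverMessageId; }
	public Status getStatus() { return status; }
	public String getErrorInfo() { return errorInfo; }
}
```

A monitoring console would then query such records by source system, status or time window to find the stuck or failed messages discussed above.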

Control over the message processing in the business systems

Correct delivery of a message into a business system is not yet the end of the business process. The message must still be processed in the system: parsed, the corresponding business objects created from its data or other actions performed, and the result returned (in the case of a request-response exchange). There can be situations when a message cannot be processed correctly for various reasons, such as an incorrect format, the absence of the required data in the destination system (in fact, a violation of referential integrity), or errors in the business logic of the system. Ensuring proper message handling may then require readjusting the exchange. In any case, it will be impossible to determine the need for such readjustment if the business system silently "swallows" the cases of incorrect message handling. To prevent this, you need to configure the business systems to notify the bus about the results of processing incoming messages.

There are two possible variants depending on the type of exchange:

  • Synchronous exchange. Since the exchange is synchronous, i.e. the message is processed immediately after delivery to the receiving system and a result is formed, this result can carry the notification about the outcome of processing. The bus, having received the result, must record the corresponding message-processing status.
  • Asynchronous exchange. In this case it is possible that no processing result is formed at all (the so-called one-way exchange). Nevertheless, as mentioned above, the bus must still be notified about the results of message processing. You must provide a separate communication channel between the business system and the bus through which the notifications will pass: whether the message was handled successfully or not. If an error occurs while processing the message, the text and the code of this error should be transmitted.

In the comments, I invite you to share your views on application integration and implementing SOA/ESB, or to ask any questions. Thank you for your attention.

P.S. I reserve the right to make changes to this article in order to improve the relevance and consistency of the content.