Archive for September, 2012

30
Sep

An example of using Hibernate as a JPA provider in conjunction with the Spring Framework

Posted by eugene as Hibernate, Spring Framework

It’s no secret that Hibernate is the de facto standard in the ORM world. At the same time, it’s equally hard to dispute that JPA is the de jure production standard. Moreover, starting with version 3.6.* Hibernate fully supports the JPA standard (in its various versions), which means you can develop against the official standard without giving up your favourite tool. Nevertheless, there are few compact, easy-to-digest examples on the net that illustrate this possibility, especially when it comes to integration with the Spring Framework. This article is devoted to exactly that: Spring + Hibernate as a JPA provider.

Introduction

As I wrote above, there are not many such examples; most often you’ll find examples of Spring + Hibernate, or Spring + JPA, or Hibernate as a JPA provider on its own, and so on.

The simplest and most accessible description of using Hibernate’s JPA support is right in the distribution: {HIBERNATE_HOME}/documentation/quickstart/en-US/html/hibernate-gsg-tutorial-jpa.html (the example itself lives here: {HIBERNATE_HOME}/project/documentation/src/main/docbook/quickstart/tutorials/entitymanager/). My article takes it as its basis, so it’s highly desirable that you have already worked through that example by the time you read these lines.

[FYI:] SpringSource has a separate project that supplements the capabilities of basic Spring ORM and focuses on JPA support – Spring Data JPA: http://static.springsource.org/spring-data/data-jpa/docs/1.1.0.RC1/reference/html/

We’ll concentrate on the Spring Framework + JPA combination, with Hibernate in the role of the JPA implementation. Consequently, we’ll need the latest (at the time of writing) versions: Spring Framework 3.1 and Hibernate 4.0.1.

Background

Download the example source code (http://it.vaclav.kiev.ua/wp-content/uploads/2012/02/spring-jpa-hib-example.zip), unpack the archive and import the project into Eclipse (or any other IDE of your choice). Then open the {spring-jpa-hib-example}/lib/readme.liblist.txt file and, having studied the list of required libraries, add them to {spring-jpa-hib-example}/lib/. Apart from a few of them, you’ll find these libraries in the Spring and Hibernate distributions respectively.

As for the additional libraries needed to run the example: for ease of demonstration we’ll use a lightweight in-memory database, namely HSQLDB. Note that while the folks at JBoss favour H2, their colleagues at SpringSource prefer HSQLDB for the same purposes. In general these databases are interchangeable in most cases, but we’ll stick with HSQLDB, since it’s much easier to configure in the context of the Spring Framework.

So, the infrastructure questions are settled: the sources are in place, all the libraries are downloaded and available, and we can start studying the example.

Studying the spring-jpa-hib-example project

As I’ve already written above, the example this article is based on ships with Hibernate 4.0.1 and uses the org.hibernate.tutorial.em.Event class as the @Entity. I hope you’ve already become acquainted with it; if not, now is the time! We won’t dwell on it in detail – it’s simply a POJO annotated accordingly (a rough sketch is shown below for reference).
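
For reference, the tutorial’s Event entity looks roughly like this. This is a sketch from memory of the Hibernate quickstart class; the exact annotations, generator strategy and column names in your distribution may differ:

package org.hibernate.tutorial.em;

import java.util.Date;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Table;
import javax.persistence.Temporal;
import javax.persistence.TemporalType;

@Entity
@Table(name = "EVENTS")
public class Event {

    @Id
    @GeneratedValue
    private Long id;

    private String title;

    @Temporal(TemporalType.TIMESTAMP)
    private Date date;

    protected Event() {
        // a no-arg constructor is required by JPA
    }

    public Event(String title, Date date) {
        this.title = title;
        this.date = date;
    }

    public Long getId() { return id; }
    public String getTitle() { return title; }
    public Date getDate() { return date; }
}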

Let’s review my slightly modified persistence.xml:

<persistence xmlns="http://java.sun.com/xml/ns/persistence"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd"
             version="2.0">
 
    <persistence-unit name="org.hibernate.tutorial.jpa">
        <description>
            Persistence unit for the Spring JPA tutorial, with Hibernate used as the JPA provider
        </description>
 
        <!-- <class>org.hibernate.tutorial.em.Event</class> -->
 
        <properties>
            <property name="hibernate.show_sql" value="true" />
            <property name="hibernate.hbm2ddl.auto" value="create" />
        </properties>
 
    </persistence-unit>
 
</persistence>

As you can see, this is a standard persistence.xml per JPA 2.0, with practically nothing in it except a few lines that parameterize Hibernate. Even org.hibernate.tutorial.em.Event is commented out – why, you’ll understand below. What is of special interest in this file is the persistence-unit name: remember it, as you’ll need it later to configure one of the Spring beans. Let’s review it:

application-context.xml:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:context="http://www.springframework.org/schema/context"
    xmlns:p="http://www.springframework.org/schema/p"
    xmlns:jdbc="http://www.springframework.org/schema/jdbc"
    xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd
        http://www.springframework.org/schema/jdbc http://www.springframework.org/schema/jdbc/spring-jdbc-3.1.xsd">
 
    <context:annotation-config />
 
    <context:component-scan base-package="org.hibernate.tutorial.em" />
 
    <jdbc:embedded-database id="dataSource">
    </jdbc:embedded-database>
 
    <bean id="lcemf"
        class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
        <property name="loadTimeWeaver">
            <bean
                class="org.springframework.instrument.classloading.InstrumentationLoadTimeWeaver" />
        </property>
        <property name="dataSource" ref="dataSource"></property>
        <property name="persistenceUnitName" value="org.hibernate.tutorial.jpa" />
        <property name="persistenceProviderClass" value="org.hibernate.ejb.HibernatePersistence"/>
    </bean>
 
    <bean id="eventDao" class="foo.bar.dao.EventDaoImpl" />
</beans>

Let’s go through it piece by piece, skipping the trivial parts:

<context:component-scan base-package="org.hibernate.tutorial.em" />

This configuration fragment allows Spring, in conjunction with Hibernate, to find the @Entity class on its own. As you remember, we commented its declaration out of persistence.xml.

<jdbc:embedded-database id="dataSource">
</jdbc:embedded-database>

You may not have run into this block before: although it appeared in Spring quite a while ago, it only received special attention in the latest framework versions. This tag automatically initializes a so-called “embedded database”, with HSQLDB / H2 / Derby available as the engine. SpringSource prefers HSQLDB, but you can switch to any of the others quickly enough (how? see chapter 13.8, Embedded database support, of the Spring Reference v3.1). I’ll only add that this solution is by no means meant for production! It is designed to simplify prototyping and testing, and that is exactly how we’ll use it. Once the context has started you get not just a ready “dataSource” bean but a database up and running in memory, fully ready for work.
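
For reference, the same embedded database can also be created programmatically. Below is a minimal sketch using Spring’s EmbeddedDatabaseBuilder (the XML form above is what the example project actually uses; the demo class name is mine, while EmbeddedDatabaseBuilder and friends are the regular Spring 3.x API):

import javax.sql.DataSource;

import org.springframework.jdbc.datasource.embedded.EmbeddedDatabase;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseBuilder;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseType;

public class EmbeddedDataSourceDemo {

    public static void main(String[] args) {
        // programmatic equivalent of <jdbc:embedded-database id="dataSource"/>;
        // change the type to H2 or DERBY to switch the engine
        EmbeddedDatabase db = new EmbeddedDatabaseBuilder()
                .setType(EmbeddedDatabaseType.HSQL)
                .build();

        DataSource dataSource = db; // EmbeddedDatabase is a regular DataSource

        // ... use the dataSource ...

        db.shutdown(); // drops the in-memory database
    }
}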

<bean id="lcemf"
    class="org.springframework.orm.jpa.с">
    ...
    <property name="dataSource" ref="dataSource"></property>
    <property name="persistenceUnitName" value="org.hibernate.tutorial.jpa" />
    <property name="persistenceProviderClass" value="org.hibernate.ejb.HibernatePersistence"/>
</bean>

This configuration block is where the magic of working with Hibernate as a JPA provider in Spring is hidden. Not long ago Spring offered a set of XXXTemplate classes to simplify work with various ORMs, but starting with v3.0 that practice has been discontinued and the JpaTemplate class is already deprecated. Instead, several variants of EntityManagerFactory beans are used. We’ll take the most powerful of them, LocalContainerEntityManagerFactoryBean. As you can see from the context, we pass it the dataSource, the persistenceUnitName and the class implementing the PersistenceProvider interface – in the case of Hibernate 4 this is HibernatePersistence. (Why the guys at JBoss didn’t call it HibernatePersistenceProvider is still a puzzle to me; apparently they get some extra satisfaction out of puzzling their users.)

At the same time, the Spring context also instantiates the implementation of our DAO, foo.bar.dao.EventDao; we’ll get back to it in just a moment:

public interface EventDao {
 
    public abstract Collection<Event> loadEvents();
 
    public abstract void persistEvent(Event event);
 
}

Please find the implementation of this interface in the foo.bar.dao package. A little secret is hidden in it, one that is not presented quite transparently enough in the Spring documentation.

private EntityManagerFactory emf;
 
@PersistenceUnit
public void setEntityManagerFactory(EntityManagerFactory emf) {
    this.emf = emf;
}

If you’ve followed the train of thought attentively, you should have noticed that no bean of type EntityManagerFactory was ever declared in the context – and indeed it couldn’t simply appear out of nowhere. Nevertheless, the LocalContainerEntityManagerFactoryBean instance we created, in conjunction with the @PersistenceUnit annotation, allows Spring to obtain an EntityManagerFactory and inject it into our DAO.

All further work with Hibernate from within the Spring Framework environment is now done transparently, through the standard JPA API.
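
For illustration, here is a minimal sketch of how the implementation in foo.bar.dao might look. The actual EventDaoImpl in the archive may differ in details; this sketch simply uses the EntityManagerFactory injected via @PersistenceUnit directly, with explicit transactions, since no Spring transaction manager is declared in the context above:

package foo.bar.dao;

import java.util.Collection;
import java.util.List;

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.PersistenceUnit;

import org.hibernate.tutorial.em.Event;

public class EventDaoImpl implements EventDao {

    private EntityManagerFactory emf;

    @PersistenceUnit
    public void setEntityManagerFactory(EntityManagerFactory emf) {
        this.emf = emf;
    }

    public Collection<Event> loadEvents() {
        EntityManager em = emf.createEntityManager();
        try {
            List<Event> events = em.createQuery("select e from Event e", Event.class).getResultList();
            return events;
        } finally {
            em.close();
        }
    }

    public void persistEvent(Event event) {
        EntityManager em = emf.createEntityManager();
        try {
            em.getTransaction().begin();
            em.persist(event);
            em.getTransaction().commit();
        } finally {
            em.close();
        }
    }
}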

To make sure the example really works, find the {spring-jpa-hib-example/test} / foo.bar.dao.EventDaoImplTest class and run it as a JUnit test (JUnit 4.x is required).

@Test
public void testBasicUsage() {
 
    EventDao eventDao = this.context.getBean(EventDaoImpl.class);
    Collection<Event> events = eventDao.loadEvents();
    log.info("Events count: " + events.size());
    log.info(events);
 
    // check that there are no events in the fresh db schema
    assertEquals(eventCnt, events.size());
 
    // create and persist new event
    eventDao.persistEvent(new Event( "Our very first event!", new Date() ));
    eventCnt++;
 
    events = eventDao.loadEvents();
    log.info("Events count: " + events.size());
    log.info(events);
 
    // check that there is exactly one event – the one just created
    assertEquals(eventCnt, events.size());
}

If the test completes successfully, you have Hibernate up and running as the JPA provider in a Spring context.

In the following posts I’ll introduce you to other, no less interesting, Spring Framework features.

28
Sep

The Hibernate Dynamic SQL Cache project

Posted by eugene as Hibernate

Description

I have been using this project successfully in several of my own projects, and it has shown good efficiency. The reason for creating it was the inefficient cache mechanism in Hibernate – at least, the developers write about this openly in the documentation:

  • The HQL cache (and likewise the cache of entity-object collections) is invalidated on every data update to any of the tables involved in the query. With frequently updated tables the benefit of such caches drops to zero.
  • The SQL cache keeps only the result of the first query invocation in memory and is never refreshed.

That is why we came up with the idea of a dynamically updated cache: the query result is updated whenever an INSERT or DELETE of an object is performed in the database. To use this solution you need to slightly change the approach to query construction and to the use of collections inside entity objects:

  • The query must return only the object id. Given the id, the object itself can easily be loaded from the cache. (You end up with two cache lookups, which is still always faster than running the query against the database.)
  • The query must use only immutable object fields.
  • Instead of using collections inside entity objects, you should use a corresponding SQL query with dynamic updating.

Absolutely any cache provider can be used (Infinispan, Hazelcast, EHCache …).

The project is published under the Apache License 2.0.

Example of use

Spring is used for clarity in the example, although the project doesn’t depend on it; any other container can be used.

Hibernate configuration. Note the use of a custom hibernate.cache.query_cache_factory – this is where all the cache-update management logic lives:

<session-factory>

    ...
    <property name="hibernate.cache.query_cache_factory">com.corundumstudio.hibernate.dsc.DynamicQueryCacheFactory</property>
    <property name="hibernate.cache.region.factory_class">org.hibernate.cache.infinispan.InfinispanRegionFactory</property>
    <property name="hibernate.cache.use_second_level_cache">true</property>
    <property name="hibernate.cache.use_query_cache">true</property>
    ...

</session-factory>

Let’s register a listener that will track the main operations on all entities:

@Configuration
public class QueryCacheListenerConfig {

    // the Hibernate SessionFactory is needed to reach the event listener registry
    @Autowired
    private SessionFactory sessionFactory;

    @Bean
    public QueryCacheEntityListener createCacheListener() {
        return new QueryCacheEntityListener();
    }

    @PostConstruct
    protected void init() {
        EventListenerRegistry registry = ((SessionFactoryImplementor) sessionFactory)
                .getServiceRegistry().getService(EventListenerRegistry.class);
        registry.getEventListenerGroup(EventType.POST_UPDATE).appendListener(createCacheListener());
        registry.getEventListenerGroup(EventType.POST_INSERT).appendListener(createCacheListener());
        registry.getEventListenerGroup(EventType.POST_DELETE).appendListener(createCacheListener());
    }

}

An example of a DAO service. com.corundumstudio.hibernate.dsc.CacheCallback is the callback that handles the “insert” and “delete” operations on the entity (SimpleEntity in this example), so that the results of the query always stay “fresh”.

@Service
public class SimpleEntityDao {

    private final String queryRegionName = "SimpleEntity_Query";
    private final String query = "SELECT id FROM SimpleEntity WHERE phone = :phone";

    @Autowired
    private QueryCacheEntityListener queryListener;
    @Autowired
    private SessionFactory sessionFactory;

    @PostConstruct
    protected void init() {

        // the cache callback that handles "insert" and "delete" operations on the entity,
        // so that the results of the query are always "fresh"
        CacheCallback<SimpleEntity> handler = new CacheCallback<SimpleEntity>() {

            @Override
            protected void onInsertOrDelete(InsertOrDeleteCommand command,
                    SimpleEntity object) {
                command.setParameter("phone", object.getPhone());
                command.setUniqueResult(object.getId());
            }

        };
        queryListener.register(SimpleEntity.class, queryRegionName, handler);
    }

    @Transactional
    public SimpleEntity getEntityByPhone(String phone) {
        Session session = sessionFactory.getCurrentSession();
        SQLQuery sqlQuery = session.createSQLQuery(query);
        sqlQuery.addScalar("id", LongType.INSTANCE);
        sqlQuery.setCacheable(true);
        sqlQuery.setCacheRegion(queryRegionName);
        sqlQuery.setParameter("phone", phone);
        Long idResult = (Long) sqlQuery.uniqueResult();
        if (idResult == null) {
            return null; // no entity with such a phone
        }
        return (SimpleEntity) session.get(SimpleEntity.class, idResult);
    }

    ...

    create, delete methods...

}

Now, when you call SimpleEntityDao.getEntityByPhone, the database is queried only on the first call. Subsequent calls of the method return the value from the cache; if a SimpleEntity object with the desired phone value is added to the database, it will also show up in the result of this query. Conversely, if the object with that phone value is removed, the query will return null.

If the query returns a list of objects, then on deletion/creation of an object its id is removed from/added to that cached list.

26
Sep

What is BlazeDS

Posted by eugene as Flex

When developing a RIA application in Flex, an HTTP server is needed for data processing. An ordinary Apache server (with PHP installed) will do for this purpose. The server and client sides communicate via HTTP requests, with data transmitted as XML or JSON.

The Flash technology has its own data transfer format – AMF3. The configuration described above can be extended to support this format by installing the AMFPHP library on Apache.

The problem is that PHP is only sufficient for small applications. If you plan to create a larger system, you need to think about switching to a language better suited for it – such as Java. In addition, a great server exists for connecting programs in this language with Flex clients – BlazeDS.

BlazeDS is a web application that runs in a servlet container or a Java application server. BlazeDS is a set of services managed through a JMX agent. Its features are as follows:

  • Invokes remote Java methods at the request of the Flex application.
  • Translates Java objects into AS3 objects when returning the result of a Java method invocation.
  • Translates AS3 objects into Java objects when a remote Java method is called from the Flex application.
  • Manages the connections between the Flex application and the Java application.
  • Delivers data to the client without a request (server push).

Articles describing in detail how to build RIA applications using BlazeDS will appear on this blog soon.

24
Sep

Parameterization of JUnit tests with the help of TwiP

Posted by eugene as Java

The TwiP (Test with Parameters) library allows us to parameterize JUnit tests. It has a couple of advantages over the Theories built into JUnit:

  • it allows different tests and parameters to be parameterized with different sets of values
  • it allows you to define a method that returns a single value or an array of values with which the test should be parameterized

An example of using TwiP is shown below:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
 
import net.sf.twip.TwiP;
import net.sf.twip.Values;
 
import org.junit.Test;
import org.junit.runner.RunWith;
 
@RunWith(TwiP.class)
public class TestWithTwiP
{
    public static Integer[] oddNumbers = {1,3,5,7,9};
    public static Integer[] evenNumbers = {2,4,6,8,10};
 
    public static Integer[] methodNumbers()
    {
        List<Integer> numbers = new ArrayList<Integer>();
        numbers.addAll(Arrays.asList(oddNumbers));
        numbers.addAll(Arrays.asList(evenNumbers));
        return numbers.toArray(new Integer[10]);
    }
 
    public static List<Integer> methodNumbersList()
    {
        List<Integer> numbers = new ArrayList<Integer>();
        numbers.addAll(Arrays.asList(oddNumbers));
        numbers.addAll(Arrays.asList(evenNumbers));
        return numbers;
    }
 
    @Test
    public void testOddNumbers(@Values("oddNumbers") int oddNumber)
    {
        System.out.println("Odd number: " + oddNumber);
    }
 
    @Test
    public void testEvenNumbers(@Values("evenNumbers") int evenNumber)
    {
        System.out.println("Even number: " + evenNumber);
    }
 
    @Test
    public void testTwoArguments(@Values("oddNumbers") int oddNumber, @Values("evenNumbers") int evenNumber)
    {
        System.out.println("Odd number: " + oddNumber);
        System.out.println("Even number: " + evenNumber);
    }
 
    @Test
    public void testMethodArgument(@Values("methodNumbers") int number)
    {
        System.out.println("Number: " + number);
    }
 
    @Test
    public void testMethodArgumentReturnsList(@Values("methodNumbersList") int number)
    {
        System.out.println("Number: " + number);
    }
 
    @Test
    public void testCombinedMethodArgument(@Values("methodNumbers") int number, @Values("oddNumbers") int oddNumber)
    {
        System.out.println("Number: " + number);
        System.out.println("Odd number: " + oddNumber);
    }
}

All methods and variables that act as data sources must be public and static (public static) and must return only reference types; primitives are not allowed.

To specify which data source should be used for a particular parameter, the @Values annotation is used, naming the class variable or method.

If you specify several parameters, the test will be called for every possible combination of them – their Cartesian product. If instead you want the test to be called with all parameters advancing simultaneously (i.e. element 0 from the first source together with element 0 from the second source, and so on), you need to define an extra class that holds the pairs of values (or more) and pass it into the test, as sketched below.
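
A minimal sketch of such a holder class, assuming TwiP can supply the elements of any public static array through @Values (the NumberPair class below is hypothetical and is not part of TwiP):

import net.sf.twip.TwiP;
import net.sf.twip.Values;

import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(TwiP.class)
public class TestWithPairedValues
{
    /** Hypothetical holder that keeps one odd/even pair together. */
    public static class NumberPair
    {
        public final int odd;
        public final int even;

        public NumberPair(int odd, int even)
        {
            this.odd = odd;
            this.even = even;
        }
    }

    // because the values are paired explicitly, the test runs once per element
    // of this array instead of once per Cartesian combination
    public static NumberPair[] pairs = {
        new NumberPair(1, 2),
        new NumberPair(3, 4),
        new NumberPair(5, 6)
    };

    @Test
    public void testPairedNumbers(@Values("pairs") NumberPair pair)
    {
        System.out.println("Odd number: " + pair.odd + ", even number: " + pair.even);
    }
}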

22
Sep

Deployment automation

Posted by eugene as Web Applications

Automation

In this article I want to show how to automate the deployment of a web application (a WAR archive) to an Oracle WebLogic 10.3.x application server using Ant.

WebLogic provides several Ant tasks for this purpose, and that is what we’ll use; so we’ll need either a pre-installed WebLogic, or the necessary libraries taken from an existing installation. I’ll describe how to do that in another article.

Getting started

The typical deployment process looks like the following:

  • Verify if the application is installed on the server
  • If the application is installed, it should be removed
  • Install the new application

WebLogic ships with an Ant task that supports the following actions (http://docs.oracle.com/cd/E14571_01/web.1111/e13706/wldeploy.htm):

  • deploy
  • undeploy
  • redeploy
  • listapps
  • start
  • stop

At first glance it seems that redeploy is all you need, but this doesn’t work if the application was originally installed on the server manually via the administrative console. In that case the server returns an error saying the application is being deployed from a different path than the one currently installed on the server.

That’s why I recommend calling the actions sequentially – undeploy and then deploy – as shown below:

<?xml version="1.0" encoding="UTF-8"?>
<project name="WebLogic Deploy" basedir="." default="deploy">

    <property name="weblogic.application.name" value="MyApplication"/>
    <property name="weblogic.application.archive" value="Application.war"/>
    <property name="weblogic.home" value=""/>
    <property name="weblogic.domain" value="domain"/>
    <property name="weblogic.adminurl" value="t3s://adminurl:7777"/>
    <property name="weblogic.targets" value="target"/>

    <taskdef name="wldeploy" classname="weblogic.ant.taskdefs.management.WLDeploy">
        <classpath>
            <fileset dir="${weblogic.home}" includes="server/lib/weblogic.jar"/>
        </classpath>
    </taskdef>

    <target name="undeploy">
        <wldeploy name="${weblogic.application.name}" action="undeploy" adminurl="${weblogic.adminurl}" userconfigfile="weblogic.id" userkeyfile="weblogic.key" targets="${weblogic.targets}"/>
    </target>

    <target name="deploy" depends="undeploy">
        <wldeploy name="${weblogic.application.name}" source="${weblogic.application.archive}" action="deploy" remote="true" upload="true" adminurl="${weblogic.adminurl}" userconfigfile="weblogic.id" userkeyfile="weblogic.key" targets="${weblogic.targets}"/>
    </target>
</project>

Let’s walk through the parameters:

  • action – the command to perform (deploy, undeploy, start, stop and so on)
  • name – the name of the application we install/remove
  • adminurl – URL of the administration server; note that the protocol is not http:// but t3:// (or t3s:// if you use https://)
  • user – the user name under which we install/remove applications
  • password – the user’s password
  • targets – the list of server (node) or cluster names onto which we install the application. The name can be seen in the administrative console (Environment => Servers, the Name column, or Cluster)
  • source – path to the archive of the application being installed
  • upload – a flag telling wldeploy that the application archive needs to be uploaded to the server; otherwise the archive has to be delivered to the server by other means, for instance via SCP
  • remote – a flag telling wldeploy that the application server is running on another physical machine

Checking for an installed application

The scenario above is only suitable when the application is already installed on the server and, moreover, was installed by wldeploy rather than manually via the administrative console; otherwise WebLogic will refuse to install the application. So what do we do in the case of the very first installation?

To make the installation script work regardless of whether the application is already present, we need to add a check. The sequence of actions then becomes:

  • Verify whether the application is installed on the server
  • If the application is installed, remove the previous version
  • Install the new version of the application

No sooner said than done! For the check we’ll use the wlconfig Ant task, which lets you configure the server (for example, create data sources) and perform various administrative actions, including listing the installed applications.

The modified Ant script is shown below:

<?xml version="1.0" encoding="UTF-8"?>
<project name="WebLogic Deploy" basedir="." default="deploy">

    <property name="weblogic.application.name" value="MyApplication"/>
    <property name="weblogic.application.archive" value="Application.war"/>
    <property name="weblogic.home" value=""/>
    <property name="weblogic.domain" value="domain"/>
    <property name="weblogic.adminurl" value="t3s://adminurl:7777"/>
    <property name="weblogic.targets" value="target"/>

    <taskdef name="wlconfig" classname="weblogic.ant.taskdefs.management.WLConfig">
        <classpath>
            <fileset dir="${weblogic.home}" includes="server/lib/weblogic.jar"/>
        </classpath>
    </taskdef>

    <taskdef name="wldeploy" classname="weblogic.ant.taskdefs.management.WLDeploy">
        <classpath>
            <fileset dir="${weblogic.home}" includes="server/lib/weblogic.jar"/>
        </classpath>
    </taskdef>

    <target name="check">
        <echo>Checking if application is already deployed</echo>
        <wlconfig url="${weblogic.adminurl}" userconfigfile="weblogic.id" userkeyfile="weblogic.key">
            <query pattern="bwag:Name=MyApplication,Type=Application" property="weblogic.application.installed"/>
        </wlconfig>
        <echo message="${weblogic.application.installed}"/>
    </target>

    <target name="undeploy" depends="check" if="weblogic.application.installed">
        <wldeploy name="${weblogic.application.name}" action="undeploy" adminurl="${weblogic.adminurl}" userconfigfile="weblogic.id" userkeyfile="weblogic.key" targets="${weblogic.targets}"/>
    </target>

    <target name="deploy" depends="undeploy">
        <wldeploy name="${weblogic.application.name}" source="${weblogic.application.archive}" action="deploy" remote="true" upload="true" adminurl="${weblogic.adminurl}" userconfigfile="weblogic.id" userkeyfile="weblogic.key" targets="${weblogic.targets}"/>
    </target>
</project>

To check for the presence of the installed application, wlconfig runs the following query:

bwag:Name=MyApplication,Type=Application

If the application is installed, the weblogic.application.installed property gets set; it determines whether the undeploy target needs to run or not (see the if attribute of the undeploy target in the example above).

20
Sep

Generating random numbers using Random.org

Posted by eugene as Java

Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin. – John von Neumann

There is a nice service, random.org, which has been mentioned on Habr more than once. The main purpose of the site is generating random numbers from atmospheric noise. On the same site you can find test results and a comparison of true random and pseudo-random generators, with an explanation of which is better and why. This article describes a simple library for using the site’s API.

Random.org

There are plenty of useful functions that rely on random number generation: coin tossing, dice, card shuffling, lottery draws, sound generation, bitmaps and so on. There is also custom generation with a predefined distribution. In general, none of this is difficult, but the interesting part is that the generation is performed using atmospheric noise, which somehow magically gives better randomness than Random.nextInt(). So I thought it would be nice to have a library with such an API at hand, and I decided to write one.

Search

Before writing anything, it’s worth searching – perhaps someone has already done it. Yes. They have.

  • Simple Random.Org Java Api – a simple lib with a method for generating integers, but it pulls in the Apache HTTP Client dependency, adding quite a few kilobytes.
  • Java TRNG client – here things are more serious: 40 kilobytes and number generation via two(!) websites. The drawback is that the library was written for cryptography, so everything is handled in bits and bytes and it all gets rather complicated.

So I decided to write my own.

API

Random.org exposes a primitive HTTP GET API, but nothing more is needed. In total there are four kinds of operations.

Integer Generator

Generates random integers in a predefined range. For example, here is what a request for throwing two dice looks like:

http://www.random.org/integers/?num=2&min=1&max=6&col=1&base=10&format=plain&rnd=new
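
Just to show how simple the protocol is, here is a plain-Java sketch of calling this endpoint and parsing the response. This is not the library’s code, merely an illustration using java.net.URL; the demo class name is made up:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URL;

public class DiceRollDemo {

    public static void main(String[] args) throws IOException {
        // num=2 integers between min=1 and max=6, plain-text response, one value per line
        URL url = new URL("http://www.random.org/integers/"
                + "?num=2&min=1&max=6&col=1&base=10&format=plain&rnd=new");

        BufferedReader reader = new BufferedReader(new InputStreamReader(url.openStream()));
        try {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println("Rolled: " + Integer.parseInt(line.trim()));
            }
        } finally {
            reader.close();
        }
    }
}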

Sequence Generator

Generates a sequence containing all the unique integers in a predefined range – basically what Collections.shuffle() does. For example, here is what a request for shuffling a deck of cards looks like:

http://www.random.org/sequences/?min=1&max=52&col=1&format=plain&rnd=new

String Generator

Generates a random string of a predefined length, with the option of choosing the character set (digits, lower case, upper case). Here is how you can generate, say, a nickname or a password:

http://www.random.org/strings/?num=1&len=12&digits=on&upperalpha=on&loweralpha=on&unique=on&format=plain&rnd=new

Quota Checker

Well, as you might guess, this is not all free. They do provide millions of free bits per day, though, which is actually more than enough. You can find out how much you have left by following this link:

http://www.random.org/quota/?format=plain

If you clicked the three previous links, you’ve already spent ~1500 bits.

Errors

On success the server returns code 200; on failure, code 503. That is the complete list of errors.

For this API I wrote a library of five Java classes, in which all the calls above are available in a simple and comprehensible form.

// dice
IntegerGenerator ig = new IntegerGenerator();
ig.generate(1, 6, 2);
// card shuffling
SequenceGenerator sg = new SequenceGenerator();
sg.generate(1, 52);
// new password
StringGenerator strg = new StringGenerator();
strg.generate(12, 1, true, true, true, true);
// how many bits are left
QuotaChecker qc = new QuotaChecker();
qc.quota();

And that’s about it. On GitHub you can find the sources and download the library under its original name, randomorg (6 kilobytes).

20
Sep

SQL. How to select something that doesn’t exist.

Posted by eugene as No-SQL Databases

I ran into a problem. Suppose we have DUMMY_TABLE. How do I select a field from the table by id, or the string ‘NOT EXISTS’ if there is no such record? At the same time, only SQL may be used – no procedural programming.

As a result I ended up with the following query:

SELECT ID from DUMMY_TABLE
  where id=1
UNION
SELECT
 'NOT EXISTS' from DUAL WHERE NOT EXISTS(
    select ID from DUMMY_TABLE WHERE ID=1)

It works! But how could it be written more elegantly?

20
Sep

Apache Maven – Web Application

Posted by eugene as Web Applications

This post assumes you already know what Maven is and want to build a simple modular web application (if not, read up on the basics first). The theme of this post is how to configure pom.xml, add modules to the project, connect the plugins, and deploy to the Apache Tomcat application server.

Intro

Let’s take a simple web application as the example. It consists of the modules: model (Entity and DAO classes), service (services), web (servlets). The project and each of its modules contain their own pom.xml. Eclipse will serve as the IDE, though everything holds equally for NetBeans and IDEA. It is assumed that the Maven plugin and the Tomcat server are already set up in the IDE.

Our steps:

  • create the Maven project
  • create the Maven module
  • set up Maven and Tomcat
  • deploy on the server

Creating the Maven project

Let’s create a Maven Project in our IDE. It will ask for options: choose Skip archetype selection, Group Id = com.mycompany, Artifact Id = myproject.

Connecting the modules:

<groupId>com.mycompany</groupId>
<artifactId>myproject</artifactId>
<version>0.0.1-SNAPSHOT</version>
<modules>
  <module>model</module>
  <module>service</module>
  <module>web-servlet</module>
</modules>

By default Maven targets JRE version 1.4; we need 1.6, so let’s configure the compiler plugin:

<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <version>2.5.1</version>
      <configuration>
        <source>1.6</source>
        <target>1.6</target>
      </configuration>
    </plugin>
  </plugins>
</build>

A module can depend on other libraries; it is good practice to specify the dependency versions in the project’s dependencyManagement section rather than in the module’s pom.xml:

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>javax.servlet</groupId>
      <artifactId>javax.servlet-api</artifactId>
      <version>3.1-b01</version>
    </dependency>
  </dependencies>
</dependencyManagement>

Creating a Maven module

Right-click the project and create a Maven Module. In the module’s pom.xml we point to the parent project:

<packaging>war</packaging>
<parent>
  <groupId>com.mycompany</groupId>
  <artifactId>myproject</artifactId>
  <version>0.0.1-SNAPSHOT</version>
</parent>

You can also list our own modules in the dependencies (in this case service). Again, if you’ve specified the library version in the parent’s dependencyManagement, you can (and preferably should) omit the version in the module’s pom.xml:

<dependencies>
  <dependency>
    <groupId>com.mycompany</groupId>
    <artifactId>service</artifactId>
    <version>0.0.1-SNAPSHOT</version>
  </dependency>
  <dependency>
    <groupId>javax.servlet</groupId>
    <artifactId>javax.servlet-api</artifactId>
  </dependency>
</dependencies>

We’ll also need a plugin in order to deploy the application to the server by running a Maven build command from the IDE. Note that the url line differs between Tomcat 6 and 7:

<build>
  <finalName>project</finalName>
  <plugins>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>tomcat-maven-plugin</artifactId>
      <version>1.1</version>
      <configuration>
        <url>http://localhost:8080/manager/html</url><!-- for Tomcat 7 -->
        <!-- <url>http://localhost:8080/manager</url> for Tomcat 6 -->
        <server>tomcatServer</server>
        <path>/project</path>
      </configuration>
    </plugin>
  </plugins>
</build>

Configuring Maven, Tomcat

The Maven plugin needs to know the Tomcat login and password in order to connect to the Tomcat manager. Open (or create, if it doesn’t exist) the settings.xml file in the home directory under /.m2/, then do IDE -> Maven -> User Settings -> Update Settings. Here is settings.xml:

<settings>
  <servers>
    <server>
      <id>tomcatServer</id>
      <username>admin</username>
      <password>password</password>
    </server>
  </servers>
</settings>

Let’s grant manager rights in Tomcat’s conf/tomcat-users.xml file:

<tomcat-users>
  <role rolename="manager"/>
  <role rolename="manager-gui"/>
  <role rolename="admin"/>
  <user username="admin" password="password" roles="admin,manager,manager-gui"/>
</tomcat-users>

Note that the IDE can start Tomcat either with settings from the workspace or from the Tomcat directory. We chose the second option, so set the server parameters in the IDE: Server Location -> Use Tomcat installation.

Deploy on the server

Let’s start our Tomcat server from the IDE and set up a Maven run configuration for the module: module -> Run As -> Maven build, add tomcat:redeploy to the Goals line and run it. The module is now deployed at localhost:8080/project/. Everybody dance and sing. In Eclipse it’s quite convenient to deploy using the Shift+Alt+X, M shortcut.

18
Sep

The Java .class file, or which Java version a class was compiled for

Posted by eugene as Java

It’s often necessary to find out the target version of a binary .class file. For example, the web/application server runs under Java 5 while locally we have Java 6 – a problem like that can be difficult to track down.

With the .class file and the standard JDK tools at hand, you can quickly get detailed information about the compiled class. In the jdk/bin folder you’ll find an excellent tool, javap.

With its help, run the following in the command line:

javap -verbose MyClass

You will get the result:

Compiled from "MyClass.java"
public class MyClass extends java.lang.Object
SourceFile: "MyClass.java"
minor version: 0
major version: 50
Constant pool:
const #1 = class #2; // MyClass
const #2 = Asciz MyClass;
const #3 = class #4; // java/lang/Object
const #4 = Asciz java/lang/Object;

We need the following line:

major version : 50

To determine which Java version the class was compiled for, use the table below:

major  minor  Java platform version
45     3      1.0
45     3      1.1
46     0      1.2
47     0      1.3
48     0      1.4
49     0      1.5
50     0      1.6
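
If you need the same check programmatically, the version can be read directly from the class-file header: it starts with the 0xCAFEBABE magic number, followed by the minor and major version numbers. A minimal sketch (the class name is made up):

import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;

public class ClassVersion {

    public static void main(String[] args) throws IOException {
        DataInputStream in = new DataInputStream(new FileInputStream(args[0]));
        try {
            int magic = in.readInt();           // must be 0xCAFEBABE for a valid .class file
            int minor = in.readUnsignedShort(); // minor version
            int major = in.readUnsignedShort(); // major version (50 = Java 6, 49 = Java 5, ...)

            if (magic != 0xCAFEBABE) {
                System.out.println(args[0] + " is not a .class file");
                return;
            }
            System.out.println("major version: " + major + ", minor version: " + minor);
        } finally {
            in.close();
        }
    }
}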