Jersey filter/interceptor binding

There are mainly three ways to define a filter/interceptor:

1. Global binding
Annotate the filter with @Provider and implement ClientRequestFilter/ClientResponseFilter or ContainerRequestFilter/ContainerResponseFilter; this applies to all requests/responses.

2. Named binding
Create a new annotation meta-annotated with @NameBinding, then place it on both the custom filter and the resources, binding them together (a sketch follows below).

3. Dynamic binding
Implement DynamicFeature, which inspects each resource and registers/provides the corresponding filters for that resource.
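A minimal named-binding sketch under JAX-RS 2.x; the @Logged annotation, filter, and resource names here are illustrative, not from the referenced article:

import java.io.IOException;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import javax.ws.rs.GET;
import javax.ws.rs.NameBinding;
import javax.ws.rs.Path;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.ext.Provider;

// 1. define the binding annotation
@NameBinding
@Retention(RetentionPolicy.RUNTIME)
@interface Logged {}

// 2. annotate the custom filter with it
@Logged
@Provider
class LoggingFilter implements ContainerRequestFilter {
    @Override
    public void filter(ContainerRequestContext ctx) throws IOException {
        System.out.println("request: " + ctx.getUriInfo().getPath());
    }
}

// 3. annotate only the resource methods (or classes) that should be filtered
@Path("/trades")
class TradeResource {
    @GET
    @Logged // LoggingFilter runs for this method only
    public String list() {
        return "trades";
    }
}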

ref: https://dzone.com/articles/binding-strategies-for-jax-rs-filters-andintercept

Another angle: imperative/procedural vs. functional/declarative

Quoting https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/concepts/linq/functional-programming-vs-imperative-programming:

Transitioning for OOP Developers
In traditional object-oriented programming (OOP), most developers are accustomed to programming in the imperative/procedural style. To switch to developing in a pure functional style, they have to make a transition in their thinking and their approach to development.
To solve problems, OOP developers design class hierarchies, focus on proper encapsulation, and think in terms of class contracts. The behavior and state of object types are paramount, and language features, such as classes, interfaces, inheritance, and polymorphism, are provided to address these concerns.
In contrast, functional programming approaches computational problems as an exercise in the evaluation of pure functional transformations of data collections. Functional programming avoids state and mutable data, and instead emphasizes the application of functions.
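To make the contrast concrete, here is a minimal Java sketch (Java 8 streams for the functional side) computing the same result both ways:

import java.util.Arrays;
import java.util.List;

public class StyleDemo {
    public static void main(String[] args) {
        List<Integer> prices = Arrays.asList(10, 25, 40, 5);

        // Imperative: mutate an accumulator step by step
        int total = 0;
        for (int p : prices) {
            if (p > 20) total += p;
        }

        // Functional/declarative: describe the transformation, no mutable state
        int total2 = prices.stream()
                           .filter(p -> p > 20)
                           .mapToInt(Integer::intValue)
                           .sum();

        System.out.println(total + " == " + total2); // 65 == 65
    }
}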

The RPC didn’t feel so long ago

It doesn't seem like a century ago that stubs and skeletons were widely used, and WSDL2Java & Java2WSDL were pretty convenient and "cool" (see the sketch after the steps below):

  1. A Java program executes a method on a stub (local object representing the remote service)
  2. The stub executes routines in the JAX-RPC Runtime System (RS)
  3. The RS converts the remote method invocation into a SOAP message
  4. The RS transmits the message as an HTTP request
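A sketch of what step 1 looks like in client code, assuming WSDL2Java has generated a service locator and stub; all of the type and method names here are hypothetical:

// Hypothetical JAX-RPC client: the locator and stub classes would be
// generated from the WSDL by WSDL2Java.
TradingService_Service locator = new TradingService_ServiceLocator();
TradingService stub = locator.getTradingServicePort(); // local stub for the remote service

// Invoking the stub hands the call to the JAX-RPC runtime, which builds
// the SOAP message and sends it as an HTTP request (steps 2-4).
String confirmation = stub.placeTrade("IBM", 100);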

JAX-RPC is now optional in Java EE 7, but it's good to see it's still there.

https://java.net/projects/jax-rpc/

JVM 7

http://docs.oracle.com/javase/7/docs/webnotes/tsg/TSG-VM/html/memleaks.html

3.1.1 Detail Message: Java heap space

The detail message Java heap space indicates that an object could not be allocated in the Java heap. This error does not necessarily imply a memory leak. The problem can be as simple as a configuration issue, where the specified heap size (or the default size, if not specified) is insufficient for the application.

In other cases, and in particular for a long-lived application, the message might be an indication that the application is unintentionally holding references to objects, and this prevents the objects from being garbage collected. This is the Java language equivalent of a memory leak. Note that APIs that are called by an application could also be unintentionally holding object references.
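A minimal sketch of the "unintentionally holding references" case: a static collection that only ever grows keeps every entry strongly reachable, so the garbage collector can never reclaim it.

import java.util.ArrayList;
import java.util.List;

public class LeakyCache {
    // Entries are added but never removed, so they stay reachable forever.
    private static final List<byte[]> CACHE = new ArrayList<>();

    public static void handleRequest() {
        CACHE.add(new byte[1024 * 1024]); // eventually: java.lang.OutOfMemoryError: Java heap space
    }
}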

3.1.2 Detail Message: PermGen space

The detail message PermGen space indicates that the permanent generation is full. The permanent generation is the area of the heap where class and method objects are stored. If an application loads a very large number of classes, then the size of the permanent generation might need to be increased using the -XX:MaxPermSize option.

Interned java.lang.String objects are no longer stored in the permanent generation. The java.lang.String class maintains a pool of strings. When the intern method is invoked, the method checks the pool to see if an equal string is already in the pool. If there is, then the intern method returns it; otherwise it adds the string to the pool. In more precise terms, the java.lang.String.intern method is used to obtain the canonical representation of the string; the result is a reference to the same class instance that would be returned if that string appeared as a literal.
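A quick illustration of the canonical-representation behavior described above:

String a = new String("hello");   // a distinct instance on the heap
String b = a.intern();            // the canonical instance from the pool

System.out.println(a == "hello"); // false: different instances
System.out.println(b == "hello"); // true: same instance as the literal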

When this kind of error occurs, the text ClassLoader.defineClass might appear near the top of the stack trace that is printed.

A very good article on Spring @Transactional, especially the read-only flag:

http://www.ibm.com/developerworks/java/library/j-ts1.html


The most common reason for using transactions in an application is to maintain a high degree of data integrity and consistency. If you're unconcerned about the quality of your data, you needn't concern yourself with transactions. After all, transaction support in the Java platform can kill performance, introduce locking issues and database concurrency problems, and add complexity to your application.
About this series
Transactions improve the quality, integrity, and consistency of your data and make your applications more robust. Implementation of successful transaction processing in Java applications is not a trivial exercise, and it's about design as much as about coding. In this new series, Mark Richards is your guide to designing an effective transaction strategy for use cases ranging from simple applications to high-performance transaction processing.
But developers who don't concern themselves with transactions do so at their own peril. Almost all business-related applications require a high degree of data quality. The financial investment industry alone wastes tens of billions of dollars on failed trades, with bad data being the second-leading cause (see Resources). Although lack of transaction support is only one factor leading to bad data (albeit a major one), a safe inference is that billions of dollars are wasted in the financial investment industry alone as a result of nonexistent or poor transaction support.
Ignorance about transaction support is another source of problems. All too often I hear claims like "we don't need transaction support in our applications because they never fail." Right. I have witnessed some applications that in fact rarely or never throw exceptions. These applications bank on well-written code, well-written validation routines, and full testing and code coverage support to avoid the performance costs and complexity associated with transaction processing. The problem with this type of thinking is that it takes into account only one characteristic of transaction support: atomicity. Atomicity ensures that all updates are treated as a single unit and are either all committed or all rolled back. But rolling back or coordinating updates isn't the only aspect of transaction support. Another aspect, isolation, ensures that one unit of work is isolated from other units of work. Without proper transaction isolation, other units of work can access updates made by an ongoing unit of work, even though that unit of work is incomplete. As a result, business decisions might be made on the basis of partial data, which could cause failed trades or other negative (or costly) outcomes.
Better late than never
I started to appreciate the problems with transaction processing in early 2000 when, while working at a client site, I noticed a line item on the project plan right above the system testing task. It read implement transaction support. Sure, easy enough to add transaction support to a major application when it is almost ready for system testing, right? Unfortunately, this approach is all too common. At least this project, unlike most, was implementing transaction support, albeit at the end of the development cycle.
So, given the high cost and negative impact of bad data and the basic knowledge that transactions are important (and necessary), you need to use transactions and learn how to deal with the issues that can arise. You press on and add transaction support to your applications. And that's where the problem often begins. Transactions don't always seem to work as promised in the Java platform. This article is an exploration of the reasons why. With the help of code examples, I'll introduce some of the common transaction pitfalls I continually see and experience in the field, in most cases in production environments.
Although most of this article's code examples use the Spring Framework (version 2.5), the transaction concepts are the same as for the EJB 3.0 specification. In most cases, it is simply a matter of replacing the Spring Framework @Transactional annotation with the @TransactionAttribute annotation found in the EJB 3.0 specification. Where the two frameworks differ in concept and technique, I have included both Spring Framework and EJB 3.0 source code examples.
Local transaction pitfalls
A good place to start is with the easiest scenario: the use of local transactions, also commonly referred to as database transactions. In the early days of database persistence (for example, JDBC), we commonly delegated transaction processing to the database. After all, isn't that what the database is supposed to do? Local transactions work fine for logical units of work (LUW) that perform a single insert, update, or delete statement. For example, consider the simple JDBC code in Listing 1, which performs an insert of a stock-trade order to a TRADE table:

Listing 1. Simple database insert using JDBC

@Stateless
public class TradingServiceImpl implements TradingService {
   @Resource SessionContext ctx;
   @Resource(mappedName="java:jdbc/tradingDS") DataSource ds;

   public long insertTrade(TradeData trade) throws Exception {
      Connection dbConnection = ds.getConnection();
      try {
         Statement sql = dbConnection.createStatement();
         String stmt =
            "INSERT INTO TRADE (ACCT_ID, SIDE, SYMBOL, SHARES, PRICE, STATE)"
          + "VALUES ("
          + trade.getAcct() + "','"
          + trade.getAction() + "','"
          + trade.getSymbol() + "',"
          + trade.getShares() + ","
          + trade.getPrice() + ",'"
          + trade.getState() + "')";
         sql.executeUpdate(stmt, Statement.RETURN_GENERATED_KEYS);
         ResultSet rs = sql.getGeneratedKeys();
         if (rs.next()) {
            return rs.getBigDecimal(1).longValue();
         } else {
            throw new Exception("Trade Order Insert Failed");
         }
      } finally {
         if (dbConnection != null) dbConnection.close();
      }
   }
}

The JDBC code in Listing 1 includes no transaction logic, yet it persists the trade order in the TRADE table in the database. In this case, the database handles the transaction logic.
This is all well and good for a single database maintenance action in the LUW. But suppose you need to update the account balance at the same time you insert the trade order into the database, as shown in Listing 2:

Listing 2. Performing multiple table updates in the same method
 
public TradeData placeTrade(TradeData trade) throws Exception {
   try {
      insertTrade(trade);
      updateAcct(trade);
      return trade;
   } catch (Exception up) {
      //log the error
      throw up;
   }
} 

In this case, the insertTrade() and updateAcct() methods use standard JDBC code without transactions. Once the insertTrade() method ends, the database has persisted (and committed) the trade order. If the updateAcct() method should fail for any reason, the trade order would remain in the TRADE table at the end of the placeTrade() method, resulting in inconsistent data in the database. If the placeTrade() method had used transactions, both of these activities would have been included in a single LUW, and the trade order would have been rolled back if the account update failed.
With the popularity of Java persistence frameworks like Hibernate, TopLink, and the Java Persistence API (JPA) on the rise, we rarely write straight JDBC code anymore. More commonly, we use the newer object-relational mapping (ORM) frameworks to make our lives easier by replacing all of that nasty JDBC code with a few simple method calls. For example, to insert the trade order from the JDBC code example in Listing 1, using the Spring Framework with JPA, you'd map the TradeData object to the TRADE table and replace all of that JDBC code with the JPA code in Listing 3:

Listing 3. Simple insert using JPA
 
public class TradingServiceImpl {
    @PersistenceContext(unitName="trading") EntityManager em;

    public long insertTrade(TradeData trade) throws Exception {
       em.persist(trade);
       return trade.getTradeId();
    }
}

Notice that Listing 3 invokes the persist() method on the EntityManager to insert the trade order. Simple, right? Not really. This code will not insert the trade order into the TRADE table as expected, nor will it throw an exception. It will simply return a value of 0 as the key to the trade order without changing the database. This is one of the first major pitfalls of transaction processing: ORM-based frameworks require a transaction in order to trigger the synchronization between the object cache and the database. It is through a transaction commit that the SQL code is generated and the database affected by the desired action (that is, insert, update, delete). Without a transaction there is no trigger for the ORM to generate SQL code and persist the changes, so the method simply ends — no exceptions, no updates. If you are using an ORM-based framework, you must use transactions. You can no longer rely on the database to manage the connections and commit the work.
These simple examples should make it clear that transactions are necessary in order to maintain data integrity and consistency. But they only begin to scratch the surface of the complexity and pitfalls associated with implementing transactions in the Java platform.
Spring Framework @Transactional annotation pitfalls
So, you test the code in Listing 3 and discover that the persist() method didn't work without a transaction. As a result, you view a few links from a simple Internet search and find that with the Spring Framework, you need to use the @Transactional annotation. So you add the annotation to your code as shown in Listing 4:

Listing 4. Using the @Transactional annotation

public class TradingServiceImpl {
   @PersistenceContext(unitName="trading") EntityManager em;

   @Transactional
   public long insertTrade(TradeData trade) throws Exception {
      em.persist(trade);
      return trade.getTradeId();
   }
}

You retest your code, and you find it still doesn't work. The problem is that you must tell the Spring Framework that you are using annotations for your transaction management. Unless you are doing full unit testing, this pitfall is sometimes hard to discover. It usually leads to developers simply adding the transaction logic in the Spring configuration files rather than through annotations.
When using the @Transactional annotation in Spring, you must add the following line to your Spring configuration file:

<tx:annotation-driven transaction-manager="transactionManager"/>
The transaction-manager property holds a reference to the transaction manager bean defined in the Spring configuration file. This code tells Spring to use the @Transactional annotation when applying the transaction interceptor. Without it, the @Transactional annotation is ignored, resulting in no transaction being used in your code.
Getting the basic @Transactional annotation to work in the code in Listing 4 is only the beginning. Notice that Listing 4 uses the @Transactional annotation without specifying any additional annotation parameters. I've found that many developers use the @Transactional annotation without taking the time to understand fully what it does. For example, when using the @Transactional annotation by itself as I do in Listing 4, what is the transaction propagation mode set to? What is the read-only flag set to? What is the transaction isolation level set to? More important, when should the transaction roll back the work? Understanding how this annotation is used is important to ensuring that you have the proper level of transaction support in your application. To answer the questions I've just asked: when using the @Transactional annotation by itself without any parameters, the propagation mode is set to REQUIRED, the read-only flag is set to false, the transaction isolation level is set to the database default (usually READ_COMMITTED), and the transaction will not roll back on a checked exception.
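In other words, the bare annotation in Listing 4 behaves as if these defaults were spelled out explicitly; a sketch (the rollback behavior has no corresponding parameter, so it is noted in a comment):

@Transactional(
   propagation = Propagation.REQUIRED,
   readOnly = false,
   isolation = Isolation.DEFAULT) // the database default, usually READ_COMMITTED
// note: checked exceptions still do NOT trigger a rollback
public long insertTrade(TradeData trade) throws Exception {
   em.persist(trade);
   return trade.getTradeId();
}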
@Transactional read-only flag pitfalls
A common pitfall I frequently come across in my travels is the improper use of the read-only flag on the Spring @Transactional annotation. Here is a quick quiz for you: When using standard JDBC code for Java persistence, what does the @Transactional annotation in Listing 5 do when the read-only flag is set to true and the propagation mode set to SUPPORTS?

Listing 5. Using read-only with SUPPORTS propagation mode — JDBC

@Transactional(readOnly = true, propagation=Propagation.SUPPORTS)
public long insertTrade(TradeData trade) throws Exception {
   //JDBC Code...
}

When the insertTrade() method in Listing 5 executes, does it:
A. Throw a read-only connection exception
B. Correctly insert the trade order and commit the data
C. Do nothing because the propagation level is set to SUPPORTS
Give up? The correct answer is B. The trade order is correctly inserted into the database, even though the read-only flag is set to true and the transaction propagation set to SUPPORTS. But how can that be? No transaction is started because of the SUPPORTS propagation mode, so the method effectively uses a local (database) transaction. The read-only flag is applied only if a transaction is started. In this case, no transaction was started, so the read-only flag is ignored.
Okay, so if that is the case, what does the @Transactional annotation do in Listing 6 when the read-only flag is set and the propagation mode is set to REQUIRED?

Listing 6. Using read-only with REQUIRED propagation mode — JDBC
 
@Transactional(readOnly = true, propagation=Propagation.REQUIRED)
public long insertTrade(TradeData trade) throws Exception {
   //JDBC code...
}

When executed, does the insertTrade() method in Listing 6:
A. Throw a read-only connection exception
B. Correctly insert the trade order and commit the data
C. Do nothing because the read-only flag is set to true
This one should be easy to answer given the prior explanation. The correct answer here is A. An exception will be thrown, indicating that you are trying to perform an update operation on a read-only connection. Because a transaction is started (REQUIRED), the connection is set to read-only. Sure enough, when you try to execute the SQL statement, you get an exception telling you that the connection is a read-only connection.
The odd thing about the read-only flag is that you need to start a transaction in order to use it. Why would you need a transaction if you are only reading data? The answer is that you don't. Starting a transaction to perform a read-only operation adds to the overhead of the processing thread and can cause shared read locks on the database (depending on what type of database you are using and what the isolation level is set to). The bottom line is that the read-only flag is somewhat meaningless when you use it for JDBC-based Java persistence and causes additional overhead when an unnecessary transaction is started.
What about when you use an ORM-based framework? In keeping with the quiz format, can you guess what the result of the @Transactional annotation in Listing 7 would be if the insertTrade() method were invoked using JPA with Hibernate?

Listing 7. Using read-only with REQUIRED propagation mode — JPA

@Transactional(readOnly = true, propagation=Propagation.REQUIRED)
public long insertTrade(TradeData trade) throws Exception {
   em.persist(trade);
   return trade.getTradeId();
}

Does the insertTrade() method in Listing 7:
A. Throw a read-only connection exception
B. Correctly insert the trade order and commit the data
C. Do nothing because the readOnly flag is set to true
The answer to this question is a bit more tricky. In some cases the answer is C, but in most cases (particularly when using JPA) the answer is B. The trade order is correctly inserted into the database without error. Wait a minute — the preceding example shows that a read-only connection exception would be thrown when the REQUIRED propagation mode is used. That is true when you use JDBC. However, when you use an ORM-based framework, the read-only flag works a bit differently. When you are generating a key on an insert, the ORM framework will go to the database to obtain the key and subsequently perform the insert. For some vendors, such as Hibernate, the flush mode will be set to MANUAL, and no insert will occur for inserts with non-generated keys. The same holds true for updates. However, other vendors, like TopLink, will always perform inserts and updates when the read-only flag is set to true. Although this is both vendor and version specific, the point here is that you cannot be guaranteed that the insert or update will not occur when the read-only flag is set, particularly when using JPA as it is vendor-agnostic.
Which brings me to another major pitfall I frequently encounter. Given all you've read so far, what do you suppose the code in Listing 8 would do if you only set the read-only flag on the @Transactional annotation?

Listing 8. Using read-only — JPA

@Transactional(readOnly = true)
public TradeData getTrade(long tradeId) throws Exception {
   return em.find(TradeData.class, tradeId);
}

Does the getTrade() method in Listing 8:
A. Start a transaction, get the trade order, then commit the transaction
B. Get the trade order without starting a transaction
Never say never
At certain times you may want to start a transaction for a database read operation; for example, when isolating your read operations for consistency or setting a specific transaction isolation level for the read operation. However, these situations are rare in business applications, and unless you're faced with one, you should avoid starting a transaction for database read operations, as they are unnecessary and can lead to database deadlocks, poor performance, and poor throughput.
The correct answer here is A. A transaction is started and committed. Don't forget: the default propagation mode for the @Transactional annotation is REQUIRED. This means that a transaction is started when in fact one is not required (see Never say never). Depending on the database you are using, this can cause unnecessary shared locks, resulting in possible deadlock situations in the database. In addition, unnecessary processing time and resources are being consumed starting and stopping the transaction. The bottom line is that when you use an ORM-based framework, the read-only flag is quite useless and in most cases is ignored. But if you still insist on using it, always set the propagation mode to SUPPORTS, as shown in Listing 9, so no transaction is started:

Listing 9. Using read-only and SUPPORTS propagation mode for select operation

@Transactional(readOnly = true, propagation=Propagation.SUPPORTS)
public TradeData getTrade(long tradeId) throws Exception {
   return em.find(TradeData.class, tradeId);
}

Better yet, just avoid using the @Transactional annotation altogether when doing read operations, as shown in Listing 10:

Listing 10. Removing the @Transactional annotation for select operations

public TradeData getTrade(long tradeId) throws Exception {
   return em.find(TradeData.class, tradeId);
} 

REQUIRES_NEW transaction attribute pitfalls
Whether you're using the Spring Framework or EJB, use of the REQUIRES_NEW transaction attribute can have negative results and lead to corrupt and inconsistent data. The REQUIRES_NEW transaction attribute always starts a new transaction when the method is started, whether or not an existing transaction is present. Many developers use the REQUIRES_NEW attribute incorrectly, assuming it is the correct way to make sure that a transaction is started. Consider the two methods in Listing 11:

Listing 11. Using the REQUIRES_NEW transaction attribute

@Transactional(propagation=Propagation.REQUIRES_NEW)
public long insertTrade(TradeData trade) throws Exception {...}

@Transactional(propagation=Propagation.REQUIRES_NEW)
public void updateAcct(TradeData trade) throws Exception {...}

Notice in Listing 11 that both of these methods are public, implying that they can be invoked independently of each other. Problems occur with the REQUIRES_NEW attribute when methods using it are invoked within the same logical unit of work via inter-service communication or through orchestration. For example, suppose in Listing 11 that you can invoke the updateAcct() method independently of any other method in some use cases, but there's also the case where updateAcct() is invoked from the insertTrade() method. Now, if an exception occurs after the updateAcct() method call, the trade order would be rolled back, but the account updates would be committed to the database, as shown in Listing 12:

Listing 12. Multiple updates using the REQUIRES_NEW transaction attribute

@Transactional(propagation=Propagation.REQUIRES_NEW)
public long insertTrade(TradeData trade) throws Exception {
   em.persist(trade);
   updateAcct(trade);
   //exception occurs here! Trade rolled back but account update is not!
   ...
}

This happens because a new transaction is started in the updateAcct() method, so that transaction commits once the updateAcct() method ends. When you use the REQUIRES_NEW transaction attribute, if an existing transaction context is present, the current transaction is suspended and a new transaction started. Once that method ends, the new transaction commits and the original transaction resumes.
Because of this behavior, the REQUIRES_NEW transaction attribute should be used only if the database action in the method being invoked needs to be saved to the database regardless of the outcome of the overlaying transaction. For example, suppose that every stock trade that was attempted had to be recorded in an audit database. This information needs to be persisted whether or not the trade failed because of validation errors, insufficient funds, or some other reason. If you did not use the REQUIRES_NEW attribute on the audit method, the audit record would be rolled back along with the attempted trade. Using the REQUIRES_NEW attribute guarantees that the audit data is saved regardless of the initial transaction's outcome. The main point here is always to use either the MANDATORY or REQUIRED attribute instead of REQUIRES_NEW unless you have a reason to use it, for reasons similar to those of the audit example.
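A sketch of that audit case, with illustrative names; the audit record commits in its own transaction and survives a rollback of the calling trade transaction:

@Transactional(propagation = Propagation.REQUIRES_NEW)
public void auditTradeAttempt(TradeData trade) {
   // Persisted in a new, independent transaction that commits when this
   // method ends, regardless of the outcome of the caller's transaction.
   em.persist(new TradeAuditRecord(trade)); // TradeAuditRecord is hypothetical
}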
Transaction rollback pitfalls
I've saved the most common transaction pitfall for last. Unfortunately, I see this one in production code more times than not. I'll start with the Spring Framework and then move on to EJB 3.
So far, the code you have been looking at looks something like Listing 13:

Listing 13. No rollback support

@Transactional(propagation=Propagation.REQUIRED)
public TradeData placeTrade(TradeData trade) throws Exception {
   try {
      insertTrade(trade);
      updateAcct(trade);
      return trade;
   } catch (Exception up) {
      //log the error
      throw up;
   }
}

Suppose the account does not have enough funds to purchase the stock in question or is not set up to purchase or sell stock yet and throws a checked exception (for example, FundsNotAvailableException). Does the trade order get persisted in the database or is the entire logical unit of work rolled back? The answer, surprisingly, is that upon a checked exception (either in the Spring Framework or EJB), the transaction commits any work that has not yet been committed. Using Listing 13, this means that if a checked exception occurs during the updateAcct() method, the trade order is persisted, but the account isn't updated to reflect the trade.
This is perhaps the primary data-integrity and consistency issue when transactions are used. Run-time exceptions (that is, unchecked exceptions) automatically force the entire logical unit of work to roll back, but checked exceptions do not. Therefore, the code in Listing 13 is useless from a transaction standpoint; although it appears that it uses transactions to maintain atomicity and consistency, in fact it does not.
Although this sort of behavior may seem strange, transactions behave this way for some good reasons. First of all, not all checked exceptions are bad; they might be used for event notification or to redirect processing based on certain conditions. But more to the point, the application code may be able to take corrective action on some types of checked exceptions, thereby allowing the transaction to complete. For example, consider the scenario in which you are writing the code for an online book retailer. To complete the book order, you need to send an e-mail confirmation as part of the order process. If the e-mail server is down, you would send some sort of SMTP checked exception indicating that the message cannot be sent. If checked exceptions caused an automatic rollback, the entire book order would be rolled back just because the e-mail server was down. By not automatically rolling back on checked exceptions, you can catch that exception and perform some sort of corrective action (such as sending the message to a pending queue) and commit the rest of the order.
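A sketch of that corrective action, with hypothetical mailService and pendingQueue collaborators; because the checked exception is handled and not rethrown, the order transaction still commits:

try {
   mailService.sendConfirmation(order);   // may throw a checked MessagingException
} catch (javax.mail.MessagingException e) {
   pendingQueue.add(order);               // corrective action: queue the e-mail for retry
   // no rethrow, so the surrounding transaction commits the rest of the order
}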
When you use the Declarative transaction model (described in more detail in Part 2 of this series), you must specify how the container or framework should handle checked exceptions. In the Spring Framework you specify this through the rollbackFor parameter in the @Transactional annotation, as shown in Listing 14:

Listing 14. Adding transaction rollback support — Spring

@Transactional(propagation=Propagation.REQUIRED, rollbackFor=Exception.class)
public TradeData placeTrade(TradeData trade) throws Exception {
   try {
      insertTrade(trade);
      updateAcct(trade);
      return trade;
   } catch (Exception up) {
      //log the error
      throw up;
   }
}

Notice the use of the rollbackFor parameter in the @Transactional annotation. This parameter accepts either a single exception class or an array of exception classes, or you can use the rollbackForClassName parameter to specify the names of the exceptions as Java String types. You can also use the negative version of this property (noRollbackFor) to specify that all exceptions should force a rollback except certain ones. Typically most developers specify Exception.class as the value, indicating that all exceptions in this method should force a rollback.
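For completeness, a sketch of the negative variant mentioned above, using a hypothetical FundsNotAvailableException that the caller handles itself:

@Transactional(propagation = Propagation.REQUIRED,
               rollbackFor = Exception.class,
               noRollbackFor = FundsNotAvailableException.class)
public TradeData placeTrade(TradeData trade) throws Exception {...}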
EJBs work a little bit differently from the Spring Framework with regard to rolling back a transaction. The @TransactionAttribute annotation found in the EJB 3.0 specification does not include directives to specify the rollback behavior. Rather, you must use the SessionContext.setRollbackOnly() method to mark the transaction for rollback, as illustrated in Listing 15:

Listing 15. Adding transaction rollback support — EJB

@TransactionAttribute(TransactionAttributeType.REQUIRED)
public TradeData placeTrade(TradeData trade) throws Exception {
   try {
      insertTrade(trade);
      updateAcct(trade);
      return trade;
   } catch (Exception up) {
      //log the error
      sessionCtx.setRollbackOnly();
      throw up;
   }
} 

Once the setRollbackOnly() method is invoked, you cannot change your mind; the only possible outcome is to roll back the transaction upon completion of the method that started the transaction. The transaction strategies described in future articles in the series will provide guidance on when and where to use the rollback directives and on when to use the REQUIRED vs. MANDATORY transaction attributes.
Conclusion
The code used to implement transactions in the Java platform is not overly complex; however, how you use and configure it can get somewhat complex. Many pitfalls are associated with implementing transaction support in the Java platform (including some less common ones that I haven't discussed here). The biggest issue with most of them is that no compiler warnings or run-time errors tell you that the transaction implementation is incorrect. Furthermore, contrary to the assumption reflected in the "Better late than never" anecdote at the start of this article, implementing transaction support is not only a coding exercise. A significant amount of design effort goes into developing an overall transaction strategy. The rest of the Transaction strategies series will help guide you in terms of how to design an effective transaction strategy for use cases ranging from simple applications to high-performance transaction processing.

J2EE ear deployment

For the development environment, I use exploded deployment.

Yesterday was my last day on the previous project. To keep the "safe" flow, I tried to compress the exploded directories back into an ear; the tool I was using is WinRAR.

However, it didn't work: deployment threw an exception complaining about a zip property issue.

Changing the archive format from RAR to ZIP made it work; an ear is just a zip (jar) archive, so a RAR-compressed file won't deploy.

not a bad post on Document/Literal Wrapped web services

from http://labs.icodeon.com/projects/spd/html/index.html

<div style="overflow: scroll;height: 300px">
This document was created using the &gt;e-novative&gt; DocBook Environment (eDE)
Building Document Literal Wrapped Web Services with Java

Copyright © 2007 Icodeon Ltd

July 2007

Table of Contents

1. Introduction

    1.1. Project
    1.2. Web Services by Design
    1.3. The Document Literal Wrapped Design Pattern
    1.4. Project Context

2. Conceptual Overview

    2.1. Document Based SOAP Web Services
    2.2. Wrapped Documents

3. Web Service Design in XML

    3.1. WSDL Design
    3.2. XSD Design

4. Code Generation

    4.1. Code Generation from WSDL
    4.2. Code Generation from XSD

5. Web Service Deployment

    5.1. Deployment Descriptors
    5.2. Deployment

6. Web Service Testing

    6.1. Self-Description Testing
    6.2. Functional Testing
    6.3. Load Testing
    6.4. Conformance Testing

7. Web Service Clients

    7.1. Clients
    7.2. Generic AJAX Base Client Class
    7.3. Specialized AJAX Client SubClass

8. Conclusion

    8.1. Project Outcomes

Chapter 1. Introduction

Table of Contents

1.1. Project
1.2. Web Services by Design
1.3. The Document Literal Wrapped Design Pattern
1.4. Project Context

1.1. Project

This project describes an approach to designing, building and testing a particular style of web service: the document literal wrapped style. Web services built using this approach provide the benefit of complete self-description, standards conformance, and robustness in the face of changing requirements.

This work has been funded under the JISC e-Learning Program as the "Saving Private Data" project.
1.2. Web Services by Design

This project describes an approach to designing, building and testing a SOAP style web service, using the document literal design pattern. The web service is built using code produced using code generation techniques. The approach is to use XML design documents as the foundation and to generate code as an artefact of the design. The web service code is generated from web services definition language (WSDL) design documents and XML schema (XSD) documents:
WSDL to code

Web services code generated from WSDL design

This approach contrasts with the common practice of generating WSDL documents from web service code that has been written by hand:
Code to WSDL

WSDL document generated from web services code

By reversing this common practice of generating WSDL from a hand written code base, and instead using XML documents as a foundational design document, we gain more control over the web service design. This extra control of design then enables the benefits of the document literal wrapped pattern to be realised.
1.3. The Document Literal Wrapped Design Pattern

The document literal wrapped web service produced as a result of this attention to design affords the following benefits:

    *

      Web service code is generated from XML design documents, ensuring design led development.
    *

      With XML documents as the foundational design documents, web services can be generated in different programming languages. The approach is platform neutral. Icodeon have used the techniques described in this project with both Java and C# programming languages.
    *

      Web service operations and the service messages are completely self-describing. This ensures that the web service can be discovered and automatically consumed by tools and clients.
    *

      The web service is standards compliant against the Web Services Interoperability profile. This means that the web service can be integrated with other web services technologies, within service oriented architectures and web services orchestrations.
    *

      The web service is robust against future requirements changes. Changes can be made to the messages that the service sends or receives without breaking existing clients and consumers.

1.4. Project Context

The project describes an approach to web service design for a service that sends and receives valid XML documents. The project uses a large and complex schema called the Content Object Communication Datamodel schema published by the IEEE under the IEEE 1484.11.3 specification.

The IEEE schema is an XML binding for the Computer Managed Instruction Datamodel used in the widely deployed SCORM profile of e-Learning specifications.

The project, and this narrative, have the following structure:

   1.

      First, an overview is given of the concepts around SOAP messaging with valid XML documents. See Chapter 2.
   2.

      Second, an XML description for the operations of the web service is designed. The XML document will be used later for code generation of the web service, using a WSDL-to-Java code generation tool. See Chapter 3.
   3.

      Third, schema descriptions for the documents that the web service will send and receive are defined in XML. The XML schemas will be used later for code generation of the data model used by the web service, using an XSD-to-Java code generation tool. See Chapter 4.
   4.

      Fourth, the code is generated from the XML design documents. The generated Java code from different design documents is combined, edited and compiled. See Chapter 5.
   5.

      Fifth, the compiled code is packaged and deployed using the Apache AXIS web services framework. See Chapter 6.
   6.

      Then, the web service is tested using functional testing, load/stress testing and conformance testing. Testing is carried out by using the Apache JMeter tool to send and receive documents from the web service. See Chapter 7.
   7.

      Finally, a generic AJAX client for the document literal wrapped style is built as a JavaScript class using object oriented techniques. The generic client class is sub-classed to a specific client for the particular web service. See Chapter 8.

Chapter 2. Conceptual Overview

Table of Contents

2.1. Document Based SOAP Web Services
2.2. Wrapped Documents

2.1. Document Based SOAP Web Services

SOAP based web services receive messages from clients as SOAP envelopes.
SOAP messaging

Messages sent from client to web service as a SOAP envelope

These envelopes contain message headers and a message body:
SOAP Envelope

SOAP envelope with header and body

In the case of document based services, we can be much more specific about the contents of the message body. The body of the message will hold a valid XML document whose schema is exposed in the WSDL description of the web service. We will sometimes refer to this XML document as the document payload of the envelope: it is this document inside the envelope that provides the value of the message:
SOAP Envelope with document

Valid XML document as the payload of a SOAP message

With the schema for the document embedded in the WSDL description of the web service, introspection tools can query the web service to determine the structure and types for the information carried in the document. The web service is self-describing for the documents it processes.

So already we have an intelligence heavy message: information can be added to the headers, and the document in the message body is described by a schema available by inspecting the web service WSDL.
2.2. Wrapped Documents

However, the message remains lacking in an important feature: the message does not carry with it any instructions about what the web service is to do with the information contained in the document. The message does not contain details of what operation the web service is to invoke on the document. The document literal wrapped design pattern resolves this issue by adding a wrapper around the document that contains the name of the operation to be targeted on the web service.
Wrapped document

Document payload wrapped with the name of the target web service operation

The message we have built has information rich headers, a self-describing, valid, intelligence heavy payload, and carries with it an operation name to tell the web service what to do with the payload. In Icodeon's experience, however, this level of information is not yet sufficient. The document remains anonymous: we know what the document is, and how it should be processed, but who is the document from?

To resolve this final issue, Icodeon have found it necessary to nest, in between the wrapper and document, an inner wrapper to carry contextual information such as a user ID or a session ID that has no other logical home elsewhere in the message:
Wrapped document

Document payload wrapped with the contextual information, such as a user ID

This completes the conceptual design for the messaging in a document literal wrapped web service. This design translates technically to the following elements:

    *

      A header with descriptive information about the message, such as the content type.
    *

      A root wrapper element with the name of the target operation. We will be persisting the document so we will choose a meaningful verb name for the operation: "Commit".
    *

      A nested inner wrapper element to hold contextual information such as user ID or session ID. We can define this element however we like, so let's use something that refers to the context of the message: "ContextDataModel".
    *

      The document payload itself. This is the Content Object Communication Datamodel document, which has a root element called "cocd".

The four elements make up the content of the SOAP message element we will be sending to the "Commit" operation of the web service; in outline, the message looks like this:

          Content-Type: text/xml; charset=utf-8
          Content-Length: length
          SOAPAction: "http://www.icodeon.com/services/cmi/Commit"

          <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
            <soapenv:Body>
              <Commit xmlns="http://www.icodeon.com/services/cmi">
                <ContextDataModel userID="..." sessionID="...">
                  <cocd xmlns="http://ltsc.ieee.org/xsd/1484_11_3">
                    ...
                  </cocd>
                </ContextDataModel>
              </Commit>
            </soapenv:Body>
          </soapenv:Envelope>

Chapter 3. Web Service Design in XML

Table of Contents

3.1. WSDL Design
3.2. XSD Design

3.1. WSDL Design

We are going to use the document/literal wrapped design pattern for the WSDL. A key point of this design is that a new XML element is "wrapped" around the document payload, and that this new element has the same name as the web service operation.

So to send a document to the "Commit" operation of the web service, we first need to wrap the document in a new XML element called "Commit" and define this element in the WSDL; in outline:

          <xsd:element name="Commit">
            <xsd:complexType>
              <xsd:sequence>
                ...
              </xsd:sequence>
            </xsd:complexType>
          </xsd:element>

Now that we have defined the new XML element to wrap the document payload, we need to use the element to define the input to the "Commit" operation. The output of the operation is equivalent to a void return, and this is also defined in the WSDL; in outline:

          <wsdl:message name="CommitRequest">
            <wsdl:part name="parameters" element="tns:Commit"/>
          </wsdl:message>

          <wsdl:message name="CommitResponse">
            <wsdl:part name="parameters" element="tns:CommitResponse"/>
          </wsdl:message>

With the wrapping element defined, and used as a message part, we are ready to define a web service operation with the same name as the wrapping element; in outline:

          <wsdl:portType name="CmiServicePortType">
            <wsdl:operation name="Commit">
              <wsdl:input message="tns:CommitRequest"/>
              <wsdl:output message="tns:CommitResponse"/>
            </wsdl:operation>
          </wsdl:portType>

This abstract description of an operation needs an implementation, so we add this to our design within the binding element of the WSDL; in outline:

          <wsdl:binding name="CmiServiceSoapBinding" type="tns:CmiServicePortType">
            <soap:binding style="document" transport="http://schemas.xmlsoap.org/soap/http"/>
            <wsdl:operation name="Commit">
              <soap:operation soapAction="http://www.icodeon.com/services/cmi/Commit"/>
              <wsdl:input>
                <soap:body use="literal"/>
              </wsdl:input>
              <wsdl:output>
                <soap:body use="literal"/>
              </wsdl:output>
            </wsdl:operation>
          </wsdl:binding>

3.2. XSD Design

We have completed the design of the WSDL, but one practical issue remains. A constraint of the document/literal wrapped design pattern is that the wrapping element's complex type may have no attributes. This is slightly awkward from a practical point of view as we often need to send some contextual information with the request (for example, a userID or sessionID) as attribute values along with the document payload itself (i.e. the document that is being sent to the service is associated with a user and session).

To include this contextual information in the request, as well as the document payload, Icodeon have found that a good technique is to nest another element below the wrapping element but above the document wrapper - a kind of inner wrapper. In this example, we have called the inner wrapper "ContextDataModel" as it will hold information about the context. This element is then nested below the wrapping element in the WSDL; in outline:

          <xsd:element name="Commit">
            <xsd:complexType>
              <xsd:sequence>
                <xsd:element ref="ctx:ContextDataModel"/>
              </xsd:sequence>
            </xsd:complexType>
          </xsd:element>

Now that we have an element that is free of any constraints from the document/literal wrapped design pattern, we can add whatever contextual information we like to accompany the main document payload. In our example, we will just define a userID and a sessionID as attributes; in outline:

          <xsd:element name="ContextDataModel">
            <xsd:complexType>
              ...
              <xsd:attribute name="userID" type="xsd:string"/>
              <xsd:attribute name="sessionID" type="xsd:string"/>
            </xsd:complexType>
          </xsd:element>

The element that we defined above is not only being used to carry around a few useful identifiers. It is also a parent to the document payload that is the principal message that we are sending to the web service operation. So to complete the newly defined element, we need to nest, as a child element, the root element of the document payload. In our case, the root element of the IEEE 1484.11.3 schema for the Content Object Communication Datamodel is called cocd; in outline:

          <xsd:element name="ContextDataModel">
            <xsd:complexType>
              <xsd:sequence>
                <xsd:element ref="cocd:cocd"/>
              </xsd:sequence>
              <xsd:attribute name="userID" type="xsd:string"/>
              <xsd:attribute name="sessionID" type="xsd:string"/>
            </xsd:complexType>
          </xsd:element>

We then reference the IEEE 1484.11.3 schema for the Content Object Communication Datamodel and its root element and namespace with an import statement; in outline:

          <xsd:import namespace="http://ltsc.ieee.org/xsd/1484_11_3"
                      schemaLocation="..."/>

The final touch is to separate the schema for this extra contextual information into its own XSD file and reference the file from within the WSDL with an inclusion statement; in outline:

          <xsd:include schemaLocation="..."/>

This may seem like a lot of hoops to jump through, but the end result is a very powerful design pattern. This is what all the effort we have put into WSDL design now buys us:

    *

      Everything that appears in the SOAP message's body is defined by a schema. The document payload needs no out of band agreement to describe the documents that may be sent to and received from the web service.
    *

      In addition to a defined schema for the document, we have defined a schema to carry small bits of contextual information, such as user identifiers.
    *

      The service WSDL is robust. If we need to add new elements to any of the schemas in the future, we can simply add a new namespaced element. Existing implementations of service consumers will not break, but new implementations can take advantage of additions to the service.
    *

      Not only does the WSDL provide a description of the documents that can be sent to it and received from it, the WSDL describes the available operations. So we have a fully self-describing service interface that can dynamically tell clients and tools what operations are available, what documents can be sent, and what documents can be received.
    *

      We have separated out WSDL definitions and document schema definitions into their own files for easier maintainability. The files are referenced by inclusion and import statements.
    *

      Last, but not least, we have created a WSDL design that is WS-I compliant. The wrapped pattern meets the WS-I restriction that the SOAP message's body has only one child. In our case, this single child is the new XML element used as the wrapper. 

Now that the WSDL design is complete, we are in a position to use the WSDL to drive code generation of the method stubs for each of the web service operations.
Chapter 4. Code Generation

Table of Contents

4.1. Code Generation from WSDL
4.2. Code Generation from XSD

4.1. Code Generation from WSDL

In the preceding section on WSDL design, we created a WSDL that was self-describing, robust and WS-I compliant. In this section we will use tools to generate Java code from the design.

The Apache AXIS project includes a WSDL to Java generator (called wsdl2java) for generating code from WSDL designs. This is great as far as it goes, but our WSDL includes not only WSDL definitions but also references to schema definitions in external XSD files. The AXIS wsdl2java tool does a great job with the WSDL definitions, but a poor job with the referenced XSD files. Writing a tool that can correctly handle highly complex schema is no trivial task and it alone represents an effort as potentially difficult as the whole of the AXIS project itself.

A solution to this issue that Icodeon have found to be helpful is to use "best of breed" tools for each task: to use wsdl2java for generating Java code for the web service operations, and a different (and superior) toolset for generating Java code from referenced XSD files. In this example, we will use the Castor Code Generator.

So the approach is to first use Apache AXIS wsdl2java to generate Java code for web service operations and referenced XSD files, then throw away the code generated from XSD and replace it with code generated with the (superior) Castor tools. Here is the process:

   1.

      Use the Apache AXIS wsdl2java tool to generate Java code for web service operations and referenced XSD files. A useful way to do this is from an ANT build file task; in outline (attribute values here are placeholders):

                    <axis-wsdl2java
                        url="..."
                        output="..."
                        serverside="true"/>


      A useful point to note is that the AXIS wsdl2java tool will try to "guess" package naming for the generated Java code from the namespacing in the WSDL file. This is usually NOT what we want, so we reference a "namespace mapping" properties file (called NStoPkg.properties) which tells AXIS wsdl2java how to map namespaces to package names. In our case we have defined an Icodeon namespace:

                    
                    http://www.icodeon.com/services/cmi
                  
                  

      and we map this to a package hierarchy in the NStoPkg.properties file, using escape characters as needed:

                    
                    http\://www.icodeon.com/services/cmi=com.icodeon.services.cmi
                  
                  

   2.

      The Apache AXIS wsdl2java tool generates the Java code for the web service from the WSDL file and document code from referenced XSD files. By looking at the generated code, we can soon see that wsdl2java is not doing a very good job with the referenced XSD files, and we are getting some strange naming conventions as an artefact of code generation. For example, a generated file called:

                    
                    CmiServiceSoapBindingStub.java
                  
                  

      This file contains the code for serializing an element called "mode" in the IEEE 1484.11.3 schema for the Content Object Communication Datamodel. But the code generation has incorrectly added an angle bracket to the element name:

                    
                    new javax.xml.namespace.QName("http://ltsc.ieee.org/xsd/1484_11_3", ">mode");
                  
                  

      So we need to recognise that the Apache AXIS wsdl2java tool needs to be replaced with a superior tool when it comes to code generation from XSD files. Later on we will use the Castor Code Generator, but for the moment we will delete all generated code and classes that come from XSD files, including the strange code artefacts with angle brackets.

      There are two steps to the deletion. The first step is to delete the entries with the strange angle brackets. All code sections that contain the angle brackets in references to a qualified name are deleted:

                    
                  qName = new javax.xml.namespace.QName("http://ltsc.ieee.org/xsd/1484_11_3", ">mode");
                  cachedSerQNames.add(qName);
                  cls = com.icodeon.services.cmi.Mode.class;
                  cachedSerClasses.add(cls);
                  cachedSerFactories.add(enumsf);
                  cachedDeserFactories.add(enumdf);
                  
                  

      The second step is to delete all files that have been generated from the XSD schemas, as we will replace these with files generated by a superior tool later on.

      This leaves us with only the code and classes representing the web service, the web service operations and the wrapper elements that have the same names as the web service operations. After deletion we are left with the files for the web service:

                    
                  CmiService.java
                  CmiServiceLocator.java
                  CmiServicePortType.java
                  CmiServiceSoapBindingImpl.java
                  CmiServiceSoapBindingStub.java
                  
                  

      and also the files for wrapper elements that have the same names as the web service operations:

                    
                  Initialize.java
                  Commit.java
                  Terminate.java
                  
                  

      We also need to keep hold of two other files that come from the code generation, which will be used for setting up the way the web service serializes XML to Java. These are known as web service deployment descriptors (.wsdd file type):

                    
                  deploy.wsdd
                  undeploy.wsdd
                  
                  

Now that the code generation for the web service from the WSDL design file is complete, we need to generate the code to represent the document payload that is sent to the web service. That is, we need to run code generation from XSD files to replace the malformed code we deleted that was emitted by the Apache AXIS wsdl2java tool.
4.2. Code Generation from XSD

In the preceding section we looked at the IEEE 1484.11.3 Schema, a fairly large and complex schema. The Apache AXIS project WSDL to Java generator (called wsdl2java) was not able to generate code from this schema without error, so we need to use a superior tool for working with XSDs. Fortunately, such a toolset exists and is available from the Castor project at Codehaus.

The Castor project includes an XSD to Java generator for generating code from XSD designs. We will use this tool to generate the Java code and replace the incorrectly generated code from AXIS wsdl2java. Here is the process:

   1.

      Use the Castor toolset to generate Java code for the XSD files. A useful way to do this is from an ANT build file task; in outline, one option is to wrap the Castor source generator in a <java> task (values are placeholders):

                    <java classname="org.exolab.castor.builder.SourceGenerator" fork="true">
                      <classpath refid="castor.classpath"/>
                      <arg line="-i ... -package com.icodeon.services.cmi -dest ..."/>
                    </java>


   2.

      The Castor toolset generates the Java code from the XSD files. By looking at the generated code, we can soon see that the code generation from Castor is not perfect either, with some elements being mapped to Java Objects rather than Java Strings, with invalid constructors, such as:

                    
                    new java.lang.Object("4000");
                  
                  

      Luckily this can simply be corrected with an extensive search and replace for the correct, valid String constructors, such as:

                    
                    new java.lang.String("4000");
                  
                  

Now that the code generation for the web service from the WSDL design file is complete, AND the code generation from XSD is complete, we have a complete code set for the document literal wrapped web service derived from "best of breed" tools for WSDL and for XSD. The code can now be compiled and packaged into a .jar file ready for deployment as a web service.
Chapter 5. Web Service Deployment

Table of Contents

5.1. Deployment Descriptors
5.2. Deployment

5.1. Deployment Descriptors

Java based web services that consume or produce documents need to know how to convert XML elements to instances of Java objects: that is, to manage the serialization/deserialization from XML to Java and back again. In the Apache AXIS web services framework, this serialization/deserialization is managed by web service deployment descriptors, or .wsdd files.

We noted earlier that code generation from XSDs by the Apache AXIS wsdl2java tool produced errors, and that the code had to be replaced by code generated from the Castor toolset. The same errors that appeared in the Java code also appear in the web service deployment descriptors (.wsdd files) generated by Apache AXIS. For example, in the deployment descriptor called:

        
            deploy.wsdd
        
      

angle brackets have been incorrectly added to many of the element names in the IEEE 1484.11.3 schema, such as the mode element:

        
          mode"
          type="java:com.icodeon.services.cmi.Mode"
          serializer="org.apache.axis.encoding.ser.EnumSerializerFactory"
          deserializer="org.apache.axis.encoding.ser.EnumDeserializerFactory"
          encodingStyle=""
          /&gt;
       
      

So in the same way that we had to replace code generated by Apache AXIS wsdl2java with code generated by the Castor toolset, we now have to replace the AXIS serializers/deserializers with their Castor equivalents. For example, the root element of the IEEE 1484.11.3 schema, an element called cocd, had a serializer/deserializer generated by Apache AXIS that looked like this:

        
          
       
      
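An AXIS-generated bean mapping of this kind typically looks along these lines (a sketch; the namespace URI and generated class name are illustrative):

          <typeMapping
            qname="ns:cocd" xmlns:ns="http://www.example.org/1484_11_3"
            type="java:com.icodeon.services.cmi.Cocd"
            serializer="org.apache.axis.encoding.ser.BeanSerializerFactory"
            deserializer="org.apache.axis.encoding.ser.BeanDeserializerFactory"
            encodingStyle=""/>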

This serializer/deserializer now needs to be replaced with its Castor equivalent:

        
          
       
      
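The Castor equivalent swaps in the Castor serializer factories that ship with AXIS (a sketch, under the same illustrative names; the factory classes are org.apache.axis.encoding.ser.castor.CastorSerializerFactory and CastorDeserializerFactory):

          <typeMapping
            qname="ns:cocd" xmlns:ns="http://www.example.org/1484_11_3"
            type="java:com.icodeon.services.cmi.Cocd"
            serializer="org.apache.axis.encoding.ser.castor.CastorSerializerFactory"
            deserializer="org.apache.axis.encoding.ser.castor.CastorDeserializerFactory"
            encodingStyle=""/>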

In addition to detailing the serializers/deserializers that the web service will use, the deployment descriptor includes other details, such as which web service operations are to be exposed, and the name to be used for the service. In this example, we'll use the name "CmiService" (which later on will become part of the URL of the service) and expose the three operations called "Initialize", "Commit" and "Terminate":

        
          
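A sketch of what that service element looks like in deploy.wsdd (the implementation class name is illustrative; the typeMapping elements sit alongside it):

          <service name="CmiService" provider="java:RPC" style="wrapped" use="literal">
            <parameter name="className"
                       value="com.icodeon.services.cmi.CmiServiceSoapBindingImpl"/>
            <parameter name="allowedMethods" value="Initialize Commit Terminate"/>
            <!-- typeMapping elements as discussed above -->
          </service>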
       
      

With the deployment descriptors now modified to support the Java objects generated by the Castor toolset, we are ready to deploy the web service.
5.2. Deployment

Our document literal web service code has been generated from design files, WSDL and XSD. The generated code has been compiled and packaged into a .jar file, and deployment descriptors have been modified to support Java objects generated by the Castor toolset. We are now ready to deploy to an application server.

Apache AXIS is servlet based, and so a servlet container such as the popular Jakarta Tomcat is required. Icodeon used a recent build, version 5.5. The web service was deployed to a web application under the Tomcat servlet container. Here are the steps:

   1.

      Set up a web application in Apache Tomcat with a simple web.xml file to support Apache Axis. Icodeon used a web application name of "cmi", but any name could be chosen. Within this web application we will need to set up the Apache AXIS servlet, and mappings for this servlet to URL patterns:

                  
                  
                  <servlet>
                    <servlet-name>AxisServlet</servlet-name>
                    <servlet-class>org.apache.axis.transport.http.AxisServlet</servlet-class>
                  </servlet>

                  <servlet-mapping>
                    <servlet-name>AxisServlet</servlet-name>
                    <url-pattern>/servlet/AxisServlet</url-pattern>
                  </servlet-mapping>

                  <servlet-mapping>
                    <servlet-name>AxisServlet</servlet-name>
                    <url-pattern>/services/*</url-pattern>
                  </servlet-mapping>
                  
                
                

   2.

      Copy to the lib directory of this web application all the supporting .jar files required for Apache AXIS and Castor. For this example, Icodeon used:

                  
                  axis.jar
                  jaxrpc.jar
                  saaj.jar
                  wsdl4j-1.5.1.jar
                  castor-1.1.1-xml.jar
                  commons-discovery-0.2.jar
                  
                

      The result is that we now have an "empty" web application within Tomcat that has all the infrastructure to support a web service, but no web service yet deployed! You can check that the AXIS administration service is running (ready for the next step) by pointing your browser to:

                  
                    http://localhost:8080/cmi/services/AdminService?wsdl
                  
                

   3.

      The next step is to deploy the .jar file of the document literal wrapped web service we have generated, using the modified deployment descriptors and a helpful deployment task from an ANT build file.

      Apache AXIS comes with ANT build file "administration" tasks for deploying and undeploying web services to servlet containers such as Tomcat. Here is the "axis-admin" deployment task which contains a reference to our modified .wsdd file and a reference to the web application (called cmi) that we have set up within Tomcat:

                  
                    
                    
                    
                  
                
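      A sketch of that deployment target, assuming the axis-admin task has been defined with a <taskdef> from axis-ant.jar:

                    <target name="deploy">
                      <axis-admin
                        hostname="localhost"
                        port="8080"
                        servletpath="/cmi/services/AdminService"
                        xmlfile="deploy.wsdd"
                        failonerror="true"/>
                    </target>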

      When this task runs a new file is created within the web application called:

                  
                    server-config.wsdd
                  
                

      This file should contain the details of the serializers/deserializers that were added to the web service deployment descriptors. It is Icodeon's experience, however, that this file is not always complete after the first deployment using the axis-admin task: the first invocation of the task appears to generate the file, and once the file has been "seeded", a second run of the deployment using the axis-admin task adds the details of the serializers/deserializers.
   4.

      Finally, point your browser to the location of the web service, and if all is well, we have a live web service generated entirely from WSDL and XSD design files:

                  
                    http://localhost:8080/cmi/services/CmiService?wsdl
                  
                

Chapter 6. Web Service Testing

Table of Contents

6.1. Self-Description Testing
6.2. Functional Testing
6.3. Load Testing
6.4. Conformance Testing

6.1. Self-Description Testing

One of the benefits of the document literal wrapped design style is that the resulting web service is completely self-describing: not only are the operations of the web service described (by the WSDL file), but also the documents that the web service sends or receives are described (by the associated XSD files). There are a number of tools that can bind to the web service, use this self-describing behaviour and query the service properties: Icodeon have used the SOAP Analyser in the Oxygen XML Editor.

The screen shot below shows the SOAP Analyser in the Oxygen XML Editor querying the "Commit" operation of the web service that was deployed above. The SOAP Analyser shows that, as expected, the "Commit" operation of the web service expects a document with an element called "Commit" as a single child of the SOAP message's body, which in turn is the parent of an element called "ContextDataModel" that will be the parent of the document payload.
SOAP Analyser in the Oxygen XML Editor

SOAP Analyser in the Oxygen XML Editor querying the "Commit" operation of the web service [full size]
6.2. Functional Testing

Now that the web service is deployed, and we have checked that tools can dynamically read the self-description of the web service, we are ready to check the functionality of the web service. Icodeon have successfully used the Apache JMeter load testing tool for this purpose. Apache JMeter has traditionally been used for load testing web applications by sending URL parameters to web application URLs and then examining the response markup. However, the software can also be configured to send whole documents to a document style web service and then query the response document. So we can use this approach to test that the web service operations have the behaviour we are expecting. Here are the steps:

   1.

      To set up Apache JMeter for web service testing, a couple of extra .jar files need to be added to the classpath of JMeter: we need to add the mail.jar and activation.jar files, available from Sun Microsystems, to the lib directory of Apache JMeter.
   2.

      Next, we need to author a Test Plan. The first step is to set up a group of users/clients, each making many independent requests to the web service. To get JMeter to represent these many concurrent users/clients accessing the web service, we set up a Thread Group. We may also need to represent each of these users/clients making many requests within a session, so we add a Loop Controller as a child of the Thread Group. During functional testing we can simply leave the defaults of a single user/client making a single request.
   3.

      Now that we have a single user/client set up, we need to specify the operation that the user/client will invoke. This is an easy step because the document literal web service we have designed is self-describing: we simply have to point JMeter at our web service WSDL and JMeter will dynamically configure itself for all of the operations exposed on the web service. Add a Web Service (SOAP) Request Sampler as a child of the Loop Controller element, enter the URL for the WSDL in the WSDL URL field, and press the Configure button. JMeter queries the WSDL and then configures itself for each of the web service operations. We will test the "Commit" operation, so select that from the drop down list of "Web Methods" that JMeter has discovered from the WSDL.
   4.

      We have configured a single user/client and the web service operation to invoke. The final step in our preparation is to define the document that will be sent to the web service. At this point, we can use a tool like the SOAP Analyser in the Oxygen XML Editor to generate the document that will be sent in the invocation of the web service. Paste the document into the SOAP Data field, and everything is prepared.
   5.

      To visualise, log and test the invocation and response from the web service operation, JMeter offers a number of tools. We will use a Summary Report Listener and a Save Responses to a file Post Processor. The Summary Report will give us a table showing request and response statistics; the Post Processor will write the web service response to a file. The final set up is shown in the screen shot below:
      JMeter Test Plan

      Apache JMeter Test Plan for the "Commit" operation of the web service [full size]
   6.

      All that remains now is to run the Test Plan (which by default is for one user/client making a single request) and check the results. We check the results in two ways: first, we check the file from the Post Processor to ensure that we get the expected document as the response; second, we check the report from the Summary Report Listener. For functional testing, the most important result is the document that is returned and written to file. However, the report shows that for one user making a single request we have a response time of about 30 ms, which will be a useful benchmark for the load testing we work on next.
      Summary Report

      Apache JMeter Summary Report for a Single Client Making a Single Web Service Invocation [full size]

6.3. Load Testing

The functional testing enabled us to send documents to a chosen web service operation and examine the response. We can use this as a foundation to build a test plan for load testing the web service. Here are the steps:

   1.

      In the Save Responses to a file Post Processor, check the option so that only failed responses are saved to disk: we only want to know about web service faults now that the functional testing is complete.
   2.

      Add more samplers to create a more realistic test plan. In our example, we can add an invocation for each of the web service operations: "Initialize", "Commit" and "Terminate".
   3.

      Next we have to set up a simulated group of concurrent users making multiple requests within a session, by modifying the three parameters within the Thread Group: the number of users/clients, the ramp-up period and the number of requests each user will execute within the test session. The number of users/clients (we will use 100) and the number of requests each makes are self-explanatory, but the ramp-up period takes some care to set up.
   4.

      Ramp-up is the time period over which all users/clients join. By default, ramp-up is set to one second, so for our scenario of 100 users we have a default join rate of 1 user/client thread every 10 ms. To refine this figure we can look at the JMeter log (located in JMeter_Home_Directory/bin) and tune the ramp-up period so that the first thread does not finish before the last thread has started; this ensures that we do not have too long a ramp-up. In our example, Icodeon found that this condition could be achieved with one new user/client thread joining every 1 ms, tuning the ramp-up to a duration of 0.1 seconds:

                  
                  10:09:51 INFO  - jmeter.threads.JMeterThread: Thread Thread Group 1-1 started
                  ...
                  10:09:51 INFO  - jmeter.threads.JMeterThread: Thread Thread Group 1-100 started 
                  ...
                  10:09:52 INFO  - jmeter.threads.JMeterThread: Thread Thread Group 1-1 is done
                  
                

   5.

      Although our ramp-up is quite steep (one new user/client joining every 1 ms), we also need to reflect the rate at which users/clients continue to invoke service operations once they have joined. To do this we can use a Gaussian Random Timer to manage the pause that users/clients have between each web service request. We will use the JMeter defaults of a pause of 100 ms and a deviation of 300 ms.
   6.

      All that remains now is to run the Test Plan and check the results. The screen shot below shows the finished test plan and the results of the load test.
      Summary Report

      Apache JMeter Summary Report for 100 Clients Making Web Service Invocations to Three Operations [full size]
   7.

      The summary tells us that our web service handled 300 requests at a throughput of 187 requests per second. The fastest response time was 5 ms, the slowest was 352 ms, and the average was 149 ms. Studies in cognitive science suggest that users lose focus and start doing something else if the response time is greater than 10 seconds. Under our conditions of a steep ramp-up to 100 users, followed by a "user click rate" of one request every 100 ms (with 300 ms deviation), our web service is providing acceptable performance.

6.4. Conformance Testing

Chapter 7. Web Service Clients

Table of Contents

7.1. Clients
7.2. Generic AJAX Base Client Class
7.3. Specialized AJAX Client SubClass

7.1. Clients

A key benefit of designing web services with the document literal wrapped design pattern is that the web service is completely self-describing. This means that all sorts of tools and consumers that can introspect the web service WSDL are able to automatically generate interfaces and code to interact with the web service.

Icodeon have found that the Macromedia/Adobe authoring tools for Flash have particularly good support for rapid development of clients for web services.

In this project, a client based on asynchronous JavaScript and XML (AJAX) is built, rather than using the Flash tools; this approach requires no special authoring environment or browser plugins. The approach is to use object oriented (OO) techniques to build a generic client base "class" for interacting with document literal web services in general, and then "sub-class" this functionality to build a specific client for interacting with the web service built in this project.

The approach of using object oriented techniques with the JavaScript programming language has been possible for many years, but recently a diverse set of tools has made this approach more viable than it has been previously. Icodeon have used this toolset to build re-usable JavaScript "class libraries" that make use of OO techniques such as inheritance, encapsulation, package namespacing and private/public variable scope. The available tools now mean that unit testing, logging and automated generation of documentation are available for these "classes" also.

It is not strictly true to say that JavaScript supports class-based inheritance (like Java or C#), because it supports prototype-based inheritance instead. However, we can use the terms "class" and "sub-class" informally, as we are simulating the features of object orientation that appear in true class-based languages.
7.2. Generic AJAX Base Client Class

The first task required before building a base class and any sub-classes is to simulate the package namespacing found in languages such as Java and C#. Our classes will be part of a unique namespace to distinguish them from any other classes of the same name that may be present in a browser window. We will use the common practice of reversing a domain name to get a unique namespace: "com.icodeon" will be used, and we build our base class, called "AjaxClient", within this namespace. The fully qualified class name is then "com.icodeon.AjaxClient".

First, we set up the "package" namespace, checking that the namespace is truly unique.

        var com;
        if(!com){com = {};}
        else if(typeof com != "object")
        {throw new Error("namepsace com exists");}
        
        if(!com.icodeon){com.icodeon = {};}
        else if(typeof com.icodeon != "object")
        {throw new Error("namepsace com.icodeon exists");}
      

Then we set up the "class" namespace, again checking that there are no namespace collisions:

        if(!com.icodeon.AjaxClient){com.icodeon.AjaxClient = {};}
        else if(typeof com.icodeon.AjaxClient != "object")
        {throw new Error("namepsace com.icodeon.AjaxClient exists");}
      

Now that we have a unique namespace, we can declare a constructor:

        com.icodeon.AjaxClient = function(){
        ...
        }
      

Public methods to be inherited or overridden in sub-classes can be declared:

        this.methodName = function(){
        ...
        }
      

Public inherited properties can be declared:

        this.inheritedPropertyName = null; // some default value
      

Private properties that will not be inherited are declared within the constructor:

        var privatePropertyName = null; // some default value
      

The base class has just two important methods; other methods are just utilities. The two important methods are the ones that call the web service with a document and then process any document that is returned. We will call these methods "request" and "response", and both need to be inherited by specialist sub-classes:

          this.request = function(){
          ...
          }
          this.response = function(){
          ...
          }
      

The "response" method will be unimplemented in the base class (the base class does not know what to do with a specific response) but will be overidden with an implementation in a sub-class:

        /**
        * Public inherited method that is called after 
        * a web service operation. Sub-classes implement this.
        * 
        * @param {Object} obj_SoapEnvelope the SOAP envelope response
        * @param {Error} obj_Error application specific error or undefined
        */
        this.response = function(obj_SoapEnvelope, obj_Error){
        ...
        }
      

The "request" method takes a number of arguments necessary to call the web service operation. In particular is it passed a DOM object that represents the SOAP envelope and wrapped document:

        /**
        * Public method to invoke a web service operation.
        * 
        * @param {String} str_HttpAction the GET or POST action
        * @param {String} str_ServiceEndPoint the service endpoint URL
        * @param {Boolean} b_Async flag for asynchronous call
        * @param {String} str_SoapActionHeader the SOAP action request header
        * @param {Object} obj_SoapEnvelope the SOAP envelope request and payload
        * @param {Function} fn_CallBack the function to be called on response
        */
        this.request = function(
          str_HttpAction,
          str_ServiceEndPoint,
          b_Async,
          str_SoapActionHeader,
          obj_SoapEnvelope,
          fn_CallBack){
          ...
      }
      
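The body of request is elided above. A plausible implementation under the same signature, using the browser's XMLHttpRequest, might look like this (a sketch only, not the original Icodeon code; the error handling is illustrative):

        this.request = function(
          str_HttpAction,
          str_ServiceEndPoint,
          b_Async,
          str_SoapActionHeader,
          obj_SoapEnvelope,
          fn_CallBack){
          // Cross-browser XMLHttpRequest creation
          var obj_Http = window.XMLHttpRequest
            ? new XMLHttpRequest()
            : new ActiveXObject("Microsoft.XMLHTTP");
          var obj_Self = this;
          obj_Http.open(str_HttpAction, str_ServiceEndPoint, b_Async);
          obj_Http.setRequestHeader("Content-Type", "text/xml; charset=utf-8");
          obj_Http.setRequestHeader("SOAPAction", str_SoapActionHeader);
          obj_Http.onreadystatechange = function(){
            if(obj_Http.readyState == 4){
              // Hand the parsed SOAP envelope to the overridable "response" method
              obj_Self.response(
                obj_Http.responseXML,
                obj_Http.status == 200 ? undefined : new Error("HTTP " + obj_Http.status));
              if(fn_CallBack){fn_CallBack.call(obj_Self);}
            }
          };
          obj_Http.send(obj_SoapEnvelope);
        }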

7.3. Specialized AJAX Client SubClass

Having built a generic client class to invoke document services in general, we now need to subclass the functionality to build a client that will send and receive documents from a web service in particular. In the case of this project, we will build a client for the web service we have generated from the WSDL and XSD design documents.

The key features of this client will be:

    *

      A constructor or method that enables us to specify the URL of the web service.
    *

      Semantics to derive a sub-class from the base class.
    *

      A method that enables us to build a wrapped document to send to a particular operation of the web service.
    *

      An overridden implementation of the "response" method of the base class. This is where we can implement specific handling of the response from the operation that was invoked.

These four key features are implemented by the following code.

First, the constructor will specify the URL end point for the web service:

        /**
        * Construct a new CmiServiceClient class.
        * 
        * @class This class represents an instance of CmiServiceClient.
        * @extends AjaxClient
        * @constructor
        * @param {String} str_HttpAction the GET or POST action
        * @param {String} str_ServiceEndPoint the web service endpoint URL
        * @param {Boolean} b_Async flag for asynchronous call
        * @return A new instance of com.icodeon.CmiServiceClient 
        */
        com.icodeon.CmiServiceClient = function(
          str_HttpAction,
          str_ServiceEndPoint,
          b_Async
        ){
        ...
        }
      

Second, to support inheritance, the sub-class constructor will ensure there is a call to the super-class constructor, and the inheritance relationship will be explicitly defined:

        com.icodeon.AjaxClient.call(this);
        ...
        com.icodeon.CmiServiceClient.prototype = new com.icodeon.AjaxClient();
        com.icodeon.CmiServiceClient.prototype.constructor = com.icodeon.CmiServiceClient;
      

Third, we will build the wrapped document as a DOM object, using the Sarissa utility library to help with a cross-browser implementation of DOM functionality:

        // Create the SOAP Envelope
        var obj_DomDoc = Sarissa.getDomDocument("http://schemas.xmlsoap.org/soap/envelope/","Envelope");
        var elm_Document = obj_DomDoc.documentElement;

        // Add the header
        var elm_Header = this.createNewElement("Header", "http://schemas.xmlsoap.org/soap/envelope/", obj_DomDoc);
        elm_Document.appendChild(elm_Header);
        
        // Add the body
        var elm_Body = this.createNewElement("Body", "http://schemas.xmlsoap.org/soap/envelope/", obj_DomDoc);
        elm_Document.appendChild(elm_Body);
      

Fourth, we will override the inherited "response" method so that we can implement specific handling of the document sent in response from the web service. In this implementation, the response method calls back to a further function that will take over the processing of the response:

        this.response = function(doc_ResponseXML, obj_Error){
        ...
          callback.call();
        }
      

Now that these four elements of the client are in place, further utility methods can be added, and the class is ready to be instantiated within an HTML and JavaScript only web page:

        // Create the AJAX client
        obj_Client = new com.icodeon.CmiServiceClient(
          str_HttpAction,
          str_ServiceEndPoint,
          b_Async);
          
        // Invoke a named web service operation, and name a function to receive the response
        obj_Client.invoke(str_OperationName, fn_CallBack);   
      

Chapter 8. Conclusion

Table of Contents

8.1. Project Outcomes

8.1. Project Outcomes

    *

      The document literal wrapped web service style has proved challenging to build using tools available for the Java platform. Icodeon have found the equivalent tools available for the .NET platform far easier to use, with greatly improved times for code generation.
    *

      No single Java tool was found to be sufficient. Instead, combining the outputs from several "best of breed" tools, each dedicated to a specific task, was found to be successful: Apache Axis for the web services description files (WSDL), and Castor for the XSD schemas linked to the WSDL files.
    *

      The Apache JMeter load testing tool can be used very successfully for both functional testing and stress testing of document style web services. The response document from a web service call can be dynamically queried and the results added to a parameterized version of a document for a subsequent web service request.
    *

      The toolset for building sophisticated client side code in the JavaScript programming language is becoming mature. In this project many of the features of traditional object oriented code (classes, encapsulation, inheritance etc) along with Unit Test frameworks and Documentation Generation tools were employed to build AJAX clients for document based web services.

Icodeon Ltd

July 2007
This document was created using the e-novative DocBook Environment (eDE).

two very useful Struts2 interceptors

ScopeInterceptor

 org.apache.struts2.interceptor.ScopeInterceptor


This is designed to solve a few simple issues related to wizard-like functionality in Struts. One of those issues is that some applications have application-wide parameters in common use, such as pageLen (used for records per page). Rather than requiring that each action check if such parameters are supplied, this interceptor can look for specified parameters and pull them out of the session.

This works by setting listed properties at action start with values from session/application attributes keyed after the action's class, the action's name, or any supplied key. After the action is executed, all the listed properties are taken back and put into the session or application context.

To make sure that each execution of the action is consistent, it makes use of session-level locking. This way it guarantees that each action execution is atomic at the session level. It doesn't guarantee application-level consistency; however, there has yet to be enough reason to do so, and application-level consistency would also be a big performance overkill.

Note that this interceptor takes a snapshot of action properties just before the result is presented (using a PreResultListener), rather than after the action is invoked. There is a reason for that: at this moment we know that the action's state is "complete", as its values may depend on the rest of the stack and specifically on the values of nested interceptors.

Interceptor parameters: 

session - a list of action properties to be bound to session scope 
application - a list of action properties to be bound to application scope 
key - a session/application attribute key prefix; it can contain the following values: 
CLASS - creates a unique key prefix based on action namespace and action class; this is the default value 
ACTION - creates a unique key prefix based on action namespace and action name 
any other value is taken literally as the key prefix 
type - takes one of the following values: 
start - means it is the start action of a wizard-like action sequence, and all session scoped properties are reset to their defaults 
end - means that session scoped properties are removed from the session after the action is run 
any other value or no value means that it is an in-the-middle action that is set with session properties before it is executed, and its properties are put back into the session after execution 
sessionReset - name of a parameter (defaults to 'session.reset') which, if set, causes all session values to be reset to the action's default values or application scope values. Note that it is similar to type="start" and in fact does the same, but in our team it is sometimes semantically preferred. We use session scope in two patterns: sometimes there are wizard-like action sequences that have a start and an end, and sometimes we simply want to reset the current session values. 
reset - boolean, defaults to false; if set, it has the same effect as resetting all session values to the action's default values or application scope values. 
autoCreateSession - boolean value; sets whether the session should be automatically created. 
Extending the interceptor: 

There are no known extension points for this interceptor. 

Example code: 

 
 <!-- As the filter and orderBy parameters are common for all my browse-type actions,
      you can move control to the scope interceptor. In the session parameter you can list
      action properties that are going to be automatically managed over session. You can
      do the same for application-scoped variables-->
 <action name="someAction" class="com.examples.SomeAction">
     <interceptor-ref name="basicStack"/>
     <interceptor-ref name="hibernate"/>
     <interceptor-ref name="scope">
         <param name="session">filter,orderBy</param>
         <param name="autoCreateSession">true</param>
     </interceptor-ref>
     <result name="success">good_result.ftl</result>
 </action>

ServletConfigInterceptor

org.apache.struts2.interceptor.ServletConfigInterceptor


An interceptor which sets action properties based on the interfaces an action implements. For example, if the action implements ParameterAware then the action context's parameter map will be set on it. 

This interceptor is designed to set all properties an action needs if it's aware of servlet parameters, the servlet context, the session, etc. Interfaces that it supports are: 

ServletContextAware 
ServletRequestAware 
ServletResponseAware 
ParameterAware 
RequestAware 
SessionAware 
ApplicationAware 
PrincipalAware 
Interceptor parameters: 

None 
Extending the interceptor: 

There are no known extension points for this interceptor. 

Example code: 

 
 <action name="someAction" class="com.examples.SomeAction">
     <interceptor-ref name="servletConfig"/>
     <interceptor-ref name="basicStack"/>
     <result name="success">good_result.ftl</result>
 </action>
 
 
See Also:
ServletContextAware
ServletRequestAware
ServletResponseAware
ParameterAware
SessionAware
ApplicationAware
PrincipalAware

These interceptors automatically populate the action class with objects from the action context (the request, session, etc.). In addition, ScopeInterceptor can be used to update session attributes automatically.
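For example, a minimal sketch of an action that receives the session map through SessionAware (the action class and the value it stores are illustrative; in older Struts2 releases setSession takes a raw Map, as sketched here):

        import java.util.Map;

        import org.apache.struts2.interceptor.SessionAware;
        import com.opensymphony.xwork2.ActionSupport;

        public class SomeAction extends ActionSupport implements SessionAware {

            private Map session;

            // Called by ServletConfigInterceptor before execute(),
            // handing the action the current session map.
            public void setSession(Map session) {
                this.session = session;
            }

            @SuppressWarnings("unchecked")
            public String execute() {
                session.put("lastAction", "someAction"); // illustrative use of the session
                return SUCCESS;
            }
        }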

nice explanation of the difference between hot swap & hot deployment

Hot deployment and hot swap are completely different beasts, Valeri. Hot swap is the ability to change class definitions while a VM is running, without the application ever noticing. It is provided so you can reload quick changes to a class while you're debugging. Hot swap is provided by the JVM, and while quite useful in server-side development, it has no direct link to J2EE or application servers. Hot swap also has a number of limitations, as you already discovered.

Now, hot deployment is the ability to reload an entire J2EE application without bringing down the container. This is implemented by application servers in very different ways, and has no direct link to the JVM.

==================================

In general, all situations where you don’t have a clear separation between the development stage and the running application are difficult to handle. A J2EE application cannot modify its own configuration at runtime. The deployment descriptors are parsed and interpreted at the time of deployment but afterwards they are cast in stone. There’s no API to modify those aspects of the system that are in the deployment descriptors. If you want to dynamically change any of those aspects at runtime, you have to work around J2EE or resort to coding to extremely low level J2EE SPI interfaces like writing your own JCA resource adapter.

from http://www.theserverside.com/news/thread.tss?thread_id=26044

very good and detailed explanation of jspx

http://diaryproducts.net/about/programming_languages/java/convert_jsp_pages_to_jsp_documents_jspx_with_jsp2x

Convert JSP pages to JSP documents (JSPX) with Jsp2x
Submitted by Hannes Schmidt on Thu, 01/17/2008 - 19:01.

Jsp2X is a command line utility for batch conversion of JSP pages to JSP documents, i.e. JSPs in well-formed XML syntax (aka JSPX; see chapter 5 of the JavaServer Pages 1.2 Specification and chapter 6 of the JavaServer Pages 2.0 Specification). It is written in Java and incorporates a parser derived from a combined JSP+XHTML grammar using the ANTLR parser generator. It tries very hard to create JSPX output that is portable across engines. Jsp2X was designed to be used in an iterative fashion, in which it alerts the user to potential problems in the input.
Introduction

Version 1.2 of the JSP standard introduced the notion of JSP documents, which are simply JSP files in well-formed XML syntax. Files in traditional JSP format, also known as JSP pages, contain a more or less free-form tag soup for which parsers are difficult to write and which is therefore hard to digest in an automated manner. It took a long time until the various JSP engine vendors agreed on what was valid JSP and what wasn't. I usually prefer the Jetty servlet container for testing a web application during development because it starts up quickly, which reduces the time it takes to switch between coding and testing an application. When I later deploy that application to Resin, I am bewildered to see Resin reject the JSPs that worked flawlessly in Jetty. An upgrade to Resin 3.0.23 fixes many discrepancies, but I still end up tweaking my JSP pages to make them work in both containers.

JSP documents are well-formed XML. XML has a strict and precise (albeit verbose) syntax, and there are plenty of parsers and other tools available for XML. Making your JSP files XML-compliant therefore opens a world of possibilities for further processing. For example, I haven't found a single JSP editor that correctly formats and highlights anything but the simplest pages. With JSP documents these problems have a trivial solution: use your favorite XML editor.

Another annoying trait of JSP pages is that the JSP engine preserves insignificant whitespace. A JSP parser only parses what looks like a JSP tag or a directive, even if the text in between is well-formed XML. For that reason it can't detect and remove whitespace that would be considered insignificant by XML or HTML standards. This unnecessarily increases the size of the emitted HTML. The more JSP code is factored out into tag files or included JSP fragments, the more insignificant whitespace is generated and sent to the browser. In JSP documents, on the other hand, it is very easy to detect and drop insignificant whitespace. In fact, if the JSP engine uses an XML parser to read the input, the parser will take care of whitespace on behalf of the engine. To give you a rough idea of the potential savings: after I converted all 70+ JSP pages and tag files of a well-factored 100k SLOC web application to JSP documents, the average size of the HTML output decreased by 50% to 75%!

Taking into account that the template text in most JSP pages is in fact XHTML or HTML, the JSP committee realized that it isn't a very long road from a JSP page to a well-formed XML document. They only had to get rid of the leniency in the JSP parser and come up with alternatives for crazy constructs like <a href="<c:url …>">. This thought process led to the definition of JSP documents in the JSP standard, at a time when millions of JSP pages had already been written and deployed. This is where Jsp2X comes in. It is a tool that assists in the conversion of JSP pages to JSP documents, a process that is generally straight-forward but tends to be tedious and has the potential to introduce subtle errors when executed by hand.

To understand what Jsp2X does, you need to keep in mind that unlike a JSP engine, Jsp2X parses both the JSP tags and the template text in between those tags. In that respect Jsp2X incorporates a more complex parser than what you'd find in a typical JSP engine (luckily, I had a very powerful and yet easy-to-use tool at hand: ANTLR, a robust LL(*) parser generator). More importantly, Jsp2X can successfully parse the template text in your JSP pages only if it is reasonably correct XHTML. Jsp2X doesn't expect fully well-formed XML template text. It requires that all tags are nested properly and that empty tags are closed correctly. There is no need for a single root element; Jsp2X will create one on the fly if necessary.
Where can I get it?

The latest binary and source distributions can be downloaded from this page. To compile the sources you need Maven version 2.0.7 and a JDK 1.6.0_02. Older Maven 2.0 releases >= 2.0.4 may work as well, and a recent 1.5 JDK should be fine, too. Jsp2X is released under the LGPL.

The usage of the binary distribution is described in section Usage.

The source code repository is hosted at Google Code.
What exactly does it do?

A conversion of a single JSP page requires a number of different transformations. The following is a hopefully complete list:

    * Jsp2X writes the converted input to an output file whose name is derived from the input file. The extension of the output file name is mapped according to what the JSP standard lists as standard extensions for JSP pages/documents, tag files and fragments (also see Usage).
    * Jsp2X adds four very short utility tag files to the converted project. They have the jspx: prefix and contain functionality that would otherwise clutter the converted JSP document.
    * Jsp2X wraps the JSP page in a <jsp:root> tag.
    * Jsp2X wraps JSP fragments into a <jspx:fragment> tag. <jsp:root> tags in fragments are disallowed so I had to come up with another tag that is transparent with respect to the generated output and that can be used to collect the potentially many top-level elements of a fragment underneath a single top-level element (a requirement of XML well-formedness).
    * Jsp2X converts all taglib declarations to namespace references on the new root element (<jsp:root> or <jspx:fragment>). Unused taglibs are omitted. Jsp2X even detects taglibs that are declared in a fragment that is included by the JSP page to be converted. JSP page authors often move their taglib declarations to a separate file that is then included at the top of every JSP page.
    * Jsp2X escapes special XML characters in the input. Keep in mind that a JSP document is parsed twice: once by the JSP engine's XML parser, and once on the client side by the browser's HTML/XHTML parser. If you wanted to display a literal < on a page, it was sufficient to put the HTML entity &lt; into the JSP page, because the entity had no special meaning to the JSP parser. A JSP document would have to read &amp;lt; to get the desired effect: the JSP parser substitutes &amp; with &, such that the browser gets the intended &lt; and renders that as <. Jsp2X does the necessary escaping for you.
    * Jsp2X wraps template text in <jsp:text> tags, excluding insignificant whitespace.
    * Jsp2X escapes HTML comments and converts JSP comments to XML comments with the intended effect that HTML comments will end up in the output whereas JSP comments do not.
    * Jsp2X wraps scriptlets and expressions in <jsp:scriptlet> and <jsp:expression> tags respectively.
    * Jsp2X inserts escaped HTML comments into the body of elements with empty bodies to prevent them from being collapsed into empty elements: <td></td> becomes <td><!----></td>. This is definitely noisy, but I found no other way to prevent the JSP engine's XML parser from collapsing empty element bodies. One of the goals for Jsp2X was to preserve the intent of a JSP page as much as possible. Luckily, a typical HTML page doesn't contain that many empty elements, so the added syntactic noise will be minimal.
    * Jsp2X tries to detect and convert dynamic attribute constructs. The detection of these constructs is not bullet-proof because Jsp2X does not have a full-blown EL expression parser. Instead it uses regexes to detect the most common cases. The list below shows the supported cases (with additional whitespace and indentation for clarity).
      JSP page:

          <foo x="<bar …>">
              …
          </foo>

      JSP document:

          <jspx:element name="foo">
              <jspx:attribute name="x"><bar …/></jspx:attribute>
              <jspx:body>…</jspx:body>
          </jspx:element>

      JSP page:

          <foo <c:if test="…">x="…"</c:if>>
              …
          </foo>

      JSP document:

          <jspx:element name="foo">
              <c:if test="…">
                  <jspx:attribute name="x">…</jspx:attribute>
              </c:if>
              <jspx:body>…</jspx:body>
          </jspx:element>

      JSP page:

          <foo ${condition ? 'x="…"' : ''}>
              …
          </foo>

      JSP document:

          <jspx:element name="foo">
              <c:if test="${condition}">
                  <jspx:attribute name="x">…</jspx:attribute>
              </c:if>
              <jspx:body>…</jspx:body>
          </jspx:element>

      JSP page:

          <foo ${condition ? '' : 'x="…"'}>
              …
          </foo>

      JSP document:

          <jspx:element name="foo">
              <c:if test="${!(condition)}">
                  <jspx:attribute name="x">…</jspx:attribute>
              </c:if>
              <jspx:body>…</jspx:body>
          </jspx:element>

      JSP page:

          <foo ${condition ? 'x="…"' : 'y="…"'}>
              …
          </foo>

      JSP document:

          <jspx:element name="foo">
              <c:choose>
                  <c:when test="${condition}">
                      <jspx:attribute name="x">…</jspx:attribute>
                  </c:when>
                  <c:otherwise>
                      <jspx:attribute name="y">…</jspx:attribute>
                  </c:otherwise>
              </c:choose>
              <jspx:body>…</jspx:body>
          </jspx:element>

    * Jsp2X rewrites the file extension in references to an included file, as long as the included file is also listed as an input file. This is why you should convert all JSP files in a single invocation of Jsp2X. If you don't, Jsp2X will not be able to rewrite references to converted files.
    * Jsp2X converts DOCTYPE declarations to <jsp:output> elements.

You might notice the use of <jspx:element> and <jspx:attribute> tags where you'd expect JSP's built-in <jsp:element> and <jsp:attribute> tags. The reason is that the built-in mechanism doesn't work for conditional attributes (something I consider a blatant oversight in the standard). For example,

<jsp:element …><c:if …><jsp:attribute …>…</jsp:attribute></c:if></jsp:element>

doesn't work because the attribute element applies to the <c:if> tag, not the <jsp:element> tag. It is in accordance with the standard, but the standard should have been written to accommodate this very common use case. Jsp2X creates several tag files with custom tags that have similar functionality to <jsp:element>, <jsp:attribute> and <jsp:body> but work for conditional attributes:

<jspx:element name="foo"><c:if …><jspx:attribute name="bar">…</jsp:attribute></c:if></jsp:element> .

Another difference is that <jspx:element> distinguishes between empty tags and tags with empty bodies. For example, a JSP page with

<jspx:element name="foo"><jsp:body/></jsp:element>

will emit <foo></foo> and

<jspx:element name="foo"></jsp:element> or <jspx:element name="foo"/>

will emit <foo/>. The jsp: variant would have emitted <foo/> in either case. This is XML-compliant but violates HTML (not XHTML), in which <div></div> and <div/> are treated differently. The latter is actually disallowed, and its effect differs from browser to browser. FF treats it like an opening <div> and implicitly closes it at the end of the parent tag, e.g.

<td><div class="a"/><div>foo</div><td> is treated like

<td><div class="a"><div>foo</div></div></td> .

IE7 simply ignores everything after the <div/> .

The use of Jsp2X's custom <jspx:element> instead of the built-in <jsp:element> assists in creating output that is more likely to preserve the JSP page author's intent. It also enables the use of HTML (albeit a somewhat stricter dialect of it), as opposed to restricting the template text to pure XHTML.
Requirements

    * mandatory: JDK 5 or higher
    * recommended: JSP files named with standardized extensions (.tag, .jsp and .jspf).
    * recommended: Access to the complete set of all JSP files that comprise the web application (i.e. everything underneath the WEB-INF directory).
    * recommended: The include directives in every input JSP page should use context-relative URIs to refer to other JSP files (as in /WEB-INF/jsp/taglibs.jspf).

Usage

Jsp2X is distributed as an executable JAR file. It is invoked as follows:

# java -jar <path to distribution jar> …

Invoking it with --help shows the command line options.

# java -jar jsp2x-VERSION-bin.jar --help
Usage:
Jsp2X [--help] [-c|--clobber] [(-o|--output) <output>] file1 file2 … fileN

Converts JSP pages to JSP documents (well-formed XML files with JSP tags).


[--help]
Prints this help message.

[-c|--clobber]
Overwrite output files even if they already exist.

[(-o|--output) <output>]
The path to the output folder. By default output files and logs are
created in the same directory as the input file.

file1 file2 … fileN
One or more paths to JSP files. Should not be absolute paths.

Unless you specify --clobber, Jsp2X will never overwrite existing files. For every input file it will create a converted output file, and possibly a log file, in the same directory as the input file, unless the --output switch is specified. With --output <path>, output files are written to a directory structure underneath the directory specified by <path>. The directory structure will mimic that of the input files, and non-existing directories will be created on the fly as required. The base name of the output file will be derived from the input file using the following mapping between standard JSP page extensions and standard JSP document extensions:
Input extension 	Output extension
jsp 	jspx
tag 	tagx
jspf 	jspx

If the input file's extension doesn't match any of the ones listed in the above table, Jsp2X will generate the output file name simply by appending .xml to the input file name.

Input file paths should always be relative paths, and they must be relative paths if --output is specified. If they are relative paths they may start with './', but they don't need to; e.g. ./foo/bar.jsp is treated as equivalent to foo/bar.jsp. JSP pages may include other JSP fragments. Jsp2X can handle this as long as the value of every include directive's uri attribute points to the included file when the uri value is prepended with the current working directory. In other words, you should

    * run Jsp2X from within the webapp directory of your source tree (usually src/main/webapp) and
    * make sure your JSP pages use context-relative URIs to refer to the included fragments, e.g. /WEB-INF/jsp/taglibs.jspf.

In all other cases Jsp2X will emit a warning and the conversion result might be incomplete.

A typical conversion session might look like this:

# cd src/main/webapp
# find -name "*.tag" -or -name "*.jsp" -or -name | 
  xargs java -jar jsp2x-VERSION-bin.jar --clobber
# cd ../../..


Jsp2X will print the total number of input files and the number of successfully converted input files. You will find as many log files as there are input files for which the conversion was unsuccessful. Read the log files and tweak the input pages or come running to me if you think you found a bug.

When converting the JSP pages in Provider Portal, I used a slightly more elaborate approach that yielded better diffs in SVN. The key to that approach is that I first renamed the JSP pages to their JSP document counterparts in one commit then replaced the content of the renamed file with its converted form in a second commit. The diff of the second commit lists all modifications made by Jsp2X allowing you to later go back and see what exactly it did. Here's a transcript of my conversion session (before you copy-and-paste it make sure you understand what's going on with all those find commands):

   1. Convert all JSP files into a separate temporary directory:

      # cd src/main/webapp
      # find -name "*.tag" -or -name "*.jsp" -or -name | 
        xargs java -jar jsp2x-VERSION-bin.jar --clobber --output temp


   2. Use find to generate a script that renames all JSP files:

      # find \( -name "*.jsp" -or -name "*.jspf" \) -and -printf "svn rename %p %p\n" | 
        sed -r "s/jspf?\$/jspx/" | bash
      # find -name "*.tag" -and -printf "svn rename %p %p\n" | sed -r "s/tag\$/tagx/" | bash
      # svn commit -m "…"

   3. Use find to generate another script that copies the converted files from the temporary directory to the real one:

      # cd temp/WEB-INF
      # find \( -name "*.tagx" -or -name "*.jspx" \) -and -printf  | sed s/\\/\\.\\//\\// | bash
      # cd ../..
      # rm -r temp
      # svn commit -m "…"

How it works

Jsp2X is split into four main parts: the parser, the transformer, the dumper, and the main class with some glue code for command line and file management. The parser was hardest to get right because, unlike a true JSP page parser, it can't just scan the template text for JSP constructs. The transformer needs a complete tree structure of the input, including the tags in the template text. So the parser has to scan for markup in the template text and JSP constructs at the same time. The input is not just simple markup with elements, attributes and some text: JSP constructs can literally occur anywhere in the document. The parser needs to accept input like this:

<a href="<c:url value="foo"/>" ${isBold ? 'class="bold"' : ''}>

This is an <a> element with an href attribute whose value is a <c:url> tag which has more attributes. Next to the href attribute there is an EL expression with a conditional class attribute. I refer to these constructs as being recursive because tags are allowed within tags (this is different from elements occurring in the body of other elements). Also note the nesting of the quotation marks. As you can see, parsing this is not trivial. Luckily, I had a very powerful tool at hand: ANTLR. Given the grammar of an input language, ANTLR generates the Java source code of a class that can parse the input language and turn it into an in-memory tree representing the input. So as long as you can come up with a grammar for the desired input, ANTLR generates a program that parses the input for you. ANTLR can generate source code for Java, C#, C and other languages. It supports complex LL(*) grammars (any context-free language, if you know who Chomsky is) in which the decision about which grammar rule to apply cannot be made by just looking a constant number of tokens ahead (it uses backtracking in conjunction with memoization to alleviate the exponential cost of backtracking). I am an ANTLR newbie, so I expect my JSP grammar to have deficiencies.

The transformer is a simple recursive tree walker that can change, delete and add nodes during the walk. Most of the work is done in a first pass, which also detects and converts the aforementioned recursion in attributes and tags. The second pass combines consecutive PCDATA (i.e. text) nodes and escapes XML entities. The third pass attempts to detect insignificant whitespace. For example, it converts

<td>
    Foo
</td>

to

<td>
    <jsp:text>Foo</jsp:text>
</td>

The difference between the two fragments is that the first one would cause the JSP engine to emit HTML output that includes the whitespace:

<td>
    Foo
</td>

The second fragment on the other hand would emit

<td>Foo</td>

This is because the whitespace around "Foo" became whitespace-only text between tags and can be safely eliminated by the JSP engine. The text child of the <td> element in the first fragment contains both whitespace and non-whitespace. The JSP standard says that in JSP documents only text that exclusively consists of whitespace can be eliminated.

The dumper is a very simple XML serializer. After the transformer did its work, the tree is basically in XML form and serializing it is a trivial task. ANTLR supports tree parsing to some extent so I used that mechanism for the dumper.

There's not much to say about the main class, except maybe that it uses a neat little command line parser called JSAP.

yet another explanation:
http://download.oracle.com/javaee/1.4/tutorial/doc/JSPX3.html

weblogic application.xml

I just finished solving another simple task.

problem description:
there are three projects inside the application: Front_End, Pricing_Admin_Engine, and Help_Context.

deployment of the first two applications was no problem, since they are standard Java EE applications. Deploying Help_Context had problems, because of the project structure:

the root of this project is CFC_Static_Content, and the required resource is at CFC_Static_Content\cfc_static\help\db\enUS\content\cfc.xml.
once deployed as-is, the URL would be
CFC_Static_Content/cfc_static/help/db/enUS/content/cfc.xml, while the required URL is /help/db/enUS/content/cfc.xml.

for some, this is a very simple problem, and the solution is also very simple: just add a META-INF folder containing a newly created application.xml, inside which we specify:

<?xml version="1.0" encoding="UTF-8"?>
<application id="CFC" version="5"
	xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/application_5.xsd">
	<display-name>CFC Static Content</display-name>
	<module id="WebModule_CFC_Help">
		<web>
			<web-uri>cfc_static/help</web-uri>
			<context-root>/help</context-root>
		</web>
	</module>
</application>

jasper report

its own official web site provides the best documentation.

Compiling Report Designs
A report design is represented by a JRXML file that has the structure defined in the jasperreport.xsd file, or by an in-memory JasperDesign object. In order to generate reports according to such a report design, it needs to be compiled. Report design compilation can be done using the compileReportXXX() methods exposed by the JasperCompileManager class, and results in a *.jasper file or a JasperReport object.
When compiling a report design, the engine first performs a validation to ensure that the template (JasperDesign) is consistent, and then transforms all the report expressions into a ready-to-evaluate form, storing them inside the resulting compiled report template (JasperReport or .jasper file).
This transformation implies either the on-the-fly compilation of a Java class file that will be associated with the report template, or the generation of a Groovy or BeanShell script to use when evaluating report expressions during the report filling process, depending on the specified report expression language for the report (see the language property of a report template).

Before reading more about report compilation, you should understand when you need to compile your report templates and the best way to do it, by reading the following FAQ:
When should I compile my report templates and how?

To make the report design compilation process as flexible as possible, a special interface called JRCompiler was introduced.
As seen above, there are several different types of classes implementing this interface shipped with the library:
1. Java report compilers. These report compilers generate and compile a Java class containing the report expression evaluating methods.
2. The Groovy report compiler, which generates a script for runtime report expression evaluation.
3. The BeanShell report compiler, which generates a script for runtime report expression evaluation.

For more details about report compilation, check The Ultimate Guide to JasperReports.

Ant task for compiling report designs

Since the report design compilation process is more like a design-time job than a runtime one, an Ant task was provided with the library in order to simplify development.
This Ant task is implemented by the JRAntCompileTask and is very similar to the Ant built-in task, as far as syntax and behavior are concerned.
The report design compilation task can be declared like this, in a project’s build.xml file:
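
A minimal sketch, reconstructed from the attribute list below; the taskdef name and folder layout are assumptions:

<taskdef name="jrc" classname="net.sf.jasperreports.ant.JRAntCompileTask">
	<classpath>
		<fileset dir="./lib">
			<include name="**/*.jar"/>
		</fileset>
	</classpath>
</taskdef>

<target name="compile">
	<jrc srcdir="./reports" destdir="./build/reports" keepjava="true" xmlvalidation="true">
		<include name="**/*.jrxml"/>
	</jrc>
</target>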

In the example above, the lib folder should contain the jasperreports-.jar file along with its required libraries (including the jdt-compiler-.jar, which is the recommended report compiler, in case you use Java as the report expression language).
This user-defined Ant task can then be used to compile multiple JRXML report design files in a single operation, by specifying the root directory that contains those files or by selecting them using file patterns.

Attributes of the report design compilation task:
Attribute      Description
srcdir         Location of the JRXML report design files to compile. Required unless nested <src> elements are present.
destdir        Location to store the compiled report design files (the same as the source directory by default).
compiler       Name of the class that implements the JRCompiler interface (optional).
xmlvalidation  Flag to indicate whether XML validation should be performed on the source report design files (true by default).
tempdir        Location to store the temporary generated files (the current working directory by default).
keepjava       Flag to indicate whether the temporary Java files generated on the fly should be kept rather than deleted automatically (false by default).

The report design compilation task supports nested <src> and <classpath> elements, just like the Ant built-in <javac> task.

To see this in action, check the demo/samples/antcompile sample provided with the project source files.

Viewing a report design

Report designs can be viewed using the JasperDesignViewer application.
In its main() method, it receives the name of the file which contains the report design to view.
This can be the JRXML file itself, or the compiled report design (*.jasper file).

Filling Reports
(back to top)
A compiled report design can be used to generate reports by calling the fillReportXXX() methods of the JasperFillManager class. There are two flavours of the fill methods in this façade class. Some receive a java.sql.Connection object as the third parameter, and the others receive a JRDataSource object instead.

This is because most of the time reports are filled with data from a relational database, to which we connect through JDBC, and it is very convenient to have the SQL query inside the report template itself. The JasperReports engine can use the connection passed in and execute the SQL query, thus producing a report data source for filling the report.
In cases where data is available in other forms, the fill methods receiving a data source are to be used.
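
A hedged sketch of both flavours; the .jrxml path and the JDBC URL are assumptions:

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.HashMap;
import net.sf.jasperreports.engine.*;

public class FillDemo {
    public static void main(String[] args) throws Exception {
        JasperReport jasperReport =
                JasperCompileManager.compileReport("reports/sample.jrxml");
        HashMap params = new HashMap();

        // Flavour 1: pass a JDBC connection; the engine executes the query
        // embedded in the report template itself.
        Connection conn = DriverManager.getConnection("jdbc:hsqldb:hsql://localhost", "sa", "");
        JasperPrint viaConnection = JasperFillManager.fillReport(jasperReport, params, conn);

        // Flavour 2: pass a JRDataSource when the data comes from somewhere else.
        JasperPrint viaDataSource = JasperFillManager.fillReport(jasperReport, params, new JREmptyDataSource());
    }
}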

View, Print and Export Reports
(back to top)
Generated reports can be viewed using the JasperViewer application. In its main() method, it receives the name of the file which contains the report to view.

Generated reports can be printed using the printReport(), printPage() or printPages() static methods exposed by the JasperPrintManager class.

After having filled a report, we can also export it in PDF, HTML or XML format using the exportReportXXX() methods of the JasperExportManager class.
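
A minimal sketch of the three export targets, assuming a filled JasperPrint named jasperPrint and hypothetical output paths:

// PDF, HTML and XML exports via the JasperExportManager façade.
JasperExportManager.exportReportToPdfFile(jasperPrint, "reports/sample.pdf");
JasperExportManager.exportReportToHtmlFile(jasperPrint, "reports/sample.html");
JasperExportManager.exportReportToXmlFile(jasperPrint, "reports/sample.xml", false); // false: do not embed images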

Parameters
(back to top)
Parameters are object references that are passed in to the report filling operations. They are very useful for passing to the report engine data that it cannot normally find in its data source. For example, we could pass the name of the user that launched the report filling operation, if we want it to appear on the report, or we could dynamically change the title of our report.

An important aspect is the use of report parameters in the query string of the report, in order to be able to further customize the data set retrieved from the database. Those parameters could act like dynamic filters in the query that supplies data for the report.
Declaring a parameter in a report design is very simple and it requires specifying only its name and its class:

<parameter name="ReportTitle" class="java.lang.String"/>
<parameter name="MaxOrderID" class="java.lang.Integer"/>
<parameter name="SummaryImage" class="java.awt.Image"/>

There are two possible ways to use parameters in the query:

1. The parameters are used like normal java.sql.PreparedStatement parameters using the following syntax:

SELECT * FROM Orders WHERE CustomerID = $P{OrderCustomer}

2. Sometimes it is useful to use parameters to dynamically modify portions of the SQL query, or to pass the entire SQL query as a parameter to the report filling routines. In such a case, the syntax differs a little, as in the following example:

SELECT * FROM Orders ORDER BY $P!{OrderByClause}
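
A hedged sketch of supplying those values at fill time; the parameter names match the declarations and queries above, while jasperReport and connection are assumed to exist:

Map<String, Object> params = new HashMap<String, Object>();
params.put("ReportTitle", "Order List");          // referenced as $P{ReportTitle}
params.put("MaxOrderID", Integer.valueOf(10000)); // referenced as $P{MaxOrderID}
params.put("OrderCustomer", "ALFKI");             // bound as a PreparedStatement parameter
params.put("OrderByClause", "OrderID DESC");      // spliced into the query text via $P!{...}
JasperPrint print = JasperFillManager.fillReport(jasperReport, params, connection);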

There are also the following built-in system parameters, ready to use in expressions:

Data Source
(back to top)
JasperReports supports various types of data sources using a special interface called JRDataSource.
There is a default implementation of this interface, the JRResultSetDataSource class, which wraps a java.sql.ResultSet object. It allows the use of any relational database through JDBC.
When using a JDBC data source, you could pass a java.sql.Connection object to the report filling operations and specify the query in the report definition itself (see the <queryString> element in the XML file) or could create a new instance of the JRResultSetDataSource by supplying the java.sql.ResultSet object directly.
With other types of data sources, things should not be different: all you have to do is implement the JRDataSource interface, or use one of the implementations shipped with the JasperReports library to wrap in-memory collections or arrays of JavaBeans, CSV or XML files, etc.
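
For illustration, a minimal custom JRDataSource over an in-memory array; the class name and data are made up:

import net.sf.jasperreports.engine.JRDataSource;
import net.sf.jasperreports.engine.JRException;
import net.sf.jasperreports.engine.JRField;

public class InMemoryDataSource implements JRDataSource {

    // Two illustrative rows; each maps to the report fields FirstName and LastName.
    private final String[][] rows = { { "Nancy", "Davolio" }, { "Andrew", "Fuller" } };
    private int index = -1;

    public boolean next() throws JRException {
        return ++index < rows.length;  // advance the cursor, report whether a row remains
    }

    public Object getFieldValue(JRField field) throws JRException {
        // Map report field names to columns of the current row.
        if ("FirstName".equals(field.getName())) {
            return rows[index][0];
        }
        if ("LastName".equals(field.getName())) {
            return rows[index][1];
        }
        return null;                   // unknown fields resolve to null
    }
}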

Fields
(back to top)
Report fields represent the only way to map data from the data source into the report generating routines. When the data source of the report is a java.sql.ResultSet, all fields must map to corresponding columns in the java.sql.ResultSet object. That is, they must have the same name as the columns they map and a compatible type.
For example:
If we want to generate a report using data retrieved from the table Employees, which has the following structure:
Column Name   Datatype   Length
EmployeeID    int        4
LastName      varchar    20
FirstName     varchar    10
HireDate      datetime   8
We can define the following fields in our report design:

<field name="EmployeeID" class="java.lang.Integer"/>
<field name="LastName" class="java.lang.String"/>
<field name="FirstName" class="java.lang.String"/>
<field name="HireDate" class="java.util.Date"/>

If we declare a field that does not have a corresponding column in the java.sql.ResultSet, an exception will be thrown at runtime. Columns present in the java.sql.ResultSet object that do not have corresponding fields in the report design do not affect the report filling operations, but they also won’t be accessible.

Expressions
(back to top)
Expressions are a powerful feature of JasperReports. They can be used for declaring report variables that perform various calculations, for grouping data on the report, for specifying report text field content, or for further customizing the appearance of report objects.
Basically, all report expressions are Java expressions that can reference report fields and report variables.
In an XML report design there are several elements that define expressions: <variableExpression>, <initialValueExpression>, <groupExpression>, <printWhenExpression>, <imageExpression> and <textFieldExpression>.
In order to use a report field reference in an expression, the name of the field must be put between $F{ and } character sequences.
For example, if we want to display in a text field, on the report, the concatenated values of two fields, we can define an expression like this one:

<textFieldExpression>
$F{FirstName} + " " + $F{LastName}
</textFieldExpression>

The expression can be even more complex:

<textFieldExpression>
$F{FirstName} + " " + $F{LastName} + " was hired on " +
(new SimpleDateFormat("MM/dd/yyyy")).format($F{HireDate}) + "."
</textFieldExpression>

To reference a variable in an expression, we must put the name of the variable between $V{ and } like in the example below:

<textFieldExpression>
"Total quantity : " + $V{QuantitySum} + " kg."
</textFieldExpression>

There is an equivalent syntax for using parameters in expressions. The name of the parameter should be put between $P{ and } like in the following example:

<textFieldExpression>
"Max Order ID is : " + $P{MaxOrderID}
</textFieldExpression>

Variables
(back to top)
A report variable is a special object built on top of an expression. Variables can be used to simplify the report design by declaring an expression that is heavily used throughout the design only once, or to perform various calculations on the corresponding expression values.
In its expression, a variable can reference other report variables, but only if those referenced variables were previously defined in the report design. So the order in which the variables are declared in a report design is important.
As mentioned, variables can perform built-in types of calculations on their corresponding expression values, such as count, sum, average, lowest, highest, variance, etc.
A variable that performs the sum of the Quantity field should be declared like this:

<variable name="QuantitySum"
class="java.lang.Double" calculation="Sum">
<variableExpression>$F{Quantity}</variableExpression>
</variable>

For variables that perform calculations we can specify the level at which they are reinitialized. The default level is Report, meaning the variable is initialized only once, at the beginning of the report, and performs the specified calculation until the end of the report is reached. But we can choose a lower reset level for our variables in order to perform the calculation at page, column or group level.
For example, if we want to calculate the total quantity on each page, we should declare our variable like this:

<variable name="QuantitySum" class="java.lang.Double"
resetType="Page" calculation="Sum">
<variableExpression>$F{Quantity}</variableExpression>
<initialValueExpression>new Double(0)</initialValueExpression>
</variable>

Our variable will be initialized with zero at the beginning of each new page.
There are also the following built-in system variables, ready to use in expressions:

PAGE_NUMBER
COLUMN_NUMBER
REPORT_COUNT
PAGE_COUNT
COLUMN_COUNT
GroupName_COUNT

Documentation pending…

Report Sections
(back to top)
When building a report design we need to define the content and the layout of its sections. The entire structure of the report design is based on the following sections:

<background>
<title>
<pageHeader>
<columnHeader>
<groupHeader>
<detail>
<groupFooter>
<columnFooter>
<pageFooter>
<lastPageFooter>
<summary>
<noData>

Sections are portions of the report that have a specified height and width and can contain report objects like lines, rectangles, images or text fields.
When declaring the content and layout of a report section in an XML report design we use the generic element <band>.
This is how a page header declaration might look. It contains only a rectangle and a static text:

<pageHeader>
<band height="30">
<rectangle>
<reportElement x="0" y="0" width="555" height="25"/>
<graphicElement/>
</rectangle>
<staticText>
<reportElement x="0" y="0" width="555" height="25"/>
<textElement textAlignment="Center">
<font fontName="Helvetica" size="18"/>
</textElement>
<text>Northwind Order List</text>
</staticText>
</band>
</pageHeader>

Subreports
(back to top)
Subreports are an important feature for a report-generating tool. They allow the creation of more complex reports and simplify the design work.
Subreports are very useful when creating master-detail type of reports, or when the structure of a single report is not sufficient to describe the complexity of the desired output document.
A subreport is in fact a normal report incorporated into another report. One can overlap subreports or create subreports containing subreports themselves, up to any nesting level. Any report template can be used as a subreport when incorporated into another report, without anything inside it having to change.
Like other report elements, the subreport element has an expression that is evaluated at runtime in order to obtain the source of the JasperReport object to load.
There are two ways to supply parameter values to a subreport. First, you can use the <parametersMapExpression> element, which introduces the expression that will be evaluated to produce the parameters map. And/or you can supply the parameters individually, using a <subreportParameter> element for each relevant parameter. When both ways are used simultaneously, the parameter values specified using <subreportParameter> override the values specified with <parametersMapExpression>.
Subreports require a data source in order to generate their content, just like normal reports; they behave in the same way and expect to receive the same kind of input when they are being filled.
Values calculated by a subreport can be returned to the parent report. More specifically, after a subreport is filled, values of the subreport variables can be either copied or accumulated (using an incrementer) to variables of the caller report (master variables).
See the Subreport sample for details.

from : http://jasperforge.org/uploads/publish/jasperreportswebsite/trunk/tutorial.html

jasper report–example talks louder

Introduction

I’ve recently been researching reporting tools for a project I will soon be working on. One of the tools I’ve been looking at is JasperReports. JasperReports is a very popular open source (LGPL) reporting library written in Java. Unfortunately, it is not very well documented, and I had a hard time coming up with a simple report. After some research I was able to generate a simple report; this article summarizes what needs to be done to get started with JasperReports. See Resources for information on how to find additional information and documentation about JasperReports.
Getting Started

JasperReports’ reports are defined in XML files, which by convention have an extension of jrxml. A typical jrxml file contains the following elements:

<jasperReport> – the root element.
<title> – its contents are printed only once at the beginning of the report
<pageHeader> – its contents are printed at the beginning of every page in the report.
<detail> – contains the body of the report.
<pageFooter> – its contents are printed at the bottom of every page in the report.
<band> – defines a report section; all of the above elements contain a band element as their only child element.

All of the elements are optional, except for the root jasperReport element. Here is an example jrxml file that will generate a simple report displaying the string “Hello World!”
<?xml version="1.0"?>
<!DOCTYPE jasperReport
PUBLIC "-//JasperReports//DTD Report Design//EN"
"http://jasperreports.sourceforge.net/dtds/jasperreport.dtd">

<jasperReport name="Simple_Report">
<detail>
<band height="20">
<staticText>
<reportElement x="180" y="0" width="200" height="20"/>
<text><![CDATA[Hello World!]]></text>
</staticText>
</band>
</detail>
</jasperReport>

For this simple example, I omitted the optional <title>, <pageHeader> and <pageFooter> elements. The <staticText> element, unsurprisingly, displays static text on the report; as can be seen, it contains a single <text> element defining the text that will be displayed.

jrxml files need to be “compiled” into a binary format that is specific to JasperReports. This can be achieved by calling the compileReport() method on the net.sf.jasperreports.engine.JasperCompileManager class. There are several overloaded versions of this method; in our example, we will use the one that takes a single String parameter. Consult the JasperReports documentation for details on the other versions of the method.

import java.util.HashMap;

import net.sf.jasperreports.engine.JREmptyDataSource;
import net.sf.jasperreports.engine.JRException;
import net.sf.jasperreports.engine.JasperCompileManager;
import net.sf.jasperreports.engine.JasperExportManager;
import net.sf.jasperreports.engine.JasperFillManager;
import net.sf.jasperreports.engine.JasperPrint;
import net.sf.jasperreports.engine.JasperReport;

public class JasperReportsIntro
{
    public static void main(String[] args)
    {
        JasperReport jasperReport;
        JasperPrint jasperPrint;
        try
        {
            jasperReport = JasperCompileManager.compileReport(
                    "reports/jasperreports_demo.jrxml");
            jasperPrint = JasperFillManager.fillReport(
                    jasperReport, new HashMap(), new JREmptyDataSource());
            JasperExportManager.exportReportToPdfFile(
                    jasperPrint, "reports/simple_report.pdf");
        }
        catch (JRException e)
        {
            e.printStackTrace();
        }
    }
}

A jrxml file needs to be compiled only once, but for this simple example it is compiled every time the application is executed. Before a report can be generated, it needs to be “filled” with data; this is achieved by calling the fillReport() method on the net.sf.jasperreports.engine.JasperFillManager class. Again, there are several overloaded versions of the fillReport() method; here we use one that takes three parameters: an instance of net.sf.jasperreports.engine.JasperReport, a java.util.HashMap containing any parameters passed to the report, and an instance of a class implementing the net.sf.jasperreports.engine.JRDataSource interface. The line that accomplishes this in our example above is

jasperPrint = JasperFillManager.fillReport(
jasperReport, new HashMap(), new JREmptyDataSource());

Since our simple report takes no parameters, we pass an empty HashMap as the second parameter, and an instance of net.sf.jasperreports.engine.JREmptyDataSource as the third parameter. JREmptyDataSource is a convenience class included with JasperReports; it is basically a data source with no data.

Finally, in our example, we export the report to a PDF file, so that it can be read by Adobe Acrobat, XPDF, Evince, or any other PDF reader. The line that accomplishes this is

JasperExportManager.exportReportToPdfFile(
jasperPrint, "reports/simple_report.pdf");

The parameters are self-explanatory.
Conclusion
JasperReports is a very good and popular open source reporting engine; this guide provides enough information to get started with it. For additional documentation, please see my book JasperReports For Java Developers.

strategy for java jee development

take UI code debugging and modification, for example. Previously, I traced the logs, used Firebug to check the URL address of the link, or, more advanced, used debug mode. Whatever the method, I put huge effort into making the flow 100% clear and certain: click button A -> ActionA.class -> forward to ServiceB.class -> DAOC.class -> JSPD.jsp.

It consumed time and effort, and still, at times, there were mistakes. Say I checked and followed every line, every detail of the code, and finally modified A.class and D.jsp. Deploy and test: it might still be wrong; maybe DAOD needed to be changed instead.

I just realized, after three years of roughly knowing the usual way of coding, the structure, and roughly the architecture, that by utilizing Fiddler I can click button A, button B, whatever, then check the Fiddler record to see what the action probably is, say ActionA.class, go there, check through roughly, and quickly decide where the problem likely occurs.

Then deploy and test; it probably works.

The conclusion is: since I now roughly know the "behind" of the code (the why, how it works), I no longer spend so much time and effort tracing every detail to 100% certainty. Utilizing Fiddler, HttpWatch Professional, or debug mode, the work goes better and faster.

Incorrect lazy initialization of static field

utilizing Sonar or FindBugs reports the above error message for this code:

public static void updateQuotation(String symbol, RawQuotation newValue) {

    if (quotations == null) {
        quotations = new HashMap();
    }

    quotations.put(symbol, newValue);
}

quotations is a static variable.

what FindBugs is suggesting is actually about thread safety.


Incorrect lazy initialization and update of static field
This method contains an unsynchronized lazy initialization of a static field. After the field is set, the object stored into that location is further accessed. The setting of the field is visible to other threads as soon as it is set. If the further accesses in the method that set the field serve to initialize the object, then you have a very serious multithreading bug, unless something else prevents any other thread from accessing the stored object until it is fully initialized.
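
One way to address it, sketched under the assumption that eager initialization is acceptable here; ConcurrentHashMap also makes the concurrent puts safe:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Eager initialization: no lazy-init race, and the field can be final.
private static final Map quotations = new ConcurrentHashMap();

public static void updateQuotation(String symbol, RawQuotation newValue) {
    quotations.put(symbol, newValue);
}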

I set up Sonar yesterday, then started running it against the project.

There is one critical error, claiming:
Security – Array is stored directly : The user-supplied array ‘lSelectedProdList’ is stored directly.

I went through some research, and quickly refreshed what I had done when passing the SCJP.

A shallow copy clones the object, copying its primitive variables; for reference variables, only the values of the references (the addresses of the referred objects) are copied.

A deep copy additionally makes a new copy of each object that the reference variables refer to.

see
http://www.coders2020.com/what-is-shallow-copy-and-deep-copy-in-java


Java provides a mechanism for creating copies of objects called cloning. There are two ways to make a copy of an object called shallow copy and deep copy.
Shallow copy is a bit-wise copy of an object. A new object is created that has an exact copy of the values in the original object. If any of the fields of the object are references to other objects, just the references are copied. Thus, if the object you are copying contains references to yet other objects, a shallow copy refers to the same subobjects.
Deep copy is a complete duplicate copy of an object. If an object has references to other objects, complete new copies of those objects are also made. A deep copy generates a copy not only of the primitive values of the original object, but copies of all subobjects as well, all the way to the bottom. If you need a true, complete copy of the original object, then you will need to implement a full deep copy for the object.
Java supports shallow and deep copy with the Cloneable interface to create copies of objects. To make a clone of a Java object, you declare that an object implements Cloneable, and then provide an override of the clone method of the standard Java Object base class. Implementing Cloneable tells the java compiler that your object is Cloneable. The cloning is actually done by the clone method.
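
This connects back to the Sonar finding: the usual remedy for "Array is stored directly" is to store a (shallow) defensive copy of the array instead of the caller's reference. A sketch, with a hypothetical setter around the project's lSelectedProdList array:

import java.util.Arrays;

private String[] selectedProdList;

public void setSelectedProdList(String[] lSelectedProdList) {
    // Copy the user-supplied array, so later external mutation cannot affect our state.
    this.selectedProdList = (lSelectedProdList == null)
            ? null
            : Arrays.copyOf(lSelectedProdList, lSelectedProdList.length);
}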

Spring AOP

the project I am currently working on uses Spring AOP for service-auditing purposes.

here is a very detailed explanation of this topic: http://static.springsource.org/spring/docs/2.5.x/reference/aop.html. However, it is too detailed; just go through it quickly and pick up the important points.

some points to note are:
1. Spring AOP supports both schema-based and @AspectJ-based AOP.
2. The former is configured purely in the schema, defining the aspect, pointcut and advice using XML.
3. The latter, @AspectJ-based, uses AspectJ annotations, plus simply enabling aspectj-autoproxy in the schema (see the sketch below).
4. Note: choose either one, not both, which is the situation my project has now: redundant.
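
For point 3, a minimal sketch of an @AspectJ-style auditing aspect; the package name and the audit sink are assumptions, and the schema only needs <aop:aspectj-autoproxy/> plus the aspect registered as a bean:

import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;

@Aspect
public class ServiceAuditAspect {

    // Runs before every method of every class under the (hypothetical) service package.
    @Before("execution(* com.example.service..*.*(..))")
    public void audit(JoinPoint jp) {
        // Replace with the real audit sink; System.out is a placeholder.
        System.out.println("AUDIT: " + jp.getSignature().toLongString());
    }
}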

AOP + OOP, quite a good pattern