ajax the file input

I was trying to submit multiple forms on the same page at the same time, one of them containing a file input.

The key is to wrap the form element in a new FormData object.

  $.ajax({
    url: 'saveAttachment.do',
    type: 'POST',
    // pass the raw form element into FormData so the file input is included
    data: new FormData($('#saveAttachment')[0]),
    async: false,
    cache: false,
    contentType: false,  // let the browser set the multipart boundary itself
    processData: false,  // don't serialize the FormData into a query string
    success: function (returndata) {
      alert(returndata);
    }
  });

subresource and prefetch

Both seem to try the cache first, then fall back to an HTTP request if the resource is not yet cached.

At the same time, subresource can result in a duplicate HTTP retrieval if the same resource (e.g. an image) is initiated again from elsewhere (e.g. CSS), because subresource always forces a download.

Prefetch, on the other hand, is low priority since it is intended for the next page, so it waits for the current page to finish loading. A prefetch can also be cancelled by a subresource hint or any other initiation of the same resource.

For example, with the source code below:

<!-- below would be reinitiated/requested by css later -->
<link rel="subresource" href="images/header_brs.jpg" />
<link rel="subresource" href="images/float_bar_lg.gif" />
<link rel="prefetch" href="images/grad.jpg" />
<link rel="prefetch" href="images/brs_C.jpg" />
<!-- JS for next page -->
<link rel="prefetch" href="js/summaryPage.js" />
<link rel="subresource" href="js/summaryPage.js" />

it renders as below (screenshots of the network waterfall, in load order): prefetch brs_C.jpg, prefetch grad.jpg, prefetch then subresource summaryPage.js, subresource float_bar_lg.gif, subresource header_brs.jpg

JVM 7

http://docs.oracle.com/javase/7/docs/webnotes/tsg/TSG-VM/html/memleaks.html

3.1.1 Detail Message: Java heap space

The detail message Java heap space indicates that an object could not be allocated in the Java heap. This error does not necessarily imply a memory leak. The problem can be as simple as a configuration issue, where the specified heap size (or the default size, if not specified) is insufficient for the application.

In other cases, and in particular for a long-lived application, the message might be an indication that the application is unintentionally holding references to objects, and this prevents the objects from being garbage collected. This is the Java language equivalent of a memory leak. Note that APIs that are called by an application could also be unintentionally holding object references.

3.1.2 Detail Message: PermGen space

The detail message PermGen space indicates that the permanent generation is full. The permanent generation is the area of the heap where class and method objects are stored. If an application loads a very large number of classes, then the size of the permanent generation might need to be increased using the -XX:MaxPermSize option.

Interned java.lang.String objects are no longer stored in the permanent generation. The java.lang.String class maintains a pool of strings. When the intern method is invoked, the method checks the pool to see if an equal string is already in the pool. If there is, then the intern method returns it; otherwise it adds the string to the pool. In more precise terms, the java.lang.String.intern method is used to obtain the canonical representation of the string; the result is a reference to the same class instance that would be returned if that string appeared as a literal.

When this kind of error occurs, the text ClassLoader.defineClass might appear near the top of the stack trace that is printed.
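A tiny Java illustration of the intern behaviour described in 3.1.2 (a minimal sketch):

public class InternDemo {
	public static void main(String[] args) {
		String a = new String("hello");      // a new instance on the heap
		String b = "hello";                  // a literal, already in the string pool
		System.out.println(a == b);          // false: two different instances
		System.out.println(a.intern() == b); // true: intern() returns the canonical, pooled instance
	}
}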

aspect not working as aspectOf factory method not found

In my case, the reason was that the aspect was not woven at compile time.

It works well in Eclipse/IDE, as the AspectJ runtime plugin is there. However, when I built the WAR, my Ant build did not call aspectjtools to weave/compile the aspect.

[20-05-2014 21:47:00.957] [516102] [org.apache.catalina.core.ContainerBase.[jboss.web].[default-host].[/clientinquiryweb]] [ERROR] [MSC service thread 1-10]
Exception sending context initialized event to listener instance of class org.springframework.web.context.ContextLoaderListener: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'pageDeco' defined in ServletContext resource [/WEB-INF/spring/applicationContext.xml]: No matching factory method found: factory method 'aspectOf()'. Check that a method with the specified name exists and that it is static.
        at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:528)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1015)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:911)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:485)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456)
        at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:294)
        at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:225)
        at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:291)
        at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:193)
        at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:585)
        at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:913)
        at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:464)
        at org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:384)
        at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:283)
        at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:111)
        at org.apache.catalina.core.StandardContext.contextListenerStart(StandardContext.java:3392)
        at org.apache.catalina.core.StandardContext.start(StandardContext.java:3850)
        at org.jboss.as.web.deployment.WebDeploymentService.start(WebDeploymentService.java:90)
        at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1811)
        at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1746)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
        at java.lang.Thread.run(Thread.java:722)

Lock Overhead vs Lock Contention

Granularity

Before being introduced to lock granularity, one needs to understand three concepts about locks.

lock overhead: The extra resources for using locks, like the memory space allocated for locks, the CPU time to initialize and destroy locks, and the time for acquiring or releasing locks. The more locks a program uses, the more overhead associated with the usage.
lock contention: This occurs whenever one process or thread attempts to acquire a lock held by another process or thread. The more fine-grained the available locks, the less likely one process/thread will request a lock held by the other. (For example, locking a row rather than the entire table, or locking a cell rather than the entire row.)
deadlock: The situation when each of two tasks is waiting for a lock that the other task holds. Unless something is done, the two tasks will wait forever.

There is a tradeoff between decreasing lock overhead and decreasing lock contention when choosing the number of locks in synchronization.

An important property of a lock is its granularity. The granularity is a measure of the amount of data the lock is protecting. In general, choosing a coarse granularity (a small number of locks, each protecting a large segment of data) results in less lock overhead when a single process is accessing the protected data, but worse performance when multiple processes are running concurrently. This is because of increased lock contention. The more coarse the lock, the higher the likelihood that the lock will stop an unrelated process from proceeding. Conversely, using a fine granularity (a larger number of locks, each protecting a fairly small amount of data) increases the overhead of the locks themselves but reduces lock contention. Granular locking where each process must hold multiple locks from a common set of locks can create subtle lock dependencies. This subtlety can increase the chance that a programmer will unknowingly introduce a deadlock.
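A minimal Java sketch of the tradeoff, assuming a simple in-memory "table" of rows (class and field names are illustrative only):

import java.util.concurrent.locks.ReentrantLock;

// Coarse granularity: a single monitor protects every row.
// Low lock overhead, but all writers contend on the same lock.
class CoarseTable {
	private final String[] rows = new String[1024];
	public synchronized void update(int id, String value) {
		rows[id] = value;
	}
}

// Fine granularity: one lock per row. Updates to different rows no longer
// contend, at the cost of allocating and managing many more locks.
class FineTable {
	private final String[] rows = new String[1024];
	private final ReentrantLock[] locks = new ReentrantLock[1024];
	public FineTable() {
		for (int i = 0; i < locks.length; i++) locks[i] = new ReentrantLock();
	}
	public void update(int id, String value) {
		locks[id].lock();
		try {
			rows[id] = value;
		} finally {
			locks[id].unlock();
		}
	}
}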

point cut of domain objects created outside spring container

The @Configurable annotation is the cure. Basically, it is up to the aspect/AspectJ to detect any instantiation of beans marked as @Configurable and, if so, register them with the Spring container.


@Configurable

 

And it is only put into the Spring container after the bean is manually instantiated, so autowired stuff still won't work in this manually instantiated bean.
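A minimal sketch of such a domain class (the names are illustrative; this assumes spring-aspects is woven and <context:spring-configured/> is enabled):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Configurable;

// Instantiated with plain "new" outside the container; the woven
// AnnotationBeanConfigurerAspect injects the dependencies after construction.
@Configurable
public class Order {
	@Autowired
	private transient PricingService pricingService; // PricingService is a hypothetical Spring bean

	public double total() {
		return pricingService.price(this);
	}
}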

And in order to have Spring beans loaded/injected into aspects:

    <bean id="aspect"
   class="com.bfm.app.cim.helper.PageDecorator"
   factory-method="aspectOf"  />

 

By the way, I don't like the Spring docs now; they have become so bulky!
8.8.1 Using AspectJ to dependency inject domain objects with Spring

8.8.3 Configuring AspectJ aspects using Spring IoC

http://docs.spring.io/spring/docs/current/spring-framework-reference/html/aop.html

Good to refresh: log4j additivity against double logging

https://logging.apache.org/log4j/1.2/manual.html

The addAppender method adds an appender to a given logger. Each enabled logging request for a given logger will be forwarded to all the appenders in that logger as well as the appenders higher in the hierarchy. In other words, appenders are inherited additively from the logger hierarchy. For example, if a console appender is added to the root logger, then all enabled logging requests will at least print on the console. If in addition a file appender is added to a logger, say C, then enabled logging requests for C and C's children will print on a file and on the console. It is possible to override this default behavior so that appender accumulation is no longer additive by setting the additivity flag to false.

The rules governing appender additivity are summarized below.

Appender Additivity

    The output of a log statement of logger C will go to all the appenders in C and its ancestors. This is the meaning of the term "appender additivity".

    However, if an ancestor of logger C, say P, has the additivity flag set to false, then C's output will be directed to all the appenders in C and its ancestors upto and including P but not the appenders in any of the ancestors of P.

    Loggers have their additivity flag set to true by default.

in short,

<Logger name="com.foo.Bar" level="trace" additivity="false">

http://logging.apache.org/log4j/2.x/manual/configuration.html
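The same behaviour can be seen programmatically with the log4j 1.2 API (a minimal sketch; the file name is illustrative):

import org.apache.log4j.ConsoleAppender;
import org.apache.log4j.FileAppender;
import org.apache.log4j.Logger;
import org.apache.log4j.SimpleLayout;

public class AdditivityDemo {
	public static void main(String[] args) throws Exception {
		// console appender on the root logger: by default everything reaches it
		Logger.getRootLogger().addAppender(new ConsoleAppender(new SimpleLayout()));

		Logger c = Logger.getLogger("com.foo.Bar");
		c.addAppender(new FileAppender(new SimpleLayout(), "bar.log"));

		c.info("goes to bar.log AND the console"); // additive by default

		c.setAdditivity(false);                    // stop forwarding to ancestor appenders
		c.info("goes to bar.log only");
	}
}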

Java SimpleDateFormat Syntax

I re-encountered this again, in tedious ways.

HH vs hh; 24-hour vs 12 hour !!

Letter | Date or Time Component | Presentation | Examples
G | Era designator | Text | AD
y | Year | Year | 1996; 96
Y | Week year | Year | 2009; 09
M | Month in year | Month | July; Jul; 07
w | Week in year | Number | 27
W | Week in month | Number | 2
D | Day in year | Number | 189
d | Day in month | Number | 10
F | Day of week in month | Number | 2
E | Day name in week | Text | Tuesday; Tue
u | Day number of week (1 = Monday, ..., 7 = Sunday) | Number | 1
a | Am/pm marker | Text | PM
H | Hour in day (0-23) | Number | 0
k | Hour in day (1-24) | Number | 24
K | Hour in am/pm (0-11) | Number | 0
h | Hour in am/pm (1-12) | Number | 12
m | Minute in hour | Number | 30
s | Second in minute | Number | 55
S | Millisecond | Number | 978
z | Time zone | General time zone | Pacific Standard Time; PST; GMT-08:00
Z | Time zone | RFC 822 time zone | -0800
X | Time zone | ISO 8601 time zone | -08; -0800; -08:00

http://docs.oracle.com/javase/6/docs/api/java/text/SimpleDateFormat.html
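A quick illustration of the HH vs hh pitfall (a minimal sketch):

import java.text.SimpleDateFormat;
import java.util.Calendar;

public class HourFormatDemo {
	public static void main(String[] args) {
		Calendar cal = Calendar.getInstance();
		cal.set(2014, Calendar.MAY, 20, 15, 0, 0); // 3 PM

		System.out.println(new SimpleDateFormat("HH:mm").format(cal.getTime()));   // 15:00 -- hour in day (0-23)
		System.out.println(new SimpleDateFormat("hh:mm a").format(cal.getTime())); // 03:00 PM -- hour in am/pm (1-12)
	}
}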

Known issue with Exchange Web service TimeZone

The datetime value returned by EWS is actually always in UTC, but it is labelled with the server time zone. So if an email is created at 15:00 SGT (+08:00), EWS returns the datetime as 7:00 EST, since the application server (the EWS client) resides in Australia.

To correctly parse the returned datetime value, SimpleDateFormat has to be used twice: once to format the wall-clock value, and once to re-parse it as UTC.

	static ThreadLocal<SimpleDateFormat> sdT1;
	static ThreadLocal<SimpleDateFormat> sdT2;
	static {
		// one SimpleDateFormat per thread, since SimpleDateFormat is not thread safe
		sdT1 = new ThreadLocal<SimpleDateFormat>(){
			@Override
			protected SimpleDateFormat initialValue() {
				return new SimpleDateFormat("dd/MM/yyyy HH:mm:ss");
			}
		};

		sdT2 = new ThreadLocal<SimpleDateFormat>(){
			@Override
			protected SimpleDateFormat initialValue() {
				return new SimpleDateFormat("dd/MM/yyyy HH:mm:ss z");
			}
		};
		 //sd = new SimpleDateFormat("dd/MM/yyyy HH:mm:ss");
		 //sd2 = new SimpleDateFormat("dd/MM/yyyy HH:mm:ss z");
	}

		//synchronized (sdT1.get())
		//{
			// format the EWS value as a wall-clock string...
			String datetime = sdT1.get().format(receipt.getDateTimeReceived());

			try {
				// ...then re-parse it with " UTC" appended, so the wall-clock value is interpreted as UTC
				exchangeCreatedTime = new BFMDateTime(sdT2.get().parse(datetime + " UTC"));
			} catch (ParseException e) {
				e.printStackTrace();
			}
		//}

Weird error using SharePoint web service for uploading files

Even though the WSDL says most elements are optional, it throws various weird exceptions if any of the attributes is not provided.

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:soap="http://schemas.microsoft.com/sharepoint/soap/">
   <soapenv:Header/>
   <soapenv:Body>
      <soap:CopyIntoItems>
         <!--Optional:-->
         <soap:SourceUrl>?</soap:SourceUrl>
         <!--Optional:-->
         <soap:DestinationUrls>
            <!--Zero or more repetitions:-->
            <soap:string>?</soap:string>
         </soap:DestinationUrls>
         <!--Optional:-->
         <soap:Fields>
            <!--Zero or more repetitions:-->
            <soap:FieldInformation Type="?" DisplayName="?" InternalName="?" Id="?" Value="?"/>
         </soap:Fields>
         <!--Optional:-->
         <soap:Stream>cid:266527284695</soap:Stream>
      </soap:CopyIntoItems>
   </soapenv:Body>
</soapenv:Envelope>

1. if Fields -> FieldInformation not provided: ErrorCode="Unknown" ErrorMessage="Object reference not set to an instance of an object."
2. if Stream not provided: ErrorCode="Unknown" ErrorMessage="Value does not fall within the expected range."
3. if Stream uses cid:xxx: HTTP 400 bad request
4. if SourceUrl not provided: ErrorCode="Unknown" ErrorMessage="Value does not fall within the expected range."
5. if DestinationUrls not provided: Exception of type 'Microsoft.SharePoint.SoapServer.SoapServerException' was thrown. Object reference not set to an instance of an object.

All the above exceptions could be due to SharePoint server configuration, as the WSDL does say the elements are optional.

The request below works:

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:soap="http://schemas.microsoft.com/sharepoint/soap/">
   <soapenv:Header/>
   <soapenv:Body>
      <soap:CopyIntoItems>
         <!--Optional:-->
                  <!--Optional:-->
                  <soap:SourceUrl>test90.txt</soap:SourceUrl>
         <soap:DestinationUrls>
            <!--Zero or more repetitions:-->
            <soap:string>https://xxx/2014/1/28/16/26979799/test90.txt</soap:string>
         </soap:DestinationUrls>

         <!--Optional:-->
         <soap:Fields>
            <!--Zero or more repetitions:-->
            <soap:FieldInformation Type="Text" DisplayName="test40.txt" InternalName="test40.txt" Id="12345678-1234-1234-1234-123456789012" Value="test40.txt"/>
         </soap:Fields>

         <!--Optional:-->
         <soap:Stream>1234546</soap:Stream>

      </soap:CopyIntoItems>
   </soapenv:Body>
</soapenv:Envelope>

And it works fine with Fiddler plus the Java client, and with Fiddler plus SoapUI; it turns out no specific configuration is needed for Fiddler.

Quick Note for Maven repository settings

1. username/password is not compulsory for proxy settings

2. for a local repository, the URL should be like this:

        <repository>
          <id>local</id>
          <url>file:///h:/apps/xp/.m2/repository/</url>
        </repository>

<localRepository>H:\apps\xp\.m2\repository</localRepository>

Good to know: OpenSSH on Windows Server as an SFTP server, with a link to a shared folder on another Windows machine

http://technet.microsoft.com/en-us/library/cc784499(v=ws.10).aspx

We encountered a weird issue, for a painful time. The Windows 2008 machine A is set up as an FTP server using OpenSSH. One folder on this FTP server is actually a shared folder on another Windows machine B.

The Windows team then enabled all permissions almost everywhere: I (users/everybody group) could read/write to the shared folder directly from any machine, and the FTP account could SCP/PSCP any file to the shared folder via machine A. The weird thing is that the FTP account, when connected through SFTP, could only read/delete/update files in that shared folder, but was never able to write new files.

The exception encountered when uploading a file is:

sftp> put test.txt test4
Uploading test.txt to //xx/dfs/xx/xx/xx/xx/xx/test.txt
Couldn't get handle: Permission denied

 

Then finally the Windows team found the link above and enabled both the Share and NTFS permissions below (screenshots: share folder permission, NTFS permission, share permission).

Jboss 7 class loading for JCE provider bouncy castle

I was helping somebody with encrypting the database connection from a JBoss 7 web application.

The recommended JCE provider is Bouncy Castle; however, this JBoss 7 class loading issue should apply to any other JCE provider jar as well.

the exception is

JZ0LA: Failed to instantiate Cipher object. Transformation RSA/NONE/OAEPWithSHA1AndMGF1Padding is not implemented by any of the loaded JCE providers.

when using a Spring JDBC connection or anything alike, for example:

	<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource"
		destroy-method="close" p:driverClassName="com.sybase.jdbc3.jdbc.SybDriver"
		p:url="${db.url}" p:username="${db.username}"
		p:password="${db.password}" />

The db.url would be like:

jdbc:sybase:Tds:server:4100/database?ENCRYPT_PASSWORD=true&JCE_PROVIDER_CLASS=org.bouncycastle.jce.provider.BouncyCastleProvider

The above exception basically says the Bouncy Castle jar is not on the classpath.

The jar is, however, already put into:
application.war
  WEB-INF
    lib
      bcprov-jdk16-1.46.jar

The above setting would work in JBoss 5, but it fails in JBoss 7. The reason is that JBoss 5 uses hierarchical class loading; I guess it starts from the WAR class loader first, which successfully loads Bouncy Castle.
JBoss 7, however, uses modular class loading: apart from implicit dependencies like rt.jar, javax.security etc., other dependencies must be defined in jboss-deployment-structure.xml, otherwise you cannot access them.

And in JBoss 7, the JCE_PROVIDER_CLASS attribute is passed to jdbc3.SybDriver, but the class is then looked up by the JBoss module class loader, not the WAR class loader.

The resolution is to either add the BC jar as a JBoss module, or point to it physically as a resource inside the WAR.

Solution 1. Create a module for Bouncy Castle:


<module xmlns="urn:jboss:module:1.1" name="org.bouncycastle">

    <resources>
		<resource-root path="bcprov-jdk16-1.46.jar"/>
        <!-- Insert resources here -->
    </resources>
	
	<dependencies>
    	<module name="javax.api" slot="main" export="true"/>
	</dependencies>
		

</module>
and declare the dependency on the new module in jboss-deployment-structure.xml:

<jboss-deployment-structure>
	<deployment>
		<dependencies>
			<module name="org.osgi.core" />
			<module name="com.sun.crypto.provider" slot="main" export="true"/>
			<module name="org.bouncycastle" slot="main" export="true"/>
		</dependencies>
	</deployment>
</jboss-deployment-structure>

Solution 2. Point to the jar inside the WAR as a resource root; note that use-physical-code-source="true" is compulsory:

<jboss-deployment-structure>
	<deployment>
 		<resources>
 			<resource-root path="WEB-INF/lib/bcprov-jdk16-1.46.jar" use-physical-code-source="true"/>
 		</resources>
	</deployment>
</jboss-deployment-structure>

Refer to https://community.jboss.org/thread/175395

https://docs.jboss.org/author/display/AS7/Class+Loading+in+AS7

FTP directory listing for local and domain accounts

Just spotted: accessing FTP using local and domain accounts results in different directory listings. For example,
sftp user@server lists differently than sftp 'domain\user@server'.

Good explanation here:

Domain User Accounts (used with FTP's Basic Authentication)
Advantages: Domain accounts allow for easier access and auditing for domain resources, like content on UNC shares.
Disadvantages: Domain accounts obviously have more access to your network's resources. For example, if the accounts are part of the local "Domain Users" group, they have access to everything that the group has access to.

Local User Accounts (used with FTP's Basic Authentication)
Advantages: Local accounts are generally better than domain accounts when you are trying to limit access to your domain, and you can still audit their activity using Windows auditing.
Disadvantages: Local user accounts may still have access to local system resources. For example, if the accounts are part of the local "Users" group, they have access to everything that the group has access to.

Refer to http://blogs.msdn.com/b/robert_mcmurray/archive/2010/03/05/using-ftp-with-different-account-types.aspx

HTTPS secure for JBoss web application

Quick steps:
1. generate the keystore file:

%JAVA_HOME%\bin\keytool -genkey -alias tomcat -keyalg RSA

If the generated keystore is not certified by a CA (external/internal), the browser displays a warning that the certificate is not signed by a trusted CA.
If the keystore was generated for another domain/server, the browser warns that the certificate was issued for another domain.

2. put the keystore file into jboss\server\default\conf

3. enable the SSL connector in server.xml and refer to the keystore:

      <!-- SSL/TLS Connector configuration using the admin devl guide keystore -->
      <Connector port="18443" address="${jboss.bind.address}"
           maxThreads="100" strategy="ms" maxHttpHeaderSize="8192"
           emptySessionPath="true"
           scheme="https" secure="true" clientAuth="false" 
           keystoreFile="${jboss.server.home.dir}/conf/sydneyweb.keystore"
           keystorePass="password" sslProtocol = "TLS" />

refer to: http://docs.jboss.org/jbossweb/latest/ssl-howto.html

http://docs.jboss.org/jbossas/guides/webguide/r2/en/html/ch9.https.sect.html

http://en.wikipedia.org/wiki/HTTP_Secure#Network_layers

Database Encryption using a JCE provider

I was pulled in to help others solve issues with database encryption projects. One project is the Jaguar manager plug-in for Sybase Central; the other is a JBoss 7 web application built by myself.

I will write the problem and solution for the jboss 7 application in another post.

Background:

The database server forces connections to use encrypted passwords. For the Sybase JDBC driver, there are two properties to set:
1. ENCRYPT_PASSWORD = true
2. JCE_PROVIDER_CLASS = (e.g. org.bouncycastle.jce.provider.BouncyCastleProvider)

refer to http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.dc20155.1500/html/newfesd/newfesd95.htm

General instructions on how to use a JCE library (http://www.jasypt.org/non-default-providers.html):
1. put the library in $JRE_HOME/lib/ext
2. enable the provider in the java.security file (or register it programmatically, as sketched below)
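A minimal sketch of the programmatic registration mentioned above (assumes the bcprov jar is on the classpath):

import java.security.Security;

import org.bouncycastle.jce.provider.BouncyCastleProvider;

public class RegisterBC {
	public static void main(String[] args) {
		// equivalent to adding a security.provider.N entry in java.security,
		// but done at runtime before any JCE call that needs the provider
		Security.addProvider(new BouncyCastleProvider());
		System.out.println(Security.getProvider("BC")); // non-null if the registration worked
	}
}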

However, for Sybase Central v3, which is based on JDK 1.4, it kept throwing the exception below:

JZ0LA: Failed to instantiate Cipher object. Transformation RSA/NONE/OAEPWithSHA1AndMGF1Padding is not implemented by any of the loaded JCE providers.

According to Sybase, this basically means the JCE provider jar is not on the classpath; refer to: http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.dc39001.0707/html/prjdbc0707/CHDGJJIG.htm.

https://groups.google.com/forum/#!topic/sybase.public.jconnect/FD0XHvdVV6I

However, the weird thing about Sybase Central v3 is that it needs the Bouncy Castle JCE provider jar for JDK 1.4 (bcprov-jdk1.4.jar) plus jce-jdk1.3.jar, which replaces the default JDK 1.4 JCE jars.

The 2nd jar, jce-jdk1.3.jar, provided by BC to override the JDK JCE jar, took me three hard days to figure out. And it's from this page:

“Choose Your Cryptographic Provider
Sun’s JDK ships with a small set of cryptographic implementations and, in fact, doesn’t provide any asymmetric algorithms, like the industry-dominant RSA algorithms. In fact, many Java cryptology experts recommend avoiding Sun’s JCE provider altogether because once the Sun provider is loaded, it prevents the use of other providers (see Professional Java Security by Jess Garms and Daniel Somerfield for more details). ”

“I fired off several e-mails to Sybase engineers, but with the holiday break I hadn’t received a response prior to my submission deadline as to why this extra .jar might be necessary. ”
http://java.sys-con.com/node/106821/print

PowerBuilder Application with JCE

Jboss JAAS Kerberos LDAP security

I encountered some issues while moving an old JBoss web application from a Windows 2000 to a Windows 2008 machine.

The first exception encountered is: Field is too long

javax.naming.AuthenticationException: GSSAPI [Root exception is javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Field is too long for this implementation (61))]]

This was due to krb5.conf: the transport was restricted to UDP. Refer to
http://docs.oracle.com/cd/E19253-01/816-4557/trouble-1/index.html

 Field is too long for this implementation
Cause:

The message size that was being sent by a Kerberized application was too long. This error could be generated if the transport protocol is UDP, which has a default maximum message size of 65535 bytes. In addition, there are limits on individual fields within a protocol message that is sent by the Kerberos service.
Solution:

Verify that you have not restricted the transport to UDP in the KDC server's /etc/krb5/kdc.conf file.

So udp_preference_limit = 1 should be added to krb5.conf:

[libdefaults]
        default_realm = NA.BLKINT.COM
 udp_preference_limit = 1

The 2nd exception is: Mechanism level: The ticket isn't for us

17:25:05,531 INFO  [LDAPRealm] No entry in cache for IP, will go fetch DCs w/ subj: AUPMVAPP025/45.145.68.150
17:25:10,048 INFO  [LDAPRealm] Using site: Melbourne
17:25:10,048 INFO  [LDAPRealm] Using Domain Controller: [aupmscdc001.na.blkint.com.]
17:25:10,688 WARN  [JAASRealm] Login exception authenticating username "shenmuk"
javax.security.auth.login.LoginException: Could not establish a connection with AD: java.lang.RuntimeException: Couldn't talk to LDAP: java.lang.RuntimeException: javax.naming.AuthenticationException: GSSAPI [Root exception is javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: The ticket isn't for us (35))]]
	at com.bglobal.commons.security.ldap.NoAuthLDAPLoginModule.commit(NoAuthLDAPLoginModule.java:50)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:585)
	at javax.security.auth.login.LoginContext.invoke(LoginContext.java:769)
	at javax.security.auth.login.LoginContext.access$000(LoginContext.java:186)
	at javax.security.auth.login.LoginContext$4.run(LoginContext.java:683)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:680)
	at javax.security.auth.login.LoginContext.login(LoginContext.java:580)
	at org.apache.catalina.realm.JAASRealm.authenticate(JAASRealm.java:373)
	at org.apache.catalina.authenticator.FormAuthenticator.authenticate(FormAuthenticator.java:256)
	at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:391)
	at org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:59)
	at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:126)
	at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:105)
	at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:107)
	at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:148)
	at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:856)
	at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.processConnection(Http11Protocol.java:744)
	at org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(PoolTcpEndpoint.java:527)
	at org.apache.tomcat.util.net.MasterSlaveWorkerThread.run(MasterSlaveWorkerThread.java:112)
	at java.lang.Thread.run(Thread.java:595)

The cause is (refer to https://forums.oracle.com/thread/1527734 and http://docs.oracle.com/cd/E19253-01/816-4557/trouble-1/index.html):

Possible Cause and Resolution

o The server has received a ticket that was meant for a different realm.

Resolution

Verify that DNS is set up correctly. Verify that packets are correctly routed across the network.

The krb5.conf needed to be updated, as there was a trailing SPACE after the kdc value in the [realms] section:

[realms]
NA.BLKINT.COM = {
kdc = dc-na-ewd (SPACE_HERE)
}

or change to another KDC:
[realms]
NA.BLKINT.COM = {
#### kdc = dc-na-ewd
kdc = dir-ad
}

Overall, how JBoss JAAS Kerberos/LDAP security works:
1. jboss-web.xml: point to the security domain

<jboss-web>
	<security-domain flushOnSessionInvalidation="true">java:/jaas/BAM</security-domain>

2. auth.conf: configure the domain and which login modules to use

BAM {
  com.sun.security.auth.module.Krb5LoginModule required useTicketCache=false;
  com.bglobal.commons.security.ldap.NoAuthLDAPLoginModule required;
};

3. context.xml: configure the JAAS realm


  <Context>
      <Realm className="org.apache.catalina.realm.JAASRealm"                 
                appName="BAM"         
               roleClassNames="com.bglobal.commons.security.ldap.LDAPGroup"
               userClassNames="com.bglobal.commons.security.identities.BGIUserId"
                      debug="99"/>
</Context>
      
    

4. web.xml: configure which roles have access to what

<login-config>
<security-constraint>

refer to: https://github.com/zanata/zanata-server/wiki/JAAS-Authentication
http://www.kerberos.org/software/tutorial.html
https://community.jboss.org/wiki/DRAFTUsingJBossNegotiationOnAS7

Spring Hibernate “Closed Statement / ResultSet”

I was encountering this exception, complaining about “cannot operate on closed statement or resultset”.

I have a domain class mapped to one table. The domain class uses an ID class for its composite identity, and it also uses another custom type to resolve one property.

<hibernate-mapping>
	<class name="AbstractHibernatePredicate" table="jql_mapping" lazy="false" mutable="true" discriminator-value="not null">
		<cache usage="read-only"/>
    	<composite-id name="jqlMappingID" class="com.bfm.predicate.JqlMappingID" >
			<key-property name="purpose" column="purpose" type="&TrimmedString;" />
			<key-property name="evalOrder" column="eval_order" />
    	</composite-id>
    	
		<property name="xxxx" column="xxxx" type="&TrimmedString;" />
		<subclass name="ConcreteHibernatePredicate" discriminator-value="not null">
            <property name="expression"  column="jql" type="SomeExpressionType"/>
        </subclass>
	</class>
</hibernate-mapping>

SomeExpressionType is an org.hibernate.usertype.CompositeUserType. It overrides nullSafeGet:

    public Object nullSafeGet(ResultSet rs, String[] names, SessionImplementor session, Object owner) throws HibernateException, SQLException {
        String expr = null;
        try {
        	String prog = resolveProgram(rs);
            if (rs.isFirst() || macroFactory.get() == null || !macroFactory.get().containsKey(prog)) {
                @SuppressWarnings("unchecked")
                List<xx> xx= session.getNamedQuery("byProgram").list();
                 ....
            }
            return getParsedExpression(rs.getString(names[0]),prog);
        } catch (Exception e) {
            log.warn("Exception while parsing BQLExpression. Failed expression: " + expr + ". Error " + e.getMessage());
            return new BQLExpression("1=2");
        }
    }

Within the above method, the same session is used to invoke and execute another query. This results in the existing ResultSet being detached/closed, because of either of these configurations:

<prop key="hibernate.connection.release_mode">after_transaction</prop>

or

<prop key="hibernate.connection.release_mode">after_statement</prop>

The only ways to make it work are:

<!--prop key="hibernate.connection.release_mode">after_statement</prop-->

or

<prop key="hibernate.connection.release_mode">on_close</prop>

http://docs.jboss.org/hibernate/orm/3.3/reference/en-US/html/session-configuration.html

sybase concurrent process/multi-threading

Encountered a multi-threading issue with this stored procedure:

CREATE PROCEDURE dbo.spc_get_next_sequence_no
 		(@codedesc			varchar(30),
 		 @next_id			integer OUTPUT)
AS

		select @next_id = next_id from next_number where code_desc = @codedesc

        update next_number set next_id = next_id + 1 where code_desc = @codedesc
        

RETURN 0

Table next_number keeps the sequence numbers for different purposes:

code_desc	next_id
receipting_number	732215
receipting_batch_id	2521
conso_tax_extract	230
conso_dist_extract	79
Manual Journal	8933
receipt_number	101891
complaint_num	7
complaint_record_no	128

(I know we should use Sybase sequence, or create table with identity. But this is a legacy database, which I have no control.)

The issue was that more than one thread could come in, and both threads would get the same next_id.

The solution has two parts:
1. make the SP atomic (begin transaction .. commit transaction)
2. lock the row (to make it effectively synchronized: do the update statement first, inside the transaction, so the exclusive lock is held before the select)

refer to:
http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.help.ase_15.0.sqlug/html/sqlug/sqlug843.htm

CREATE PROCEDURE dbo.spc_get_next_sequence_no
 		(@codedesc			varchar(30),
 		 @next_id			integer OUTPUT)
AS
BEGIN TRANSACTION
        update next_number set next_id = next_id + 1 where code_desc = @codedesc
        
		select @next_id = next_id from next_number where code_desc = @codedesc
COMMIT TRANSACTION

RETURN 0

JSP Class file not generated on JRun server

Another team encountered this problem: for an updated JSP, it was translated into a .java file, but the .class file was never built.

It seems I have been away from JSP for some time, or just wasn't thinking clearly about this issue before jumping in.

I tried regenerating the JSP block by block, which finally identified the offending block. Without this block it compiled into classes, so obviously it was a coding issue.

And it's actually quite an obvious error: a JSP expression must not end with a semicolon, since its content is placed inside out.print(...):

<%= dataModel.formValidation.getJSObjectCode() ; %>


Quick JDK Default XPath XML parser


import java.io.StringReader;

import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathExpressionException;
import javax.xml.xpath.XPathFactory;

import org.junit.Test;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;


	@Test
	public void quickParseCBISResponse() {

		XPath xpath = XPathFactory.newInstance().newXPath();
		String response = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"
				+ "<ApplicationTransactionId xmlns=\"http://www.infocomp.com/cbis/messages/Application/1.0\">"
				+ "<transactionId>6426072</transactionId>"
				+ "<allPriced>false</allPriced>"
				+ "</ApplicationTransactionId>";

		Document document = null;
		try {
			DocumentBuilder builder = DocumentBuilderFactory.newInstance()
					.newDocumentBuilder();
			document = builder
					.parse(new InputSource(new StringReader(response)));
		} catch (Exception e) {
			// throw new
			// RuntimeException("The UK New Cash response message could not be parsed. \n\n"
			// + message, e);
		}

		try {
			// note: with the default namespace on the document, the plain path returns nothing;
			// the local-name() form below ignores the namespace and does match
			String id = xpath.evaluate("//transactionId", document);
			//String id = xpath.evaluate("//*[local-name() = 'transactionId']", document);
			System.out.println(id);
		} catch (XPathExpressionException e) {
			e.printStackTrace();
		}

	}

Quick note if log is not working in apache camel

The reason is simply that Apache Camel uses SLF4J (http://camel.apache.org/log.html), so an SLF4J binding needs to be on the classpath.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class AnnotationTypeConverterLoader implements TypeConverterLoader {
    public static final String META_INF_SERVICES = "META-INF/services/org/apache/camel/TypeConverter";
    private static final transient Logger LOG = LoggerFactory.getLogger(AnnotationTypeConverterLoader.class);
...
}

3045 [main] INFO org.apache.camel.impl.converter.AnnotationTypeConverterLoader - Found 3 packages with 15 @Converter classes to load
3065 [main] INFO org.apache.camel.impl.converter.DefaultTypeConverter - Loaded 169 core type converters (total 169 type converters)
3070 [main] INFO org.apache.camel.impl.converter.AnnotationTypeConverterLoader - Loaded 2 @Converter classes

The binding could be either slf4j-simple.jar or slf4j-log4j12.jar:
http://www.slf4j.org/manual.html

http url connection for spring web service template

There was always a "NOT FOUND [404]" issue when parsing the response from the web service, or sometimes even when sending the request, when using the Spring web service template.

The problem turned out to be due to the default HttpURLConnection-based sender provided by the JVM. When I changed the connection to the Spring-provided commons HTTP message sender, based on Jakarta Commons HttpClient, it started working well.

By the way, the JVM-provided HttpURLConnection sender works well on a Windows machine; only when I deployed the application to UNIX did it start complaining with the 404 issue.

Anyway, with the Spring commons HTTP message sender, it works well on both Windows and UNIX.

Another thing to note when using the Spring web service template: we just need to pass the SOAP payload to webServiceTemplate.sendSourceAndReceiveToResult(), as the template will wrap it in the SOAP envelope itself.

If we provide the envelope again, it also throws 404 issues.
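A minimal sketch of both points (the endpoint URI and payload are illustrative; in a real Spring context the message sender would normally be declared as a bean so its lifecycle callbacks run):

import java.io.StringReader;
import java.io.StringWriter;

import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

import org.springframework.ws.client.core.WebServiceTemplate;
import org.springframework.ws.transport.http.CommonsHttpMessageSender;

public class WsClient {
	public static void main(String[] args) throws Exception {
		WebServiceTemplate template = new WebServiceTemplate();
		// swap the default JDK HttpUrlConnection sender for the Commons HttpClient based one
		template.setMessageSender(new CommonsHttpMessageSender());
		template.setDefaultUri("http://example.com/service"); // illustrative endpoint

		// pass only the payload; the template wraps it in the SOAP envelope itself
		String payload = "<myRequest xmlns=\"http://example.com/ns\"/>";
		StringWriter out = new StringWriter();
		template.sendSourceAndReceiveToResult(new StreamSource(new StringReader(payload)),
				new StreamResult(out));
		System.out.println(out);
	}
}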

http://static.springsource.org/spring-ws/site/reference/html/client.html

Jboss 7 classpath and external properties

C:\jboss-as-7.1.1.Final\bin>standalone.bat -P=C:\jboss-as-7.1.1.Final\modules\xxxxx\xxxx.properties

-P or --properties loads the specified properties file into the JBoss system properties.

See boot.log with above properties:

	12:31:56,005 INFO  [org.jboss.modules] JBoss Modules version 1.1.1.GA
	12:31:56,137 INFO  [org.jboss.msc] JBoss MSC version 1.0.2.GA
	12:31:56,168 INFO  [org.jboss.as] JBAS015899: JBoss AS 7.1.1.Final "Brontes" starting
	12:31:56,170 DEBUG [org.jboss.as.config] Configured system properties:
		FPW.details.sql = SELECT  xxxxxx
		awt.toolkit = sun.awt.windows.WToolkit
		camel.context.startonboot = true
		camel.scan.class.resolver = org.apache.camel.jboss.JBossPackageScanClassResolver

		java.net.preferIPv4Stack = true
		java.runtime.name = Java(TM) SE Runtime Environment
		java.runtime.version = 1.6.0_31-b05
		java.specification.name = Java Platform API Specification
		java.specification.vendor = Sun Microsystems Inc.
		java.specification.version = 1.6
		java.util.logging.manager = org.jboss.logmanager.LogManager
		java.vendor = Sun Microsystems Inc.
		java.vendor.url = http://java.sun.com/
		java.vendor.url.bug = http://java.sun.com/cgi-bin/bugreport.cgi
		java.version = 1.6.0_31
		java.vm.info = mixed mode
		java.vm.name = Java HotSpot(TM) 64-Bit Server VM
		java.vm.specification.name = Java Virtual Machine Specification
		java.vm.specification.vendor = Sun Microsystems Inc.
		java.vm.specification.version = 1.0
		java.vm.vendor = Sun Microsystems Inc.
		java.vm.version = 20.6-b01
		javax.management.builder.initial = org.jboss.as.jmx.PluggableMBeanServerBuilder
		javax.xml.datatype.DatatypeFactory = __redirected.__DatatypeFactory
		javax.xml.parsers.DocumentBuilderFactory = __redirected.__DocumentBuilderFactory
		javax.xml.parsers.SAXParserFactory = __redirected.__SAXParserFactory
		javax.xml.stream.XMLEventFactory = __redirected.__XMLEventFactory
		javax.xml.stream.XMLInputFactory = __redirected.__XMLInputFactory
		javax.xml.stream.XMLOutputFactory = __redirected.__XMLOutputFactory
		javax.xml.transform.TransformerFactory = __redirected.__TransformerFactory
		javax.xml.validation.SchemaFactory:http://www.w3.org/2001/XMLSchema = __redirected.__SchemaFactory
		javax.xml.xpath.XPathFactory:http://java.sun.com/jaxp/xpath/dom = __redirected.__XPathFactory
		jboss.home.dir = C:\jboss-as-7.1.1.Final

	12:31:56,231 DEBUG [org.jboss.as.config] VM Arguments: -XX:+TieredCompilation -Dprogram.name=standalone.bat -Xms64M -Xmx512M -XX:MaxPermSize=256M -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000 -Djava.net.preferIPv4Stack=true -Dorg.jboss.resolver.warning=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djboss.server.default.config=standalone.xml -Xrunjdwp:transport=dt_socket,address=8787,server=y,suspend=n -Dorg.jboss.boot.log.file=C:\jboss-as-7.1.1.Final\standalone\log\boot.log -Dlogging.configuration=file:C:\jboss-as-7.1.1.Final\standalone/configuration/logging.properties
	12:31:56,874 INFO  [org.xnio] XNIO Version 3.0.3.GA
	12:31:56,874 INFO  [org.jboss.as.server] JBAS015888: Creating http management service using socket-binding (management-http)
	12:31:56,882 INFO  [org.xnio.nio] XNIO NIO Implementation Version 3.0.3.GA
	12:31:56,888 INFO  [org.jboss.remoting] JBoss Remoting version 3.2.3.GA
	12:31:56,956 INFO  [org.jboss.as.security] JBAS013101: Activating Security Subsystem
	12:31:56,969 INFO  [org.jboss.as.webservices] JBAS015537: Activating WebServices Extension
	12:31:56,983 INFO  [org.jboss.as.logging] JBAS011502: Removing bootstrap log handlers
	12:31:56,983 INFO  [org.jboss.as.connector] JBAS010408: Starting JCA Subsystem (JBoss IronJacamar 1.0.9.Final)
	12:31:56,996 INFO  [org.jboss.as.security] JBAS013100: Current PicketBox version=4.0.7.Final

The above properties were loaded into system properties from the specified properties file (composerinterfacesmgr.properties).

However, using the above parameter alone still won't put the properties file on the classpath, so the Spring application context configuration below won't work:

<bean class="com.bfm.app.jmim.conf.Configurer">
	<property name="locations">
		<list>
			<value>classpath:xxxx.properties</value>
		</list>
	</property>
</bean>

To resolve this, for now the only solution is to create a new module, put the properties file inside the module, and declare a dependency on this module in jboss-deployment-structure.xml: https://community.jboss.org/wiki/HowToPutAnExternalFileInTheClasspath

<jboss-deployment-structure xmlns="urn:jboss:deployment-structure:1.1">
	<deployment>
		<exclusions>
			<module name="org.apache.log4j" />
			<module name="org.slf4j" />
			<module name="org.slf4j.impl" />
			<module name="org.slf4j.ext" />
            <module name="org.slf4j.jcl-over-slf4j" />
			<module name="org.apache.commons.logging" />
			<module name="org.jboss.logging.jul-to-slf4j-stub" />
			<module name="org.jboss.logging" />
			<module name="org.jboss.as.logging" />
		</exclusions>
		<dependencies>
			<module name="org.osgi.core" />
			<module name="com.xxxx.xxxx" />
		</dependencies>
<!-- 		<resources> -->
<!-- 			<resource-root path="xxxx.properties" /> -->
<!-- 		</resources> -->
	</deployment>
</jboss-deployment-structure>

With the above two together, the application works. Creating only the module and putting the properties on the classpath might not work either; for example, this log4j.xml snippet:


    <appender name="emailAppender" class="org.apache.log4j.net.SMTPAppender">
	<param name="Threshold" value="DEBUG"/>
	<param name="To" value="${smtp.error.recipient}" />
    <param name="BufferSize" value="1" />
	<layout class="org.apache.log4j.PatternLayout">
		<param name="ConversionPattern" value="%d %-5p [%t] %c{2} (%F:%L) - %m%n" />
	</layout>
</appender>

The above requires a system property, or at least one that the log4j class loader is aware of. So the first change, specifying -P / --properties, is still needed to put those properties into the system environment.

Good concise explanation on JVM memory

As we know objects are created inside heap memory and Garbage collection is a process which removes dead objects from Java Heap space and returns memory back to Heap in Java. For the sake of Garbage collection Heap is divided into three main regions named as New Generation, Old or Tenured Generation and Perm space. New Generation of Java Heap is part of Java Heap memory where newly created object are stored, During the course of application many objects created and died but those remain live they got moved to Old or Tenured Generation by Java Garbage collector thread on Major or full garbage collection. Perm space of Java Heap is where JVM stores Meta data about classes and methods, String pool and Class level details. You can see How Garbage collection works in Java for more information on Heap in Java and Garbage collection.

Log4j email appender is default to ERROR level

I had kept wondering why my email appender was not working. The log4j XSD would be better if it gave feedback on malformed configuration: combining a Threshold or logger level with SMTPAppender without any exception being thrown is really confusing, while the only way to change the triggering level for SMTPAppender is actually to implement and provide a TriggeringEventEvaluator.

http://stackoverflow.com/questions/3544269/log4j-smtp-digest-aggregate-emails

By default, an email message will be sent when an ERROR or higher severity message is appended. The triggering criteria can be modified by setting the evaluatorClass property with the name of a class implementing TriggeringEventEvaluator, setting the evaluator property with an instance of TriggeringEventEvaluator or nesting a triggeringPolicy element where the specified class implements TriggeringEventEvaluator. 

While the XSD never warns about it, the combination of threshold and levels with SMTPAppender is confusing:

	<appender name="emailAppender" class="org.apache.log4j.net.SMTPAppender">
		<param name="SMTPHost" value="mailhost.core.blackrock.com" />
		<param name="Threshold" value="DEBUG"/>

	<logger name="com.bfm.app.xxxx">
		<level value="info" />
		<appender-ref ref="emailAppender" />
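If a lower trigger level is really wanted, a custom evaluator can be supplied via the evaluatorClass param (a minimal sketch; the class name is illustrative):

import org.apache.log4j.Level;
import org.apache.log4j.spi.LoggingEvent;
import org.apache.log4j.spi.TriggeringEventEvaluator;

// fires the SMTPAppender for WARN and above, instead of the built-in ERROR-and-above default
public class WarnLevelEvaluator implements TriggeringEventEvaluator {
	public boolean isTriggeringEvent(LoggingEvent event) {
		return event.getLevel().isGreaterOrEqual(Level.WARN);
	}
}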

Known bug between JAXB and Camel

https://issues.apache.org/jira/browse/CAMEL-4955:

https://java.net/jira/browse/JAXB-860;

Basically, if using jaxb-api/jaxb-impl version 2.2.x and a Camel version before 2.10 (not yet released at the time), it might intermittently throw an NPE while parsing the camel-context XML.

The NPE is about com/sun/xml/bind/v2/runtime/ClassBeanInfoImpl.checkOverrideProperties()

I have encountered the above exception on and off when deploying a web application onto JBoss 7.1.1, because the default JAXB module in JBoss 7 is
C:\jboss-as-7.1.1.Final\modules\javax\xml\bind\api\main\jboss-jaxb-api_2.2_spec-1.0.3.Final

Putting the older-versioned JAXB library into the application library folder seems to have solved the problem for now. I might need to exclude the JBoss JAXB module in jboss-deployment-structure.xml later if needed.

some references

https://community.jboss.org/message/739232


Interesting.
It works now, I've just rolled back JAXB Version inside JBOSS.
 
If anyone has the same problem, the solution for JBOSS 7.1.1 FInal is:
 
Go into $JBOSS_HOME/modules/com/sun/xml/bind/main
delete all files (jaxb-impl-2.2.4.jar, jaxb-xjc-2.2.4.jar etc, 5 files)
 
The simplest way is to take the same module, version 2.2, from JBOSS 7.0.1, it has jaxb 2.2, just copy all 5 files from
jboss_7.0.1/modules/com/sun/xml/bind/main/*
to
jboss_7.1.1.Final/modules/com/sun/xml/bind/main/
 
I am using camel 2.9.2 again, no snapshot version (or 2.10) necessary.
 
I've started and stopped it again at least 10 times, it's working. Now I understand, why I did not get such a trouble with jboss 7.0.1.
 
it could be also helpful:
http://java.net/jira/browse/JAXB-860

Pasted from <https://community.jboss.org/message/739232> 


===============================================================

Reading between the lines I think you have encountered an incompatibility between Apache Camel and JAXB 2.2 (which is part of the standard JEE6 spec and also incorporated into Java 7).
 
It looks like the Camel folks are actively working on a fix for their 2.10.0 release.

Pasted from <https://community.jboss.org/message/739232> 

================================================================

https://issues.apache.org/jira/browse/CAMEL-4955
Due to a bug of JAXB RI included in JDK 7, (see http://java.net/jira/browse/JAXB-860)
NPE raised while creating CamelContext in camel-spring. I't a show-stopper for everyone using camel-spring with JDK 7.
related mailing list thread: http://mail-archives.apache.org/mod_mbox/camel-users/201108.mbox/%3CCAJ_S2Sn+Y692R48yrYBZoB66Pz1YPC6H-i=ZozBFPa=G54D78w@mail.gmail.com%3E


Pasted from <https://issues.apache.org/jira/browse/CAMEL-4955> 

Java class loading again jboss XML Apis

xml-apis.jar and others like xerces.jar, jaxb.jar etc. almost always cause problems during deployment. The solution is almost always to continue using the JDK- or container-provided library, instead of duplicating these inside the application EAR or WAR.

If it is mandatory to deploy your own version of these libraries, the endorsed libraries mechanism, jboss-classloading.xml, or jboss-deployment-structure.xml should be configured.

apache camel csv or excel parser

The default Camel CSV marshalling won't keep the header row: http://camel.apache.org/csv.html

from("direct:start").
    marshal().csv().
    to("mock:result");

It converts the collection into a list of comma-separated rows; however, the header row is not kept.

something like

630205,1111598,1,41848.040000
630210,1111603,1,8726.240000

To keep the header, one solution is to use Camel Bindy.

		DataFormat bindy = new
				 BindyCsvDataFormat("com.blk.autoTest.bindy");
	 

	    .beanRef("COMDailyProdReportService","process")//use this service class to get the result set

	    //set the body as list of map ?
	    //the header would be lost after marshal
	    .marshal(bindy)
	    //.csv()
	    .beanRef("COMDailyProdReportService","check")

And on the Bindy class, specify generateHeaderColumns=true:

@CsvRecord(separator ="," , crlf = "UNIX", skipFirstLine = false, generateHeaderColumns=true)
public class COMProdOrders {
}

Then the result is:

orderId,transactionId,direction,amount
630205,1111598,1,41848.040000
630210,1111603,1,8726.240000
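For completeness, a sketch of what the annotated Bindy class might look like (field names inferred from the header above; getters/setters omitted):

import org.apache.camel.dataformat.bindy.annotation.CsvRecord;
import org.apache.camel.dataformat.bindy.annotation.DataField;

@CsvRecord(separator = ",", crlf = "UNIX", generateHeaderColumns = true)
public class COMProdOrders {
	@DataField(pos = 1)
	private String orderId;
	@DataField(pos = 2)
	private String transactionId;
	@DataField(pos = 3)
	private String direction;
	@DataField(pos = 4)
	private String amount;
	// getters and setters omitted
}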

camel timer

Somehow the Camel timer doesn't work when the Spring XML application context is loaded via ClassPathXmlApplicationContext. If using org.apache.camel.spring.Main, it works.

//		new ClassPathXmlApplicationContext(
//				"COMDailyProdReportContext.xml");
		
		
		org.apache.camel.spring.Main.main("-applicationContext", "COMDailyProdReportContext.xml");
from("timer://com/blk/autoTest/COMOrderReport?delay=100&repeatCount=1")

Automation project for retrieving and parsing XML, and outputting to file and email


	    from("timer:swiftTest?repeatCount=1")//timer, auto start, only repeat once
			.beanRef("SWIFTTestService","process")//retrieve some messages #
			.split(body())//split 4 messages into 4 runs, no parallel processing
				    .beanRef("SWIFTTestService","retrieveMsg") //retrieve the message through org.openqa.selenium.WebDriver
				    .to("xslt:swiftTransformer.xsl") //XSLT, transform the XML		    
				    .beanRef("SWIFTTestService","aggregateMsg") //aggregate the parsed XMLs, and put those into instance variable, for passing to Email
				    .to("file:output?fileName=SwiftMessages-${header.messageId}.xml")//write the parsed XML into output folder
				    .log("gcom swift msg result: "+body().toString())				    
		    .end()	    		
			    .beanRef("SWIFTTestService","emailMsg")//Email the aggregated parsed XML as attachments
	        	.log("gcom swift msg result: "+body().toString())

Retrieve the message through org.openqa.selenium.WebDriver

   @PostConstruct
   public void webDriverInit() {
	
	   
		driver.get("https://website"); //

		// Find the text input element by its name
		WebElement username = driver.findElement(By.name("xxxxxxx_username"));
		username.clear();
		username.sendKeys(user);
		

		WebElement password = driver.findElement(By.name("xxxxx_password"));
		password.clear();
		password.sendKeys(pwd);
		
		
		driver.findElement(By.name("Logon")).click();

	   
		parsedGXML = new ArrayList<String>();
		
		
   }

 public String retrieveMsg(Exchange exchange) throws InterruptedException{

	
	  	driver.get("https://xxxxxxxxxxxx"); 
  	
		
		//publisher:clear		
		WebElement publisher = driver.findElement(By.name("xxx"));
		//publisher.clear();
		publisher.sendKeys("xxxxx");



driver.findElement(By.xpath("//input[@type='submit' and @value='Search']")).click();


		(new WebDriverWait(driver, 60)).until(new ExpectedCondition<Boolean>() {
			public Boolean apply(WebDriver d) {
				return !driver.findElements(By.xpath("//a[text()='xxxxxxxx']")).isEmpty();
			}
		});


driver.findElement(By.xpath("//a[text()='xxxxxxx']")).click();


String swiftMsg = driver.findElement(By.xpath("//form[@id='message' and @method='post']/pre")).getText();

		return swiftMsg;
}

Email using javax.mail package.

		Multipart multipart = new MimeMultipart();
		multipart.addBodyPart(messageBodyPart);
		
		int i=0;
		for(String transformedSwiftMsg: transformedSwiftMsgs) {
			i++;
			BodyPart attachmentBodyPart = new MimeBodyPart();
			
		    DataSource ds = new ByteArrayDataSource(transformedSwiftMsg.getBytes("UTF-8"), "application/xml");
	
		    attachmentBodyPart.setDataHandler(new DataHandler(ds));
		    attachmentBodyPart.setFileName("SwiftMsg-"+i+".xml");
		    
			multipart.addBodyPart(attachmentBodyPart);
		}

		msg.setContent(multipart,"text/html");

Sybase IDENTITY

http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.help.ase_15.0.sqlug/html/sqlug/sqlug299.htm

Instead of SEQUENCE like Oracle, Sybase has something equivalent, called IDENTITY.

By default, Adaptive Server begins numbering rows with the value 1, and continues numbering rows consecutively as they are added. Some activities, such as manual insertions, deletions, or transaction rollbacks, and server shutdowns or failures, can create gaps in IDENTITY column values. Adaptive Server provides several methods of controlling identity gaps described in “Managing identity gaps in tables”.


update GCOMElectronicOrder set TransactionID = 30000 where TransactionID = 21503

PuTTY for server auto login

PuTTYgen (puttygen.exe)
Then to automate SSH login, do the following:

Run PuTTYgen.
Select SSH-2 DSA as the Type of Key to generate.
Click generate and move your mouse around to generate randomness.
Click “Save Private Key” and save it somewhere on your computer.
Copy the entire content inside the box to your clipboard (this is your generated public key).
Login to your SSH server.
Create the file ~/.ssh/authorized_keys containing the generated public key (from step 3) on a single line.
Make this file readable (chmod 755).
Then open up PuTTY and navigate to Connection->Data and fill in the auto-login username.
Navigate to Connection->SSH->Auth and under Private-key, browse to the file you had saved earlier on your computer.

Use puTTY to automatically login a SSH session

an inner member of an outer-join clause. This is not allowed if the table also participates in a regular join clause.

This exception almost killed me today!!

an inner member of an outer-join clause. This is not allowed if the table also participates in a regular join clause.

After lots of trying, testing and attempted fixes, Wikipedia helped me:

Alternatives

The effect of an outer join can also be obtained using a UNION ALL between an INNER JOIN and a SELECT of the rows in the "main" table that do not fulfill the join condition. For example
SELECT employee.LastName, employee.DepartmentID, department.DepartmentName
FROM employee
LEFT OUTER JOIN department ON employee.DepartmentID = department.DepartmentID;
can also be written as
SELECT employee.LastName, employee.DepartmentID, department.DepartmentName
FROM employee
INNER JOIN department ON employee.DepartmentID = department.DepartmentID
 
UNION ALL
 
SELECT employee.LastName, employee.DepartmentID, CAST(NULL AS VARCHAR(20))
FROM employee
WHERE NOT EXISTS (
    SELECT * FROM department
             WHERE employee.DepartmentID = department.DepartmentID)

looks easy; looking back, maybe I should have stayed calmer and more relaxed, which should always be the case !!

so simply, overall, use fewer outer joins !!!!! When an outer join is a must, don't mix a regular join on the same table with it.

http://en.wikipedia.org/wiki/Join_(SQL)

Java IOException Not enough space

Nice article explaining how to resolve the Java "Not enough space" exception. Just encountered this exception during the day.

$ ./S84got-oo-pra1 start
[property] java.io.IOException: Not enough space
[property] at java.lang.UNIXProcess.forkAndExec(Native Method)
[property] at java.lang.UNIXProcess.<init>(UNIXProcess.java:53)
[property] at java.lang.ProcessImpl.start(ProcessImpl.java:65)
[property] at java.lang.ProcessBuilder.start(ProcessBuilder.java:451)
[property] at java.lang.Runtime.exec(Runtime.java:591)
[property] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[property] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
[property] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
[property] at java.lang.reflect.Method.invoke(Method.java:585)
[property] at org.apache.tools.ant.taskdefs.Execute$Java13CommandLauncher.exec(Execute.java:828)
[property] at org.apache.tools.ant.taskdefs.Execute.launch(Execute.java:445)
[property] at org.apache.tools.ant.taskdefs.Execute.execute(Execute.java:459)
[property] at org.apache.tools.ant.taskdefs.Execute.getProcEnvironment(Execute.java:165)
[property] at org.apache.tools.ant.taskdefs.Property.loadEnvironment(Property.java:526)
[property] at org.apache.tools.ant.taskdefs.Property.execute(Property.java:403)
[property] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:288)
[property] at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
[property] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
[property] at java.lang.reflect.Method.invoke(Method.java:585)
[property] at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:105)
[property] at org.apache.tools.ant.Task.perform(Task.java:348)
[property] at org.apache.tools.ant.taskdefs.Sequential.execute(Sequential.java:62)
[property] at net.sf.antcontrib.logic.IfTask.execute(IfTask.java:197)
[property] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[property] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
[property] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
[property] at java.lang.reflect.Method.invoke(Method.java:585)
[property] at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:105)
[property] at org.apache.tools.ant.TaskAdapter.execute(TaskAdapter.java:134)
[property] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:288)
[property] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[property] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
[property] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
[property] at java.lang.reflect.Method.invoke(Method.java:585)
[property] at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:105)
[property] at org.apache.tools.ant.Task.perform(Task.java:348)
[property] at org.apache.tools.ant.Target.execute(Target.java:357)
[property] at org.apache.tools.ant.helper.ProjectHelper2.parse(ProjectHelper2.java:140)
[property] at org.apache.tools.ant.ProjectHelper.configureProject(ProjectHelper.java:96)
[property] at org.apache.tools.ant.Main.runBuild(Main.java:683)
[property] at org.apache.tools.ant.Main.startAnt(Main.java:199)
[property] at org.apache.tools.ant.launch.Launcher.run(Launcher.java:257)
[property] at org.apache.tools.ant.launch.Launcher.main(Launcher.java:104)

BUILD FAILED
/apps/gmmt/bea/weblogic922/bin/init-1.9/init.xml:51: The following error occurred while executing this line:
/apps/gmmt/bea/weblogic922/bin/init-1.9/common-macros.xml:233: Please export env. variable START_DIR and rerun! usage working directory when starting the server

adding more swap space solved the issue

 /usr/sbin/mkfile 1g /path/to/swapfile
 /usr/bin/swap -a /path/to/swapfile
 /usr/bin/swap -l

http://wiki.hudson-ci.org/display/HUDSON/IOException+Not+enough+space

java email

		Properties props = new Properties();
		props.put("mail.host",mailServer);
		props.put("mail.transport.protocol", "smtp");

		// create some properties and get the default Session
		Session session = Session.getDefaultInstance(props, null);
		session.setDebug(debug);

		// create a message
		Message msg = new MimeMessage(session);

		// set the from and to address
		InternetAddress addressFrom = new InternetAddress(from);
		msg.setFrom(addressFrom);

		InternetAddress[] addressTo = new InternetAddress[recipients.length];
		for (int i = 0; i < recipients.length; i++) {
			addressTo[i] = new InternetAddress(recipients[i]);
		}
		msg.setRecipients(Message.RecipientType.TO, addressTo);

		// Optional : You can also set your custom headers in the Email if you
		// Want
		msg.addHeader("MyHeaderName", "myHeaderValue");

		
		// Create the message part 
		BodyPart messageBodyPart = new MimeBodyPart();

		// Fill the message
		if((Integer)results.get("recentRun") > 0) {
			msg.setSubject(MessageFormat.format(subject, ""));	
			//MessageFormat.format(subject, "is running");//dynamic properties placeholder ??
			messageBodyPart.setContent(MessageFormat.format((String) message,"",SYD.format(results.get("latestRun"))),"text/html");
		}
		else {
			msg.setSubject(MessageFormat.format(subject, "NOT "));
			//MessageFormat.format(subject, "is not running");//dynamic properties placeholder ??
			messageBodyPart.setContent(MessageFormat.format((String) message,"<b><FONT COLOR='RED'>NOT</FONT></b> ",SYD.format(results.get("latestRun"))),"text/html");
		}
		
		Multipart multipart = new MimeMultipart();
		multipart.addBodyPart(messageBodyPart);

/*		
 * // Part two is attachment
		messageBodyPart = new MimeBodyPart();
		//DataSource source = new FileDataSource("C:\\OOLCheck\\OOLReport.pdf");
		DataSource source = new FileDataSource("OOLReport.pdf");
		messageBodyPart.setDataHandler(new DataHandler(source));
		messageBodyPart.setFileName("OOLReport.pdf");

		multipart.addBodyPart(messageBodyPart);
		
		messageBodyPart = new MimeBodyPart();
		//DataSource source2 = new FileDataSource("C:\\OOLCheck\\OOLReport_OEReport.pdf");
		DataSource source2 = new FileDataSource("OOLReport_OEReport.pdf");
		messageBodyPart.setDataHandler(new DataHandler(source2));
		messageBodyPart.setFileName("OOLReport_OEReport.pdf");

		multipart.addBodyPart(messageBodyPart);

*/		// Put parts in message
		msg.setContent(multipart,"text/html");
		
		try {
			Transport.send(msg);
		} catch (MessagingException e) {
			// don't swallow the failure silently; at least log it
			e.printStackTrace();
		}

spring dynamic properties

Nice post: http://amitsavm.blogspot.com/2012/03/dynamic-placeholder-substitution-in.html#comment-form

Basically, the idea is to use java.text.MessageFormat to replace the placeholders at runtime:

GET_TABLE_COUNT = select count(1) from {0}

@SuppressWarnings("all")
    public static String getProperty(final String key, final String[] arguments) {
        return MessageFormat.format(prop.getProperty(key), (Object[]) arguments);
    }
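
A quick usage sketch under the same assumptions (prop already loaded from the properties file containing the GET_TABLE_COUNT entry above; the table name is made up):

    // hypothetical call; "Transmission" is only an illustrative table name
    String sql = getProperty("GET_TABLE_COUNT", new String[] { "Transmission" });
    // sql now holds: select count(1) from Transmission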

good explanation on broken pipe exception

A pipe is a data stream, typically data being read from a file or from a network socket. A broken pipe occurs when this pipe is suddenly closed from the other end. For a file, this could happen if the file lives on a disc or remote network share which has become disconnected. For a network socket, it could be because the network gets unplugged or the process on the other end crashes.

In Java, there is no BrokenPipeException specifically. This type of error will be found wrapped in a different exception, such as a SocketException or IOException.

from: http://stackoverflow.com/questions/3751661/what-is-the-meaning-of-broken-pipe-exception

for spring bean instantiation, either default constructor or autowired constructor

great to confirm this: Spring needs either a default constructor or an autowired constructor

refer to http://stackoverflow.com/questions/9296849/spring-mvc-no-default-constructor-found

A default constructor is the common case; here, for example, is an autowired constructor:

@Service(value = "objectToMapConverter")
public class ObjectToMapConverter {

  private static final SimpleDateFormat SDF = new SimpleDateFormat("yyyy-MM-dd");
  private static final SimpleDateFormat SYD = new SimpleDateFormat("yyyy-MM-dd");
  static {
    SDF.setTimeZone(TimeZone.getTimeZone("Europe/London"));
    SYD.setTimeZone(TimeZone.getTimeZone("Australia/Sydney"));
  }

  private static ClientTransactionDao txDao;

  @Autowired
  public ObjectToMapConverter(ClientTransactionDao clientTransactionDao) {
    // Spring resolves and injects the DAO when it instantiates this bean
    txDao = clientTransactionDao;
  }

detailed explanation from the Java spec on the finalize method

After the finalize method has been invoked for an object, no further action is taken until the Java virtual machine has again determined that there is no longer any means by which this object can be accessed by any thread that has not yet died, including possible actions by other objects or classes which are ready to be finalized, at which point the object may be discarded.

The finalize method is never invoked more than once by a Java virtual machine for any given object.

[Threading] concurrency issue again…..

This time it is about database processing. I put three DB access queries into a single transactional method as follows:

		// first query: insert data; the identity of the table is sequence-like and
		// auto-increments on insert
		Query saveTransmission = getEntityManager()
				.createNativeQuery("insert into Transmission  (transmitTypeCode, transmitStatusCode, source) values (?1, 'GENERATED','GCOM') ");

		saveTransmission.setParameter(1, transmitTypeCode);

		saveTransmission.executeUpdate();

		// second query: retrieve that same identity, just inserted,
		// which should be the largest value
		Query getTransmissionId = getEntityManager()
				.createNativeQuery("select max(transmitId) from Transmission ");

		BigDecimal transmitId = (BigDecimal) getTransmissionId.getSingleResult();

		log.debug("inserted new transmission: " + transmitId + ": " + iter);

		// third query: use this identity for another table
		Query saveTransmissionFragment = getEntityManager()
				.createNativeQuery("insert into TransmissionFragment (transactionId, revisionNumber, transmitId, fragmentTransactionStatusCode) " +
						"values (?1, ?2, ?3, ?4)");

Then yesterday, an exception happened. The root cause: before the 2nd query retrieved the maximum value, a process on the 2nd server ("clustering") ran and incremented the identity of the Transmission table.

The fix is very easy: combine the first and second queries together.

		Query saveTransmission = getEntityManager()
				.createNativeQuery(	"insert into Transmission  (transmitTypeCode, transmitStatusCode, source) values (?1, 'GENERATED','GCOM') select @@identity ");
		
		saveTransmission.setParameter(1, transmitTypeCode);
		
		BigDecimal  transmitId = (BigDecimal) saveTransmission.getSingleResult();
//		
//		Query getTransmissionId = getEntityManager()
//				.createNativeQuery(	"select max(transmitId) from Transmission ");
//
//		BigDecimal transmitId = (BigDecimal)getTransmissionId.getSingleResult();
//		
//		log.debug("inserted new transmission: "+transmitId+": "+iter);
		
		Query saveTransmissionFragment = getEntityManager()
				.createNativeQuery(	"insert into TransmissionFragment (transactionId, revisionNumber, transmitId, fragmentTransactionStatusCode) " +
						"values (?1, ?2, ?3, ?4)");

Note: I think there is also one thing I missed out, which is about the spring @Transactional annotation.

All the above queries are put into one @Transactional method, which means they either all complete or all fail. However, it doesn't mean LOCKING THE TABLE TILL ALL FINISHED.

Anyway, I guess one needs to always pay attention and think about threading (concurrency) possibilities. (There are at least 2 threads in the production cluster.)

Camel csv bindy, splitter and aggregator

samples

		from(sourceFtpUrl)//probably do archiving here??
			.transacted("PROPAGATION_REQUIRED")
				.onCompletion()
				.onCompleteOnly()
				.log(LoggingLevel.INFO,
						getClass().getName(),
						"this part is consuming one csv file of many rows of records")
				.end()
				.unmarshal(bindy)//this is part to unmarshal to list of objects
				.split(body(), new SomeAggregator())//split the list, pass each bean to someBeanToProcessEachObject, then aggregate with SomeAggregator to collect each new object back into the list
					.beanRef("someBeanToProcessEachObject", "process")
				.end()
				.split(body()).parallelProcessing()
						.beanRef("someProcess", "process")//split again; now each element is an already aggregated object
				.end()
				.log(LoggingLevel.INFO, getClass().getName(),
						"All Processed");

another one: Java thread safety regarding SimpleDateFormat

We have an OMS component using Camel to listen to and poll MQ. To enhance performance, I increased the number of threads listening to the queue, from


    from(""GShareMQ:queue:Q.US_OE.IN.TRANSACTIONSCHEDULE?acknowledgementModeName=CLIENT_ACKNOWLEDGE")
        // keep the JMS message for Client acknowledge on success
        .beanRef("jmsMessageUtil", "addMessageToHeader")
        .convertBodyTo(String.class)
        .log(LoggingLevel.INFO, "Processing SSB Execution:\n${in.body}")

        .beanRef("transactionScheduleParser", "parseTransactionScheduleMessage")

to

 from(""GShareMQ:queue:Q.US_OE.IN.TRANSACTIONSCHEDULE?acknowledgementModeName=CLIENT_ACKNOWLEDGE&concurrentConsumers=10")
        // keep the JMS message for Client acknowledge on success
        .beanRef("jmsMessageUtil", "addMessageToHeader")
        .convertBodyTo(String.class)
        .log(LoggingLevel.INFO, "Processing SSB Execution:\n${in.body}")

        .beanRef("transactionScheduleParser", "parseTransactionScheduleMessage")

Then, on and off, out of a few thousand messages, 20 got exceptions such as


java.lang.NumberFormatException: multiple points
	at sun.misc.FloatingDecimal.readJavaFormatString(FloatingDecimal.java:1084)
	at java.lang.Double.parseDouble(Double.java:510)
	at java.text.DigitList.getDouble(DigitList.java:151)
	at java.text.DecimalFormat.parse(DecimalFormat.java:1303)
	at java.text.SimpleDateFormat.subParse(SimpleDateFormat.java:1936)
	at java.text.SimpleDateFormat.parse(SimpleDateFormat.java:1312)
	at java.text.DateFormat.parse(DateFormat.java:335)
	at com.bfm.cpm.parser.TransactionScheduleParser.parseTransactionScheduleMessage(TransactionScheduleParser.java:145)

or

java.lang.NumberFormatException: For input string: ""
	at java.lang.NumberFormatException.forInputString(NumberFormatException.java:48)
	at java.lang.Long.parseLong(Long.java:431)
	at java.lang.Long.parseLong(Long.java:468)
	at java.text.DigitList.getLong(DigitList.java:177)
	at java.text.DecimalFormat.parse(DecimalFormat.java:1298)
	at java.text.SimpleDateFormat.subParse(SimpleDateFormat.java:1591)
	at java.text.SimpleDateFormat.parse(SimpleDateFormat.java:1312)
	at java.text.DateFormat.parse(DateFormat.java:335)
	at com.bfm.cpm.parser.TransactionScheduleParser.parseTransactionScheduleMessage(TransactionScheduleParser.java:137)

lines 137 and 145 are

        Date prvDt = prvDateFmt.parse(prvDateStr);
        Date contrSettlementdate = prvDateFmt.parse(contrSettlementdateStr);

and prvDateFmt is an instance variable, which is not thread safe.
http://stackoverflow.com/questions/6840803/simpledateformat-thread-safety

private SimpleDateFormat prvDateFmt = new SimpleDateFormat("yyyy-MM-dd");

When CPM parses the message, it uses transactionScheduleParser as a singleton Spring bean, so all threads share the same instance of SimpleDateFormat, and that is not thread safe.

Date prvDt = prvDateFmt.parse(prvDateStr);
Date contrSettlementdate = prvDateFmt.parse(contrSettlementdateStr);

============================================updated on Aug 15, 2012
Solution:

I am going to use the ThreadLocal class, which provides one instance per thread. Think this should work.

Here, instead of an instance of SimpleDateFormat, I use an instance of ThreadLocal as the instance variable of the singleton Spring bean, TransactionScheduleParser.

  private ThreadLocal<SimpleDateFormat> postingDateFmt = new ThreadLocal<SimpleDateFormat>() {

	  @Override
	  protected SimpleDateFormat initialValue() {
		  // one SimpleDateFormat per thread, created lazily on the first get()
		  SimpleDateFormat df = new SimpleDateFormat("yyyy-MM-ddHH:mm:ss");
		  df.setTimeZone(omsTz);
		  return df;
	  }
  };

  private ThreadLocal<SimpleDateFormat> prvDateFmt = new ThreadLocal<SimpleDateFormat>() {

	  @Override
	  protected SimpleDateFormat initialValue() {
		  SimpleDateFormat df = new SimpleDateFormat("yyyy-MM-dd");
		  df.setTimeZone(omsTz);
		  return df;
	  }
  };

Within the method, which runs per thread, instead of calling the shared SimpleDateFormat directly, I call get() on the ThreadLocal to obtain this thread's own SimpleDateFormat.

Date contrSettlementdate = prvDateFmt.get().parse(contrSettlementdateStr);
Date actualSettlementdate = prvDateFmt.get().parse(actualSettlementdateStr);

Addon: how ThreadLocal works:
Basically, each Thread holds a map keyed by the ThreadLocal instance, with the value being that thread's own object. Invoking prvDateFmt.get() returns the SimpleDateFormat belonging to the current thread, creating it via initialValue() on first access.

seems I've been away from Java for some time: calendar date difference

I was using this

if( (new Date().getTime() - tradeDate.getTime()) > 1000 * 60 *60 * 24 * 30){
..
}

for making sure the trade date is not older than 3 months, which then ran exactly into the round off problem mentioned here,
http://tripoverit.blogspot.com/2007/07/java-calculate-difference-between-two.html.
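
One concrete pitfall in the original expression, whether or not it is exactly what the linked post describes, is integer overflow: 1000 * 60 * 60 * 24 * 30 is evaluated in int arithmetic and wraps to a negative number. A small sketch of the long-arithmetic variant:

    // forcing long arithmetic with the 1000L literal keeps the threshold positive
    long thirtyDaysMillis = 1000L * 60 * 60 * 24 * 30;
    if (new Date().getTime() - tradeDate.getTime() > thirtyDaysMillis) {
        // trade is older than 30 days
    }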

should instead use Calendar.before():

    java.util.Date tradeDate = ct.getActionDate();
	Date now = new Date();
	Calendar threeMonBef = Calendar.getInstance();threeMonBef.setTime(now);
	threeMonBef.add(Calendar.MONTH, -3);
	
	Calendar cal1 = Calendar.getInstance();cal1.setTime(tradeDate);

if( cal1.before(threeMonBef)){
..
}

==================================================updated Aug 15, 2012
// only date information should be maintained, instead of datetime, to avoid problems due to timezone differences.

    /**
     * Get previous Business Day of the current system date.
     * Note that this method is heavy because it calls isBusinessDay, which required Database query, multiple time.
     * So, this should be called when it is really required.
     * @param channelCtx
     * @return
     */
    private Date getPreviousBusDay(ChannelContext channelCtx) {


        // yes. I believe that GOCM v1 works only for Tokyo.
        Calendar nowInTokyo = Calendar.getInstance(TimeZone.getTimeZone("Asia/Tokyo"));
        // for testing the situation that Date part is different between Tokyo and GMT.
//        nowInTokyo.set(Calendar.AM_PM, Calendar.AM);
//        nowInTokyo.set(Calendar.HOUR, 7);
//        nowInTokyo.set(Calendar.MINUTE, 30);

        // copy the Tokyo date to a GMT/PDT calendar and truncate the time part.
        // This is required to get a correct result from isBusinessDay when the previous calendar date is a holiday.
//        Calendar prevBusDay = Calendar.getInstance(TimeZone.getTimeZone("GMT"));
        // weblogic server in US/ewd uses PDT, local dev may use a different timezone
        Calendar prevBusDay = Calendar.getInstance(); //use the timeZone of the host
        prevBusDay.set(Calendar.DATE, nowInTokyo.get(Calendar.DATE));
        prevBusDay.set(Calendar.AM_PM, Calendar.AM);
        prevBusDay.set(Calendar.HOUR, 0);
        prevBusDay.set(Calendar.MINUTE, 0);
        prevBusDay.set(Calendar.SECOND, 0);
        prevBusDay.set(Calendar.MILLISECOND, 0);

        prevBusDay.add(Calendar.DATE, -1);
        while(!isBusinessDay(prevBusDay.getTime(), channelCtx)){
            prevBusDay.add(Calendar.DATE, -1);
        }
        return prevBusDay.getTime();
    }

camel again, csv bindy and file2 EIP

Camel is a good EIP framework, the best as far as I know, though I really only know a few.

I have been working with the CSV Bindy data format recently for a project: consume a CSV file over FTP, unmarshal it into Java objects, then publish to MQ. Camel makes EIP very, very easy.

Just one point to note, which was not so easy to locate among the tons of internet information we googled:
the crlf (carriage return) setting defaults to Windows, and I think we need to change it to Unix for an application running in a Unix environment. (Even though I guess it might work with either configuration.)

To make things slightly more complex, if we want to use Camel's default file options, move, preMove and moveFailed, to handle archiving, in-progress and error handling, then be careful with the file name if you are going to change it.

For example, if we plan to change the file name by appending a timestamp, make sure we don't change the file extension. Instead of

moveFailed=/error/${file:name}.${date:now:yyyyMMddHHmmssSSS} 

better use,

moveFailed=/error/${file:name.noext}-${date:now:yyyyMMddHHmmssSSS}.${file:ext}

That's exactly what happened to me: Camel kept throwing the "No records found in CSV file" exception, because I had set

skipFirstLine= true

and it seems Camel then got confused by the changed file extension and couldn't recognize the crlf (carriage return).
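
For context, a minimal sketch of a Bindy-annotated record class showing where skipFirstLine and the crlf setting live (field names and positions here are made up):

    import org.apache.camel.dataformat.bindy.annotation.CsvRecord;
    import org.apache.camel.dataformat.bindy.annotation.DataField;

    // Illustrative only: not the actual project model class.
    @CsvRecord(separator = ",", skipFirstLine = true, crlf = "UNIX")
    public class TradeRecord {

        @DataField(pos = 1)
        private String tradeId;

        @DataField(pos = 2)
        private String amount;

        // getters and setters omitted
    }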

http://camel.apache.org/file2.html

http://camel.apache.org/bindy.html

Class loader classloader

Java class loader, classloader! A recent production deployment encountered a ClassCastException, which might be due to the class loader again.

I will update once verified. Java class loader, got to crack it.

And I found myself too focused and tired out, too tired to think clearly. Note this!

=================================================================================

Here is the exception encountered, which failed the deployment:

ClassCastException encountered during gcom v3 production release.

2012-06-09 05:08:20,659 INFO [main] spring.ServiceBean - Exposing service with name {http://www.bglobal.com/services/FASService}FASService
2012-06-09 05:08:20,669 INFO [main] ldap.LdapInstanceDescriptorPersistenceHelper - Services environment is OU=PROD
2012-06-09 05:08:20,672 WARN [main] ldap.LdapInstanceDescriptorPersistenceHelper - Could not configure the Realm
javax.naming.NoInitialContextException: Cannot instantiate class: com.sun.jndi.dns.DnsContextFactory. Root exception is
java.lang.ClassCastException: com.sun.jndi.dns.DnsContextFactory cannot be cast to javax.naming.spi.InitialContextFactory
at javax.naming.spi.NamingManager.getInitialContext(NamingManager.java:659)
at javax.naming.InitialContext.getDefaultInitCtx(InitialContext.java:250)
at javax.naming.InitialContext.init(InitialContext.java:226)
at javax.naming.InitialContext.<init>(InitialContext.java:202)
at javax.naming.directory.InitialDirContext.<init>(InitialDirContext.java:87)
at com.bglobal.reuse.serviceframework.directory.Realm.<init>(Realm.java:74)
at com.bglobal.reuse.serviceframework.discovery.ldap.LdapInstanceDescriptorPersistenceHelper.<init>(LdapInstanceDescriptorPersistenceHelper.java:131)
at com.bglobal.reuse.serviceframework.discovery.ldap.LdapInstanceDescriptorPersistenceHelper.<init>(LdapInstanceDescriptorPersistenceHelper.java:102)
at com.bglobal.reuse.serviceframework.discovery.DefaultServiceLocator.<init>(DefaultServiceLocator.java:27)
at com.bglobal.reuse.serviceframework.discovery.Registry.<init>(Registry.java:14)
at com.bglobal.reuse.xfire.extensions.server.listeners.ServiceRegistrationEventListener.getRegistry(ServiceRegistrationEventListener.java:174)
at com.bglobal.reuse.xfire.extensions.server.listeners.ServiceRegistrationEventListener.configureDiscovery(ServiceRegistrationEventListener.java:100)
at com.bglobal.reuse.xfire.extensions.server.listeners.ServiceRegistrationEventListener.endpointRegistered(ServiceRegistrationEventListener.java:53)
at org.codehaus.xfire.service.DefaultServiceRegistry.register(DefaultServiceRegistry.java:58)
at org.codehaus.xfire.spring.ServiceBean.afterPropertiesSet(ServiceBean.java:226)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1477)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1417)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:519)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:291)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:288)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:190)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:580)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:895)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:425)
at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:276)
at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:197)
at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:47)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:3910)
at org.apache.catalina.core.StandardContext.start(StandardContext.java:4393)
at org.jboss.web.tomcat.service.deployers.TomcatDeployment.performDeployInternal(TomcatDeployment.java:310)
at org.jboss.web.tomcat.service.deployers.TomcatDeployment.performDeploy(TomcatDeployment.java:142)
at org.jboss.web.deployers.AbstractWarDeployment.start(AbstractWarDeployment.java:461)
at org.jboss.web.deployers.WebModule.startModule(WebModule.java:118)
at org.jboss.web.deployers.WebModule.start(WebModule.java:97)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.jboss.mx.interceptor.ReflectedDispatcher.invoke(ReflectedDispatcher.java:157)
at org.jboss.mx.server.Invocation.dispatch(Invocation.java:96)
at org.jboss.mx.server.Invocation.invoke(Invocation.java:88)
at org.jboss.mx.server.AbstractMBeanInvoker.invoke(AbstractMBeanInvoker.java:264)
at org.jboss.mx.server.MBeanServerImpl.invoke(MBeanServerImpl.java:668)
at org.jboss.system.microcontainer.ServiceProxy.invoke(ServiceProxy.java:206)
at $Proxy38.start(Unknown Source)
at org.jboss.system.microcontainer.StartStopLifecycleAction.installAction(StartStopLifecycleAction.java:42)
at org.jboss.system.microcontainer.StartStopLifecycleAction.installAction(StartStopLifecycleAction.java:37)
at org.jboss.dependency.plugins.action.SimpleControllerContextAction.simpleInstallAction(SimpleControllerContextAction.java:62)
at org.jboss.dependency.plugins.action.AccessControllerContextAction.install(AccessControllerContextAction.java:71)
at org.jboss.dependency.plugins.AbstractControllerContextActions.install(AbstractControllerContextActions.java:51)
at org.jboss.dependency.plugins.AbstractControllerContext.install(AbstractControllerContext.java:348)
at org.jboss.system.microcontainer.ServiceControllerContext.install(ServiceControllerContext.java:286)
at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:1631)
at org.jboss.dependency.plugins.AbstractController.incrementState(AbstractController.java:934)
at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1082)
at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:984)
at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:822)
at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:553)
at org.jboss.system.ServiceController.doChange(ServiceController.java:688)
at org.jboss.system.ServiceController.start(ServiceController.java:460)
at org.jboss.system.deployers.ServiceDeployer.start(ServiceDeployer.java:163)
at org.jboss.system.deployers.ServiceDeployer.deploy(ServiceDeployer.java:99)
at org.jboss.system.deployers.ServiceDeployer.deploy(ServiceDeployer.java:46)
at org.jboss.deployers.spi.deployer.helpers.AbstractSimpleRealDeployer.internalDeploy(AbstractSimpleRealDeployer.java:62)
at org.jboss.deployers.spi.deployer.helpers.AbstractRealDeployer.deploy(AbstractRealDeployer.java:50)
at org.jboss.deployers.plugins.deployers.DeployerWrapper.deploy(DeployerWrapper.java:171)
at org.jboss.deployers.plugins.deployers.DeployersImpl.doDeploy(DeployersImpl.java:1439)
at org.jboss.deployers.plugins.deployers.DeployersImpl.doInstallParentFirst(DeployersImpl.java:1157)
at org.jboss.deployers.plugins.deployers.DeployersImpl.doInstallParentFirst(DeployersImpl.java:1178)
at org.jboss.deployers.plugins.deployers.DeployersImpl.doInstallParentFirst(DeployersImpl.java:1210)
at org.jboss.deployers.plugins.deployers.DeployersImpl.install(DeployersImpl.java:1098)
at org.jboss.dependency.plugins.AbstractControllerContext.install(AbstractControllerContext.java:348)
at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:1631)
at org.jboss.dependency.plugins.AbstractController.incrementState(AbstractController.java:934)
at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1082)
at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:984)
at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:822)
at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:553)
at org.jboss.deployers.plugins.deployers.DeployersImpl.process(DeployersImpl.java:781)
at org.jboss.deployers.plugins.main.MainDeployerImpl.process(MainDeployerImpl.java:702)
at org.jboss.system.server.profileservice.repository.MainDeployerAdapter.process(MainDeployerAdapter.java:117)
at org.jboss.system.server.profileservice.repository.ProfileDeployAction.install(ProfileDeployAction.java:70)
at org.jboss.system.server.profileservice.repository.AbstractProfileAction.install(AbstractProfileAction.java:53)
at org.jboss.system.server.profileservice.repository.AbstractProfileService.install(AbstractProfileService.java:361)
at org.jboss.dependency.plugins.AbstractControllerContext.install(AbstractControllerContext.java:348)
at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:1631)
at org.jboss.dependency.plugins.AbstractController.incrementState(AbstractController.java:934)
at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1082)
at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:984)
at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:822)
at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:553)
at org.jboss.system.server.profileservice.repository.AbstractProfileService.activateProfile(AbstractProfileService.java:306)
at org.jboss.system.server.profileservice.ProfileServiceBootstrap.start(ProfileServiceBootstrap.java:271)
at org.jboss.bootstrap.AbstractServerImpl.start(AbstractServerImpl.java:461)
at org.jboss.Main.boot(Main.java:221)
at org.jboss.Main$1.run(Main.java:556)
at java.lang.Thread.run(Thread.java:619) 

Reason for this:
A team member put some redundant libraries into the package, which caused duplicated classes to be loaded by different class loaders, ultimately causing this ClassCastException, which failed the deployment.

      <dependency>
           <groupId>com.ibm.mq</groupId>
           <artifactId>jndi</artifactId>
           <version>${mq.version}</version>
       </dependency>
       <dependency>
           <groupId>com.ibm.mq</groupId>
           <artifactId>jta</artifactId>
           <version>${mq.version}</version>
       </dependency>
       <dependency>
           <groupId>com.ibm.mq</groupId>
           <artifactId>ldap</artifactId>
           <version>${mq.version}</version>
       </dependency>
       <dependency>
           <groupId>com.ibm.mq</groupId>
           <artifactId>postcard</artifactId>
           <version>${mq.version}</version>
       </dependency>
       <dependency>
           <groupId>com.ibm.mq</groupId>
           <artifactId>providerutil</artifactId>
           <version>${mq.version}</version>
       </dependency>
       <dependency>
           <groupId>com.ibm.mq</groupId>
           <artifactId>rmm</artifactId>
           <version>${mq.version}</version>
       </dependency> 

The duplicate javax.naming.spi.InitialContextFactory is also in this library, which could cause two copies of the same class to be loaded by different classloaders.

      <dependency>
           <groupId>com.ibm.mq</groupId>
           <artifactId>jndi</artifactId>
           <version>${mq.version}</version>
       </dependency>

Readings:
https://community.jboss.org/wiki/JBossClassLoaderIntroduction

http://docs.jboss.org/jbossas/docs/Server_Configuration_Guide/4/html/Class_Loading_and_Types_in_Java-ClassCastExceptions___Im_Not_Your_Type.html

Java Date timezone conversion

First, a method to convert a Date to a UTC string:

  private String dateToUtcString(Date convert){
	    Calendar now = Calendar.getInstance(TimeZone.getTimeZone("UTC"));
	    now.setTime(convert);
	    try {
	      return DatatypeFactory.newInstance().newXMLGregorianCalendar((GregorianCalendar) now).toString();
	    } catch (DatatypeConfigurationException e) {
	      throw new RuntimeException(e);
	    }
	  }

The second method assumes/forces the passed-in date to be UTC, then converts it to the PST timezone:

private String dateToPstString(Date convert) {
		String value = convert.toString();

		DateFormat df1 = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
		df1.setTimeZone(TimeZone.getTimeZone("UTC"));

		// Parses the value and assumes it represents a date and time in the UTC
		// timezone
		try {
			Date utcDate = df1.parse(value);// database result is UTC time
			Calendar pstDate = Calendar.getInstance(TimeZone
					.getTimeZone("GMT-8:00"));// always -8, as CART hardcoded
												// +17
			pstDate.setTime(utcDate);

			return DatatypeFactory.newInstance()
					.newXMLGregorianCalendar((GregorianCalendar) pstDate)
					.toString();
		} catch (DatatypeConfigurationException e) {
			throw new RuntimeException(e);

		} catch (ParseException e1) {
			// fall through and return the raw value below
			e1.printStackTrace();
		}
		}

		return value;
	}

Good to know: MQ configuration

http://publib.boulder.ibm.com/infocenter/wmqv6/v6r0/index.jsp?topic=%2Fcom.ibm.mq.csqzaw.doc%2Fuj19900_.htm

http://publib.boulder.ibm.com/infocenter/wmqv7/v7r0/topic/com.ibm.mq.csqzav.doc/un11010_.htm

when MQQueueManager tries to access the queue with certain open options, let's say

MQOO_SET

on secured queues, you might get an authorisation failure.

openOptions
Options that control the opening of the queue. Valid options are:

MQC.MQOO_ALTERNATE_USER_AUTHORITY
Validate with the specified user identifier.
MQC.MQOO_BIND_AS_QDEF
Use default binding for queue.
MQC.MQOO_BIND_NOT_FIXED
Do not bind to a specific destination.
MQC.MQOO_BIND_ON_OPEN
Bind handle to destination when queue is opened.
MQC.MQOO_BROWSE
Open to browse message.
MQC.MQOO_FAIL_IF_QUIESCING
Fail if the queue manager is quiescing.
MQC.MQOO_INPUT_AS_Q_DEF
Open to get messages using queue-defined default.
MQC.MQOO_INPUT_SHARED
Open to get messages with shared access.
MQC.MQOO_INPUT_EXCLUSIVE
Open to get messages with exclusive access.
MQC.MQOO_INQUIRE
Open for inquiry – required if you want to query properties.
MQC.MQOO_OUTPUT
Open to put messages.
MQC.MQOO_PASS_ALL_CONTEXT
Allow all context to be passed.
MQC.MQOO_PASS_IDENTITY_CONTEXT
Allow identity context to be passed.
MQC.MQOO_SAVE_ALL_CONTEXT
Save context when message retrieved*.
MQC.MQOO_SET
Open to set attributes —required if you want to set properties.
MQC.MQOO_SET_ALL_CONTEXT
Allows all context to be set.
MQC.MQOO_SET_IDENTITY_CONTEXT
Allows identity context to be set.
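
For illustration, a minimal sketch of opening a queue with some of the options above using the WebSphere MQ classes for Java (queue manager and queue names are made up):

    import com.ibm.mq.MQC;
    import com.ibm.mq.MQException;
    import com.ibm.mq.MQQueue;
    import com.ibm.mq.MQQueueManager;

    public class OpenOptionsExample {

        public static void main(String[] args) throws MQException {
            MQQueueManager qMgr = new MQQueueManager("SOME.QMGR");

            // MQOO_INQUIRE is needed to read queue attributes, MQOO_SET to change them;
            // without the matching authority on a secured queue, accessQueue typically
            // fails with an MQException (reason 2035, MQRC_NOT_AUTHORIZED)
            int openOptions = MQC.MQOO_INQUIRE | MQC.MQOO_SET | MQC.MQOO_FAIL_IF_QUIESCING;

            MQQueue queue = qMgr.accessQueue("SOME.QUEUE", openOptions);
            try {
                // inquire/set attribute calls would go here
            } finally {
                queue.close();
                qMgr.disconnect();
            }
        }
    }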

Nice wiki on Sybase SQL debugger

from http://www.petersap.nl/SybaseWiki/index.php?title=SQL-Debugger_-_DBA_usage

Sybase's sql-debugger (sqldbgr) is a tool included with the ASE installation and can be used to debug stored procedures and triggers. This might give you the impression that this tool is mainly for developers and not for DBA's. However, the debugger also allows you to attach to an already running process (another spid), examine or change local variables of that other session and even inspect their temporary tables. For production environments the tool provides just that extra information where Monitoring Tables fall short.

==Example of SQL code that needs to be debugged==
As an example the following code is created into a database. Please note that for readability things like transaction handling, error checking and compilation requirements have been left out. Description of functionality is given below.
 create procedure myProc @a int, @b int
 as
 
 declare @c int
 
 select @c = c_value
 from myTable
 where a_value = @a
 
 select d_value
 into #work
 from myOtherTable
 where b_value = @b
 
 execute myOtherProc @c
 go

 create procedure myOtherProc @c int
 as
 
 select *
 from BigTable
 join #work on (BigTable.d_value = #work.d_value)
 where c_column = @c
 go

When the myProc procedure is executed it retrieves a value from a table, populates a worktable and executes the stored procedure myOtherProc. The stored procedure myOtherProc selects data from the worktable #work and a very big table. Once this procedure is running it is impossible to determine the content of the temporary table at run-time (or a cumbersome procedure must be followed). Any inspection of local variables is not possible. Now this is where SQL-debugger kicks in.

==Starting the debugger==
You can start the debugger like this:
 $SYBASE/$SYBASE_ASE/bin/sqldbgr -U <username> -P <password> -S <host:portnumber>
The parameters username and password speak for themselves, host:portnumber should be substituted with the hostname (or IP-address) and the portnumber where your ASE server is listening on.

Make sure to put a space between the command switch and the argument, so put a space after -U, -P and the -S. Check that the SYBASE_JRE or JAVA_HOME environment variable has been set and points to a Java runtime environment.

Once you are logged on a prompt will be shown:
 (sqldbg)
To exit the debugger, just type 'quit'

==Attaching to a session==
To connect to another session you must know its spid. Within the debugger you can execute normal sql commands like sp_who, but you must put "mysql" in front of the statement.
Remember that you are working with a debugger, so the output of e.g. sp_who is not very nicely formatted and you can only type your command on a single line. No "go" is needed to execute the command within the debugger. Just press 'Enter'.
 (sqldbg) mysql sp_who
When you have determined the spid, you can connect to the session with the attach command:
 (sqldbg) attach <spid>
 (sqldbg)
When you get "You cannot debug a task that is not owned by you" the session is using another username than you. See below how to resolve this.
With the detach command you can detach from an attached session.
 (sqldbg) detach <spid>

==Retrieving the call stack==
When you have been attached successfully to the other spid you can see the call-stack from that session with the "where" command
 (sqldbg) where
 (dbo.myOtherProc::5::@c = 2)
 (dbo.myProc::17::@a = 1,@b = 3)
 (ADHOC::1::null)
 (sqldbg)
From this we can learn quite a lot:
* At the first line (dbo.myOtherProc::5::@c = 2) it is shown that the procedure dbo.myOtherProc is currently running, it is at line 5 within that stored procedure and a value of 2 has been passed into the @c parameter. Since this is the first line from the "where" output, the level is numbered as 0.
* At the second line (dbo.myProc::17::@a = 1,@b = 3) the procedure dbo.myProc is shown. This means that myProc called myOtherProc at line 17. Two variables were passed to the myProc procedure. Now we are at level 1.
* Finally, (ADHOC::1::null) tells us that the myProc procedure was called from a command line tool (like isql / SQL-Advantage), at line 1 of the batch. This is level 2.

==Viewing local variables==
Local variables can be viewed with the "show variables" command.
 (sqldbg) show variables
 (sqldbg)
In this example no output is shown. This is because "show variables" works default at level 0 and a distinction is made between variables declared within the stored procedure, and parameters that were declared in the "create procedure" statement. Indeed, no local variables were declared within the myOtherProc stored procedure and therefore "show variables" shows nothing.
 (sqldbg) show variables at level 1
 int @c 2
 (sqldbg)
With "show variables at level 1" we can actually see the declared local variables within the myProc procedure, their datatype and value. In this case @c was declared as an int and a value of 2 has been put into it.

Note: Global variables cannot be retrieved for an attached session. For instance, when the running procedure has changed the transaction isolation level this will be reflected in the @@isolation global variable. This change will not be seen within the debugger that has been attached to that particular session.

==Viewing temporary tables==
To view temporary tables use the "sql" command (not the mysql command), like this:
 (sqldbg) sql select * from #work
 d_value
 4
 (sqldbg)
Now we know that the attached session created a temporary table with just one row. The column d_value has a value of 4.
To select data from a temporary table you must know the name of the table as it is created within the stored procedure. In practice this means that you should have access to the source code of the stored procedure. Alternatively you can run a select on sysobjects in the temporary database and query the name column as in this example:
 select name from tempdb..sysobjects
The output will show the first few characters of the name of the table, followed by the spid.

There is a small problem when the attached session is using a temporary database and that database is not accessible by you. Such a situation can occur when the login/application of the attached session is bound to a specific tempdb. In that case you need to impersonate the other login with the 'setuser' command.

==Modifying local variables or temporary tables==
Local variables (as viewed with the "show variables" command) can be modified with the "set" command. This is only possible for variables at level 0. Example:
 (sqldbg) set @z = 2
Temporary tables can be modified with the "sql" command. Example:
 (sqldbg) sql delete from #work where d_value = 3
Although you are attached to a session this does not mean that you have taken over that session. Any locks set by the session will also affect the debugger. When a temporary table is locked exclusively it cannot be modified through the debugger. Selection of data is always possible, so these locks can indeed be bypassed.

==Resolving "You cannot debug a task that is not owned by you"==
When you try to attach to a session and the error "You cannot debug a task that is not owned by you" is raised, the session is running with another username than yourself. By default you can only attach to a session running with the same username as yourself.
Using the T-sql command "set session authorization" you can impersonate another user and then use the attach command. Before you can execute "set session authorization" you should have been granted privilege to it, even when you have already sa_role or sso_role. To get this privilege, a dba should add you to the master database as a user and then execute "grant set proxy to <your-username>" (preferably with the ‘restricted’ option). Improper usage of "grant set proxy to" can introduce security issues so please read and understand the Sybase documentation with regard to this before using it.
When all requirements have been met you can execute "set session authorization" within the debugger. Example:
 (sqldbg) attach 18
 You cannot debug a task that is not owned by you
 (sqldbg) mysql set session authorization 'joe'
 (sqldbg) attach 18
 (sqldbg)

==Further reading==
Sybase documentation for sqldbgr is here http://infocenter.sybase.com/help/topic/com.sybase.help.ase_15.0.utility/html/utility/utility216.htm

Sybase documentation for "grant" statement is here http://infocenter.sybase.com/help/topic/com.sybase.help.ase_15.0.commands/html/commands/commands59.htm


PL/SQL exception handling

Working on our US project recently, which is actually one-tier: simply DB access/manipulation using a huge chunk of stored procedures.

Exception handling becomes extremely important, and it happens to be missing in our US SPs.

Refer to http://plsql-tutorial.com/plsql-exception-handling.htm
for a good and detailed explanation of exception handling. Sybase and Oracle PL/SQL are similar in this respect.

blocks…
exception
when .. then ..
when .. then ..
when others then ..

Exception Handling

In this section we will discuss about the following,
1) What is Exception Handling.
2) Structure of Exception Handling.
3) Types of Exception Handling.
1) What is Exception Handling?

PL/SQL provides a feature to handle the exceptions which occur in a PL/SQL block, known as exception handling. Using exception handling we can test the code and prevent it from exiting abruptly. When an exception occurs, a message which explains its cause is received.
PL/SQL Exception message consists of three parts.
1) Type of Exception
2) An Error Code
3) A message
By Handling the exceptions we can ensure a PL/SQL block does not exit abruptly.
2) Structure of Exception Handling.

The General Syntax for coding the exception section

DECLARE
   Declaration section
BEGIN
   Execution section
EXCEPTION
   WHEN ex_name1 THEN
      -Error handling statements
   WHEN ex_name2 THEN
      -Error handling statements
   WHEN Others THEN
      -Error handling statements
END;

General PL/SQL statements can be used in the Exception Block.

When an exception is raised, Oracle searches for an appropriate exception handler in the exception section. For example, in the above example, if the error raised is 'ex_name1', then the error is handled according to the statements under it. Since it is not possible to determine all the possible runtime errors during testing of the code, the 'WHEN Others' exception is used to manage the exceptions that are not explicitly handled. Only one exception can be raised in a block, and the control does not return to the execution section after the error is handled.

If there are nested PL/SQL blocks like this:

DECLARE
   Declaration section
BEGIN
   DECLARE
      Declaration section
   BEGIN
      Execution section
   EXCEPTION
      Exception section
   END;
EXCEPTION
   Exception section
END;

In the above case, if the exception is raised in the inner block it should be handled in the exception block of the inner PL/SQL block else the control moves to the Exception block of the next upper PL/SQL Block. If none of the blocks handle the exception the program ends abruptly with an error.
3) Types of Exception.

There are 3 types of Exceptions.
a) Named System Exceptions
b) Unnamed System Exceptions
c) User-defined Exceptions
a) Named System Exceptions

System exceptions are automatically raised by Oracle, when a program violates a RDBMS rule. There are some system exceptions which are raised frequently, so they are pre-defined and given a name in Oracle which are known as Named System Exceptions.

For example: NO_DATA_FOUND and ZERO_DIVIDE are called Named System exceptions.

Named system exceptions are:
1) Not Declared explicitly,
2) Raised implicitly when a predefined Oracle error occurs,
3) caught by referencing the standard name within an exception-handling routine.
Exception Name: Reason (Error Number)

CURSOR_ALREADY_OPEN: When you open a cursor that is already open. (ORA-06511)
INVALID_CURSOR: When you perform an invalid operation on a cursor, like closing a cursor or fetching data from a cursor that is not opened. (ORA-01001)
NO_DATA_FOUND: When a SELECT…INTO clause does not return any row from a table. (ORA-01403)
TOO_MANY_ROWS: When you SELECT or fetch more than one row into a record or variable. (ORA-01422)
ZERO_DIVIDE: When you attempt to divide a number by zero. (ORA-01476)

For example: suppose a NO_DATA_FOUND exception is raised in a proc; we can write code to handle the exception as given below.

BEGIN
   Execution section
EXCEPTION
   WHEN NO_DATA_FOUND THEN
      dbms_output.put_line('A SELECT...INTO did not return any row.');
END;

b) Unnamed System Exceptions

Those system exceptions for which Oracle does not provide a name are known as unnamed system exceptions. These exceptions do not occur frequently. They have a code and an associated message.

There are two ways to handle unnamed system exceptions:
1. By using the WHEN OTHERS exception handler, or
2. By associating the exception code to a name and using it as a named exception.

We can assign a name to unnamed system exceptions using a Pragma called EXCEPTION_INIT.
EXCEPTION_INIT will associate a predefined Oracle error number to a programmer-defined exception name.

Steps to be followed to use unnamed system exceptions are:
• They are raised implicitly.
• If they are not handled in WHEN OTHERS they must be handled explicitly.
• To handle the exception explicitly, they must be declared using Pragma EXCEPTION_INIT as given above and handled by referencing the user-defined exception name in the exception section.

The general syntax to declare an unnamed system exception using EXCEPTION_INIT is:

DECLARE
   exception_name EXCEPTION;
   PRAGMA EXCEPTION_INIT (exception_name, Err_code);
BEGIN
   Execution section
EXCEPTION
   WHEN exception_name THEN
      handle the exception
END;

For example: let's consider the product table and order_items table from SQL joins.

Here product_id is a primary key in the product table and a foreign key in the order_items table.
If we try to delete a product_id from the product table while it has child records in the order_items table, an exception will be thrown with Oracle error code -2292.
We can provide a name to this exception and handle it in the exception section as given below.

DECLARE
   Child_rec_exception EXCEPTION;
   PRAGMA EXCEPTION_INIT (Child_rec_exception, -2292);
BEGIN
   DELETE FROM product WHERE product_id = 104;
EXCEPTION
   WHEN Child_rec_exception THEN
      dbms_output.put_line('Child records are present for this product_id.');
END;
/

c) User-defined Exceptions

Apart from system exceptions we can explicitly define exceptions based on business rules. These are known as user-defined exceptions.

Steps to be followed to use user-defined exceptions:
• They should be explicitly declared in the declaration section.
• They should be explicitly raised in the Execution Section.
• They should be handled by referencing the user-defined exception name in the exception section.

For example: let's consider the product table and order_items table from SQL joins to explain user-defined exceptions.
Let's create a business rule: if the total number of units of any particular product sold is more than 20, then it is a huge quantity and a special discount should be provided.

DECLARE
   huge_quantity EXCEPTION;
   CURSOR product_quantity IS
      SELECT p.product_name AS name, sum(o.total_units) AS units
      FROM order_items o, product p
      WHERE o.product_id = p.product_id
      GROUP BY p.product_name;
   quantity order_items.total_units%type;
   up_limit CONSTANT order_items.total_units%type := 20;
   message VARCHAR2(200);
BEGIN
   FOR product_rec IN product_quantity LOOP
      quantity := product_rec.units;
      IF quantity > up_limit THEN
         message := 'The number of units of product ' || product_rec.name ||
                    ' is more than 20. Special discounts should be provided. Rest of the records are skipped.';
         RAISE huge_quantity;
      ELSIF quantity = up_limit THEN
         RAISE huge_quantity;
      ELSIF quantity < up_limit THEN
         message := 'The number of units is below the discount limit.';
      END IF;
      dbms_output.put_line(message);
   END LOOP;
EXCEPTION
   WHEN huge_quantity THEN
      raise_application_error(-20100, 'The number of units is above the discount limit.');
END;
/

Nice summary for java jar: class path, index.list and manifest.mf

      08-18-2006
Hello everybody,

I’ve just spent some hours (late hours) fighting against the classpath
in JAR archives, the MANIFEST.MF file and the INDEX.LIST file.

I just want to mention together some of the information I gathered
through internet (and tested on my own). They may be useful to someone
else. I’m not completely sure to have understood everything, though.

1. You can’t launch a Java application using the “java” command, and
using both the -jar (specifies a jar file) option and the -cp
(specifies a classpath) option. They’re mutually exclusive. If you use
the -jar option, then the -cp is completely ignored.

2. You can add a ‘Class-Path’ entry in the manifest file, MANIFEST.MF
file (see http://java.sun.com/j2se/1.3/docs/guide/jar/jar.html). But
you have to be very cautious with it. Several rules apply in the way
you specify your class path:
– Class-Path line can’t be longer than 72 chars (nice one).
– You can break a classpath line into several, but you have to make
the line separation as CR[space][space] (see
http://bugs.sun.com/bugdatabase/view…bug_id=4295946)
– All classpath entries are relatives to the jar archive containing
the manifest.
– A single dot ‘.’ stands for the folder where the jar archive is
placed:
– Classpaths are separated by ‘ ‘ (one space).
– The classpath line must be finished by a carriage return (CR, LF,
or CRLF).
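
For illustration, a made-up MANIFEST.MF with the Class-Path entry wrapped onto a continuation line; the continuation line starts with a single space, and the break may fall in the middle of a file name:

Manifest-Version: 1.0
Main-Class: com.example.Main
Class-Path: lib/commons-lang.jar lib/mysql-connector-java-3.1.11-bin.j
 ar lib/spring-core.jar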

3. If there is a INDEX.LIST besides the MANIFEST.MF, then the class
path specified in the manifest is ignored. This can happen if some of
the jar libraries included in your jar have this INDEX.LIST file. When
you build your jar, you have to break up all jar libraries, and
recompile them into one big fat jar. Some (undesirable?) INDEX.LIST may
pop out to the META-INF folder.

The last point took me some time to figure out. In my case, the culprit
was mysql-connector-java-3.1.11-bin.jar.

Regards,
Mistake
Referenced from http://www.velocityreviews.com/forums/t365098-manifest-mf-and-index-list.html.

Besides, from the Oracle doc on jar: once the class loader finds an INDEX.LIST file in a particular jar file, it always trusts the information listed in it. If a mapping is found for a particular class, but the class loader fails to find it by following the link, an InvalidJarIndexException is thrown. When this occurs, the application developer should rerun the jar tool on the extension to get the right information into the index file.
Refer to http://docs.oracle.com/javase/1.4.2/docs/guide/jar/jar.html.

Cool Advice on Using DB Cursors

from http://www.sql-server-performance.com/2007/cursors/

If possible, avoid using SQL Server cursors. They generally use a lot of SQL Server resources and reduce the performance and scalability of your applications. If you need to perform row-by-row operations, try to find another method to perform the task.

Here are some alternatives to using a cursor:

Use WHILE LOOPS
Use temp tables
Use derived tables
Use correlated sub-queries
Use the CASE statement
Perform multiple queries

More often than not, there are non-cursor techniques that can be used to perform the same tasks as a SQL Server cursor. [2000, 2005, 2008] Updated 1-29-2009

*****

If you do find you must use a cursor, try to reduce the number of records to process.

One way to do this is to move the records that need to be processed into a temp table first, then create the cursor to use the records in the temp table, not from the original table. This of course assumes that the subset of records to be inserted into the temp table are substantially less than those in the original table.

The lower the number of records to process, the faster the cursor will finish. [2000, 2005, 2008] Updated 1-29-2009

*****

If the number of rows you need to return from a query is small, and you need to perform row-by-row operations on them, don’t use a server-side cursor. Instead, consider returning the entire rowset to the client and have the client perform the necessary action on each row, then return any updated rows to the server. [2000, 2005, 2008] Updated 1-29-2009

*****

If you have no choice but to use a server-side cursor in your application, try to use a FORWARD-ONLY or FAST-FORWARD, READ-ONLY cursor. When working with unidirectional, read-only data, use the FAST_FORWARD option instead of the FORWARD_ONLY option, as it has some internal performance optimizations to speed performance. This type of cursor produces the least amount of overhead on SQL Server.

If you are unable to use a fast-forward cursor, then try the following cursors in this order, until you find one that meets your needs. They are listed in the order of their performance characteristics, from fastest to slowest: dynamic, static, and keyset. [2000, 2005, 2008] Updated 1-29-2009

*****

Avoid using static/insensitive and keyset cursors, unless you have no other choice. This is because they cause a temporary table to be created in TEMPDB, which increases overhead and can cause resource contention issues. [2000, 2005, 2008] Updated 1-29-2009

*****

If you have no choice but to use cursors in your application, try to locate the SQL Server tempdb database on its own physical device for best performance. This is because cursors may use the tempdb for temporary storage of cursor data. The faster your disk array running tempdb, the faster your cursor will be. [2000, 2005, 2008] Updated 1-29-2009

*****

Using cursors can reduce concurrency and lead to unnecessary locking and blocking. To help avoid this, use the READ_ONLY cursor option if applicable, or if you need to perform updates, try to use the OPTIMISTIC cursor option to reduce locking. Try to avoid the SCROLL_LOCKS cursor option, which reduces concurrency. [2000, 2005, 2008] Updated 1-29-2009

*****

When you are done using a cursor, don’t just CLOSE it, you must also DEALLOCATE it. Deallocation is required to free up the SQL Server resources used by the cursor. If you only CLOSE the cursor, locks are freed, but SQL Server resources are not. If you don’t DEALLOCATE your cursors, the resources used by the cursor will stay allocated, degrading the performance of your server until they are released. [2000, 2005, 2008] Updated 1-29-2009

*****

If it is appropriate for your application, try to load the cursor as soon as possible by moving to the last row of the result set. This releases the share locks created when the cursor was built, freeing up SQL Server resources. [2000, 2005, 2008] Updated 1-29-2009

*****

If you have to use a cursor because your application needs to manually scroll through records and update them, try to avoid client-side cursors, unless the number of rows is small or the data is static. If the number of rows is large, or the data is not static, consider using a server-side keyset cursor instead of a client-side cursor. Performance is usually boosted because of a reduction in network traffic between the client and the server. For optimum performance, you may have to try both types of cursors under realistic loads to determine which is best for your particular environment. [2000, 2005, 2008] Updated 1-29-2009

*****

When using a server-side cursor, always try to fetch as small a result set as possible. This includes fetching only those rows and columns the client needs immediately. The smaller the cursor, no matter what type of server-side cursor it is, the fewer resources it will use, and performance will benefit. [2000, 2005, 2008] Updated 1-29-2009

*****

If you need to perform a JOIN as part of your cursor, keyset and static cursors are generally faster than dynamic cursors, and should be used when possible. [2000, 2005, 2008] Updated 1-29-2009

*****

If a transaction you have created contains a cursor (try to avoid this if at all possible), ensure that the number of rows being modified by the cursor is small. This is because the modified rows may be locked until the transaction completes or aborts. The greater the number of rows being modified, the greater the locks, and the higher the likelihood of lock contention on the server, hurting performance. [2000, 2005, 2008] Updated 1-29-2009

*****

In SQL Server, there are two options to define the scope of a cursor. LOCAL and GLOBAL keywords in the DECLARE CURSOR statement are used to specify the scope of a cursor. A GLOBAL cursor can be referenced in any stored procedure or batch executed by a connection. LOCAL cursors are more secure as they cannot be referenced outside the procedure or trigger unless they are passed back to the calling procedure or trigger, or by using an output parameter. GLOBAL cursors must be explicitly deallocated or they will be available until the connection is closed. For optimum performance, you should always explicitly deallocate a cursor when you are done using it. LOCAL cursors are implicitly deallocated when the stored procedure, the trigger, or the batch in which they were created terminates. We can use LOCAL cursors for more security and better scope of the cursor in our application, which also helps to reduce resources on the server, boosting performance. Contributed by Nataraj Prakash. [2000, 2005, 2008] Updated 1-29-2009

sybase SP “set chained off”

http://search.sybase.com/kbx/solvedcases?id_number=41046890

Case Description
Symptom 1 – Receiving following error in debug log: “powersoft.powerj.db.java_sql.Query: [jf_pif.query_pif update query in getMoreResults] could not obtain more results due to exception: com.sybase.jdbc.SybSQLException: Stored procedure ‘sp_proc_name’ may be run only in unchained transaction mode. The ‘SET CHAINED OFF’ command will cause the current session to use unchained transaction mode.”

Symptom 2 – Using a stored procedure as a datasource for a datawindow, receiving the following error: “Select error:Stored procedure ‘sp_proc_name’ may be run only in unchained transaction mode. The Set Chained Off command will cause the current session to use unchained transaction mode.”

Tip or Workaround
A stored procedure that executes against an Adaptive Server Enterprise (ASE) database can be set to run in one of three transaction modes: CHAINED, UNCHAINED, and ANY. The sql statement “set chained on/off” will set the chained mode on and off, respectively. The transaction mode ANY will run in both chained or unchained mode.

PowerJ uses jConnect (JDBC) to connect to ASE databases. The chained mode is tied to the AutoCommit() mode of the JDBC connection.

If the stored procedure is set to run in Unchained mode, then set AutoCommit to True, and if the stored procedure is set to run in Chained mode, AutoCommit should be set to False. If the stored procedure is set to run in ANY transaction mode, it doesn’t matter what the AutoCommit mode is.

The AutoCommit mode can be found on the property sheet of the PowerJ transaction object. To adjust the AutoCommit mode, double-click the transaction object in your PowerJ project. Click on the Options tab.

To set AutoCommit to True: Under Initial Settings, choose “Set the following properties” and check (enable) AutoCommit. Click on OK.

Once you have adjusted the AutoCommit mode, re-run the project.

{ASE ships a stored procedure called sp_procxmode. It takes two parameters: a stored procedure name, and a transaction mode. This stored procedure can be used to change the transaction mode of any of your existing stored procedures.}
Resolution
To rectify the ‘Set Chained Off’ error, do the following:

1. Check the transaction mode of the stored procedure

2. If the mode of the stored procedure is Unchained, set AutoCommit to True. If the transaction mode is Chained, set AutoCommit to False.

========================

Basically, it's:
EXEC sp_procxmode 'dbo.p_CheckHasCashOrder', 'unchained'

if AutoCommit is set to true.
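
A minimal jConnect/JDBC sketch of the same idea (connection URL,
credentials and the parameter are assumptions): run the procedure with
AutoCommit matching its transaction mode.

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.DriverManager;

    public class ChainedModeDemo {
        public static void main(String[] args) throws Exception {
            // jConnect driver class (version-dependent) and hypothetical URL/credentials
            Class.forName("com.sybase.jdbc3.jdbc.SybDriver");
            Connection con = DriverManager.getConnection(
                    "jdbc:sybase:Tds:dbhost:5000/mydb", "user", "password");
            try {
                // AutoCommit true == unchained mode on ASE; the proc was marked
                // unchained via: EXEC sp_procxmode 'dbo.p_CheckHasCashOrder', 'unchained'
                con.setAutoCommit(true);
                CallableStatement cs = con.prepareCall("{call dbo.p_CheckHasCashOrder(?)}");
                cs.setString(1, "ACC001");   // hypothetical parameter
                cs.execute();
                cs.close();
            } finally {
                con.close();
            }
        }
    }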

node js, and javascript programming language, and js engine

Node.js is a new and actively discussed technology. It's basically a runtime built on the V8 JavaScript engine (from Google): it parses the JavaScript passed to it and executes it.

And Node.js is mostly used to set up a web server.

refer to nodejs.org/.

This, on the other side, hints at how Google's Chrome OS could work.

A V8 engine (parsing/understanding JS) is enough as the layer that talks to the hardware.

winrar cracker

I used to install cracked versions of WinRAR (mostly cracked by Chinese developers).

I have never been fond of cracked software.

Then just now, it was good to learn that there is a way to register WinRAR nicely:

Copy the block below, paste it into Notepad, and save it as rarreg.key (note that the extension is .key, not .txt). Then copy this rarreg.key into the WinRAR installation folder, overwriting the original file.

RAR registration data
Team EAT
Single PC usage license
UID=c97811c0f0ceeb28c500
6412212250c50047bf9963514c7f193eb62337f99488c06cee841a
6ccc09124aeb25972ad56035c6ab9048e2c5c62f0238f183d28519
aa87488bf38f5b634cf28190bdf438ac593b1857cdb55a7fcb0eb0
c3e4c2736090b3dfa45384e08e9de05c5860d61b3fad98ab846f2c
62a962e2dbbce87706fecf1abea5e40bc5f7d840f55cfb5a4f5a39
f9141fe9b383ce10abb9ed6d61be1f4b52e0777efdbaa0c9608eb9
9f075b0c716d7203d0b5e1e5ced22523726d8ec919c13662850743

refer to http://zhidao.baidu.com/question/305838317.html.

good explanation and solution for JBoss server start-up and access denied exception

from http://www.brianmokeefe.com/node/9

Adventures with JBoss
Fri, 09/25/2009 – 16:43 — brian

I’ve spent a couple days this week venturing further into the Java world after a long absence. My goal this week was to install and learn how to use JBoss. It was a bear at times, but I feel like I am on my way to understanding it. There were two big issues that caused me fits, and I wanted to share, as the answer did not seem readily available on Google.

The first big issue was that I could start JBoss perfectly fine, but whenever I tried to stop and restart it, it would not work. Peering through the log files, I came across the following
2009-09-24 12:57:23,653 ERROR [org.jboss.kernel.plugins.dependency.AbstractKernelController] (main) Error installing to Start: name=jboss:service=NamingProviderURLWriter state=Create mode=Manual requiredState=Installed
java.io.IOException: Access is denied
It seemed that a file was being locked for some reason, and that file was obviously necessary for JBoss to start. Some further investigation (ok, I admit, I just tried to delete all of JBoss with the faith that it would fail on the locked file), it turns out that file was some file called jnp-service.url in the data directory of the default server. After downloading and installing the handy Unlocker tool (http://ccollomb.free.fr/unlocker/), it turns out that cidaemon.exe (aka the Windows Indexing Service) was locking the file for some reason. Turning off the Indexing Service solved that problem.

The second issue came when I attempted to deploy my WAR file with my JAX-WS service into my new JBoss server. Again, I could not get it to deploy, and a review of the log files showed
“vfsfile:/C:/dev/Workspaces/BeliefNet/.metadata/.plugins/org.jboss.ide.eclipse.as.core/JBoss_5.1_Server/deploy/RAFTSOA.war/” is in error due to the following reason(s): java.lang.StringIndexOutOfBoundsException: String index out of range: -1
A strange and cryptic error indeed. As you can see, I was using the JBoss tools integration with Eclipse, so I tried just exporting the WAR file and copying it manually to the JBoss deploy directory, but still no luck. With nightmares of “pouring through source code to see what JBoss is doing when it throws this exception” dancing in my head, luckily Google gave a bunch of pieces to the puzzle to what might be causing this. It turns out that JBoss requires all servlets and such to be inside a package. This was not the case when I was just deploying in Tomcat alone, but luckily this is among the easiest problems to fix.
Happy to say, after those issues were resolved, my sample JAX-WS application is up and running. Now I just have to unlock the puzzle of JAX-WS, JAXB, and how everything lives in harmony to create a viable SOA solution….

sybase restrictions— inner member of outer join, for regular join

refer to Sybase Book – Chapter 4: Joins: Retrieving Data from Several Tables
for this
Outer join restrictions

If a table is an inner member of an outer join, it cannot participate in both an outer join clause and a regular join clause. The following query fails because the salesdetail table is part of both the outer join and a regular join clause:

select distinct sales.stor_id, stor_name, title
from sales, stores, titles, salesdetail
where qty > 500
and salesdetail.title_id =* titles.title_id
and sales.stor_id = salesdetail.stor_id
and sales.stor_id = stores.stor_id

Msg 303, Level 16, State 1:
Server ’FUSSY’, Line 1:
The table ’salesdetail’ is an inner member of an outer-join clause. This is not allowed if the table also participates in a regular join clause.

If you want to know the name of the store that sold more than 500 copies of a book, you must use a second query. If you submit a query with an outer join and a qualification on a column from the inner table of the outer join, the results may not be what you expect. The qualification in the query does not restrict the number of rows returned, but rather affects which rows contain the null value. For rows that do not meet the qualification, a null value appears in the inner table’s columns of those rows.

As for me, I was coding:

--and t.account_id *= ca.client_account_id
--and t.fund_account_id *= f.fund_account_id
    and (t.account_id = ca.client_account_id or t.fund_account_id = f.fund_account_id)

It then threw me the above error, as ca and f are inner members of the outer join while, at the same time, participating in another regular join.

Commenting out the first two conditions, as shown in the code, solved the problem.

version control on local pc

I once installed a Subversion server on a company PC.
I was doing something similar to this SVN server set-up before: running a sequence of commands to create the repository.

However, now I have found that there are some Windows installers from Apache which do the above work nicely.

Visual SVN Server
Developers' lives are becoming better and better. 😀

=======

I might not be completely right. Previously the Subversion project was located at http://subversion.tigris.org/, and at that time I think it was common to run those commands to set up a Subversion server. Then nowadays, as stated on the above website,
"This is the former website of the Subversion software project, which now calls subversion.apache.org home."
It's currently in a transition phase to Apache. And at subversion.apache.org, Windows (and other OS) packages are now newly available.

XSS again

Helping a colleague from another team with an issue.

Basically, this is the problem: in the accounts application (a Java EE web application), there is a hyperlink to an Infoview.jsp.

What this Infoview.jsp does is add cookies to the client browser and use these cookies for single sign-on. The user-id, for example, is put into these cookies. Then it immediately redirects to another page, connectiv.jsp, which simply checks the cookie values and decides where to proceed.
This connectiv.jsp lives in another web application on another server.

Okay, so with these settings it had been working well before. Now they are trying to upgrade, from Tomcat 5.1 to 5.5, a JBoss upgrade as well, and onto another server. The problem arises when clicking the hyperlink for Infoview.jsp: instead of launching the other application, it asks for a user name and credentials.

They were thinking it might be architectural changes between Tomcat 5.5 and 5.1... Err...

After some checking, it's most likely XSS again, because they shifted the accounts application to another server, and users access it via that server's CNAME. However, when this Infoview.jsp tries to add the cookies, it adds them for the full domain name. So to the browser it looks like "cross-site scripting": the browser requests Infoview.jsp on server A, but this JSP then tries to set a cookie for domain B.

Which is insecure! Don't blame the browser; that's what it has to do nowadays. And even if it tried to do more, you would blame it for the slow response.
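
What Infoview.jsp has to get right is roughly this (a sketch with
assumed names, not the actual code): the domain of the SSO cookie must
match, or be a parent of, the host name the browser actually requested;
otherwise the browser drops the cookie.

    import java.io.IOException;
    import javax.servlet.http.Cookie;
    import javax.servlet.http.HttpServletResponse;

    public class SsoCookieHelper {
        // Sketch only: set the cookie on a domain shared by both applications,
        // i.e. the one the users really browse to, so the browser accepts it.
        public static void addSsoCookie(HttpServletResponse response, String userId)
                throws IOException {
            Cookie sso = new Cookie("userId", userId);       // hypothetical cookie name
            sso.setDomain(".company.com");                   // hypothetical shared parent domain
            sso.setPath("/");
            response.addCookie(sso);
            response.sendRedirect("http://appB.company.com/connectiv.jsp"); // hypothetical URL
        }
    }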

And also, there were some caching problems: we needed to remove the compiled xxx_jsp.java and xxx_jsp.class files.

That’s all.

maven multi module

For Maven project settings (pom.xml), there are actually two ways to share common settings, just like in OOP: inheritance and composition.

Inheritance means referring to a parent project.

Composition means a multi-module project: the "root" configuration is shared among the sub-modules of that project.

refer to the Maven Book (multi-module)

and Maven Multi Module

Choice between GWT and ZK

My team has been using an in-house framework (Echo) built some years ago.

The idea is cool, the design is nice, and the implementation is good. With good developers they achieved this.

The framework was created some time ago, even before Spring, Struts and Hibernate became popular. So at that time the framework was designed to act as a one-stop solution for everything, from UI to services to persistence. By now it is slightly outdated: the service part has been replaced with Spring, and the persistence part has partially moved aside in favour of Hibernate.

However, there is still one layer (the UI) using it intensively. This is thanks to its great integration with ZK.

The framework (Echo) has created a lot of extensions to ZK: form elements, container elements, event-handler elements, etc. It follows the same ideology we have been using for the DB UI framework: creating a bean to generate the HTML output corresponding to each such XML element (form, container, event handler, action, etc.).

Great ideas are similar; smart people think in similar ways.

That's how I came to know ZK, a UI framework by some Chinese developers, based on their client-server fusion architecture. Basically, if I understand correctly, it translates ZHTML files (their language) into HTML and, at the same time, injects their JavaScript client layout engine. On the server side, they have the counterpart that listens to that JavaScript's Ajax calls.

Not bad, especially with the ZK Studio plugin.

The drawback: debugging is really hard. At the same time, comparing the two, Google's GWT is better in architecture and design.

Simply, I vote for GWT.

refer to ZK vs GWT

ZK Architecture

GWT Overview

maven projects

Since last week, I have been working on an automation testing project (volunteering).

It's developed in my Eclipse, with the Spring and Maven plugins. The technologies used are Spring (DI/IoC), Camel (registering each test case as a Camel route) and Selenium (the testing framework); JasperReports (for output) and JavaMail (for publishing).

It's been working well within Eclipse, invoked from inside the IDE.

Then I planned to publish it to the team, and it was supposed to run with Maven (mvn commands, the default approach).

However, wtf, I started preparing this "packaging" at 6 pm, and only now is it finally solved. Ha, around 3 hours (dinner, MRT, games excluded).

Thanks to Maven.

I like the structure it requires.

my-app
|-- pom.xml
`-- src
    |-- main
    |   `-- java
    |       `-- com
    |           `-- mycompany
    |               `-- app
    |                   `-- App.java
    `-- test
        `-- java
            `-- com
                `-- mycompany
                    `-- app
                        `-- AppTest.java

refer to Maven Project Structure
which is PERFECT!!!

(Actually, it's my own problem. I am not YET an expert on Maven!)

The command I used is "mvn clean compile test",
and I have been using the exec-maven-plugin to start my application by invoking the bootstrap class (Main.java simply loads the Spring application context); a sketch follows.
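
A minimal sketch of such a bootstrap class (the context file name and
package are assumptions):

    import org.springframework.context.support.ClassPathXmlApplicationContext;

    public class Main {
        public static void main(String[] args) {
            // Loading the Spring application context is enough to start the
            // Camel routes registered in it (hypothetical context file name).
            new ClassPathXmlApplicationContext("applicationContext.xml");
        }
    }

It can then be launched with something like mvn exec:java -Dexec.mainClass=com.mycompany.app.Main (class name assumed).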

Refer to 3 ways to run java from maven

Another good day.

Installing windows 7

Two years ago, when I installed Windows 7 32-bit from Windows XP, it went very smoothly, as the installer can be started from the running Windows XP OS.

Now I'm trying to upgrade from 32-bit Windows 7 to 64-bit. A few days ago, when I tried, it threw me a "CD/DVD driver needed" error.

Now, when I try to re-burn the ISO image, I find that the autorun.inf (maybe) shouldn't be there.

====================

Really speechless about this Windows 7 64-bit upgrade experience.

There have been so many frustrations, and it's been almost 3 weeks.

1. Downloading the ISO file from MSDN had many problems: it throws an error at the last minute when downloading from my Lenovo Win 7 laptop, and I can't copy it to my thumb drive when downloading from my office PC.
2. Downloading on my Toshiba Windows XP: the downloaded .iso file seems to have a problem, so it always throws me the CD/DVD driver missing error.
3. Writing this ISO with the Windows 7 USB tool always tells me the ISO file is invalid.
4. I finally found a way to make the ISO file valid, by checking/unchecking the "UDF" property of the ISO; again, it threw me the missing driver error.
5. I re-downloaded using my clean Toshiba Win XP, then used the Windows 7 USB tool to burn it to USB; it threw me the error "unable to run bootsect", and when booting from this USB, it threw "NTLDR missing".
6. I tried burning a DVD from this ISO file with the Windows 7 USB tool; it halted halfway, saying it was unable to read the disk.

ooooooooooooooooops.

Now it's finally installing, using the ISO downloaded on my clean Toshiba Windows XP, burned with the Windows 7 USB tool to my thumb drive, and then using bootsect.exe to update the boot code.

Finally using this thumb drive to install Windows 7 64-bit, in progress...

====================================

Whatever, it finally finished installing and is working well.

Every day there is something new.

Maven filtering: it replaces, in the files it processes, each property placeholder whose replacement value it knows.

Many Maven plugins seem to have this capability. Configure the directory, the files, and set filtering to true; then Maven will come in and do its job.

Google Guice is nothing fancy. It's just a dependency injection tool, like that part of Spring; however, Spring is already everywhere.

Apache Camel and Spring Integration are similar. Both are enterprise integration pattern frameworks: they link from an endpoint, through processing, to another endpoint.

And some of the beauty of Camel: it can create the component (the endpoint factory) based on the URI. It can integrate with Guice, which can bind an endpoint annotation to a component provider, so as to create a customized URI or customized component, like jmsus:queueName instead of jms:queueName.

Posted from WordPress for Android

ORM – object relation mapping

The team is encountering an exception of the kind "unknown enum value for (some enum class) : (some enum constant name)".

I have kept debugging, following the stack trace, for half a day. And since the stack trace goes so deep, my Eclipse crashed many times.

This is another reason why I hate such persistence frameworks and prefer MyBatis.

We should have control over how we store our objects and what kind of data we want to retrieve from the database. We can decide what the updates, inserts and queries are, instead of leaving it to JPA or Hibernate, which may never be able to be perfect, as they are trying to suit all kinds of objects.

The exception we encountered occurs while retrieving data. We are trying to get a message object. This message object is actually linked to an exception object by the foreign key exceptionId. Inside this exception object there is one property which is an enum type, and this exception again maps to an exception table. (I hate this kind of object-table linking, again. And there are too many restrictions needed to make the never-perfect ORM work.)

When retrieving the exception object, the not-so-famous exception "unknown enum value for ..." occurred. We have annotated the property as enumerated, with the string type, and I have tested that Enum.valueOf(enumClass, enumConstantName) returns the correct constant. Somewhere, JPA/Hibernate plus the enumerated annotation breaks down. Probably they need some more specific settings to tell them what maps to what, and how. Shit.

At the same time, regarding the debugging process, I have drawn another conclusion based on this experience. I am always stubborn in trying to follow the sequence, the order, from beginning to end, wanting to know all the details, which doesn't work in some scenarios, especially when debugging such a deep stack trace.

Following the exception and tracing in reverse is the better solution: put a breakpoint at the exception.

Trace in reverse order!

=============

It turns out it is not a problem of JPA or Hibernate. Instead, it's a database column type problem. The column is CHAR(12). So even though the enum constant name is "stp", only 3 characters, once it is put into the database and later retrieved, the column comes back as CHAR(12), with 9 trailing spaces after "stp".

Enum.valueOf(enumClass, enumConstantName), which is used by Hibernate to map the stored value back to the enumeration constant, can't find the enum constant as such.
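
A tiny sketch of the failure (the enum is a made-up stand-in for the real
one), which is why trimming the CHAR(12) value, or switching the column to
VARCHAR, fixes it:

    public class EnumTrailingSpaceDemo {
        enum MsgStatus { stp, rts }            // hypothetical enum mirroring the real one

        public static void main(String[] args) {
            String fromDb = "stp         ";    // CHAR(12) pads the 3-char value with spaces
            // Enum.valueOf(MsgStatus.class, fromDb) throws IllegalArgumentException,
            // because "stp         " is not a constant name.
            // Trimming the value first makes the lookup succeed:
            System.out.println(Enum.valueOf(MsgStatus.class, fromDb.trim())); // prints stp
        }
    }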

Google music

Google Music is Google's version of cloud music.

I have tried it; even though it's browser-only for now, it works very well, streaming as if the music were on the local PC.

The interface is rather nice too.

It will auto-sync your local music to the cloud. At the same time, out of goodwill, Google offers a bunch of free music.

Currently it's not available to the public. However, go to music.google.com and make a request; within one week they will grant you access.

If you are based in the US, there is even an Android app for this. I heard it works very well.

Apache camel

Apache Camel is very cool in the sense that it works like Spring Integration, or more precisely as an enterprise integration patterns framework.

When used with Spring, it starts from the CamelContext, much like the Spring application context. Within the CamelContext, a RouteBuilder should be specified; the key method within this RouteBuilder is the configure method.

The normal pattern is something like from(one endpoint).beanRef(beanName, method).to(another endpoint) within the configure method.

The endpoint can be an MQ queue, a timer, an EJB, etc.

The route will listen to the queue, for example; then, when any message comes, it invokes the method of the bean and possibly sends the result to another queue (see the sketch below).
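
A minimal sketch of such a route in the Camel Java DSL (queue and bean
names are assumptions):

    import org.apache.camel.builder.RouteBuilder;

    public class OrderRoute extends RouteBuilder {
        @Override
        public void configure() throws Exception {
            // listen on one queue, hand each message to a bean method,
            // then push the result to another queue (all names hypothetical)
            from("jms:queue:inbound")
                .beanRef("orderService", "process")
                .to("jms:queue:outbound");
        }
    }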

why annotated entity is not loaded in exploded jar?

Why is the annotated entity not loaded in an exploded jar, while the same works in a non-exploded jar? Why?

====================================================

finally it’s solved.

LocalEntityManagerFactoryBean invokes its persistence provider (HibernatePersistence here), which in turn invokes the Ejb3Configuration, which invokes a ZippedJarVisitor (depending on the URL protocol, vfsfile). The ZippedJarVisitor scans for and adds the @Entity-annotated classes. However, it skips them if the archive is actually a directory (the exploded jar).

The solution is, within persistence.xml, to specifically tell the persistence-unit to add com.abc.UserClass, which is the entity class:

<persistence-unit ...> <class>com.abc.UserClass</class> </persistence-unit>

Another conclusion is: follow the exception, follow the trace, tracing from top to bottom along the stack trace.

Zk framework

I am currently studying the ZK framework. Let's see how fast and how well I'll grasp it.

=============================================
I still can't appreciate the appeal of such a framework. Instead of ZK's new language, why not pick up JavaScript and jQuery ourselves? Why not have our own say, instead of letting ZK control everything? Plus there is a performance impact from its translation mechanism.

XSL, XSLT, XPath

I have spent some time on the internet and finally came up with this excerpt for part of the task at hand,
using XSLT, with Xalan for the Java extension functions.

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="2.0"
	xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:java="http://xml.apache.org/xalan/java">
	<xsl:template
		match="el_tnx_record | lc_tnx_record | sg_tnx_record | tf_tnx_record">

		<!-- define variables here -->

		<!-- time variables -->
		<xsl:variable name="calendar"
			select="java:java.util.Calendar.getInstance()" />
		<xsl:variable name="millsTime" select="java:getTimeInMillis($calendar)" />
		<xsl:variable name="year" select="java:get($calendar,1)" />
		<xsl:variable name="month" select="java:get($calendar,2)" />
		<xsl:variable name="day" select="java:get($calendar,3)" />
		<xsl:variable name="hour" select="java:get($calendar,10)" />
		<xsl:variable name="minute" select="java:get($calendar,12)" />
		<xsl:variable name="second" select="java:get($calendar,13)" />

		<!-- hashMap variables -->
		<!-- <xsl:variable name="hashMap" select="java:java.util.HashMap.new()" 
			/> -->
		<!-- <xsl:variable name="01" select="java:java.lang.String.new(01)" /> -->
		<!-- <xsl:variable name="new" select="java:java.lang.String.new(new)" /> -->
		<!-- <xsl:variable name="void" select="java:put($hashMap, $year, $month)" 
			/> -->
		<!-- <xsl:variable name="status" select="java:get($hashMap, $01)" /> -->


		<!-- the content should be dynamic values -->
		<EAI>
			<BWH>WLGPSG03EBN EBNRS001 20080520093610 TR_LC_MODCAN_F -1
				UOB.ELK.PROD.EBNRQST03
				UOB.ELK.PROD.EBNRPLY03 CMOS9672 CMOS9672VSA
				VSASS009 XXXSS009
				UOB.XXX.PROD.YYYRQST UOB.XXX.PROD.YYYRPLY RUS
				201105202889281          
			</BWH>
			<SvcRq>
				<ChannelId>BIBnn</ChannelId>
				<SvcCode>
					PPPTR_LC_MODCAN_F</SvcCode>
				<SvcRqId>
					<xsl:value-of select="$millsTime" />
					<xsl:value-of select=" ./ref_id" />
				</SvcRqId>
				<Timestamp>
					<Year>
						<xsl:value-of select="$year" />
					</Year>
					<Month>
						<xsl:value-of select="$month" />
					</Month>
					<Day>
						<xsl:value-of select="$day" />
					</Day>
					<Hour>
						<xsl:value-of select="$hour" />
					</Hour>
					<Minute>
						<xsl:value-of select="$minute" />
					</Minute>
					<Second>
						<xsl:value-of select="$second" />
					</Second>
				</Timestamp>
				<TimeoutPeriod>60</TimeoutPeriod>
				<ErrRecoveryReversal>Y</ErrRecoveryReversal>
				<BusinessDrivenReversal>Y</BusinessDrivenReversal>
				<MQExpPeriod>55</MQExpPeriod>
			</SvcRq>
			<ReversalSvcRq>
				<PrevSvcRqId />
			</ReversalSvcRq>
			<BIBRq>
				<UserId>ABCD123</UserId>
				<CIFNo>1234567</CIFNo>
				<CompanyId>
					ABCDEFG
				</CompanyId>
			</BIBRq>
			<SubSvcRq>
				<SubSvc>
					<SubSvcRqHeader>
						<SvcCode>TR_LC_MODCAN_F</SvcCode>
						<SubSvcSeq>1</SubSvcSeq>
						<TxnRef>20013192529</TxnRef>
					</SubSvcRqHeader>
					<SubSvcRqDetail>
						<xsl:element name="{name()}">
							<!-- copy all other node, except charges, cross_references, and additional_field -->
							<xsl:copy-of
								select="@*|node()[not(self::charges)][not(self::cross_references)][not(self::additional_field)]" />

							<!-- create new element using the value of the name attribute of additional_fieldd -->
							<xsl:for-each select="additional_field">
								<xsl:element name="additional_field_{@name}">
									<xsl:copy-of select="node()" />
								</xsl:element>
							</xsl:for-each>

							<!-- append cross_reference tag with position -->
							<xsl:if test="cross_references">
								<cross_references>
									<xsl:for-each select="cross_references/cross_reference">
										<xsl:element name="cross_reference_{position()}">
											<xsl:copy-of select="@*|node()" />
										</xsl:element>
									</xsl:for-each>
								</cross_references>
							</xsl:if>

							<!-- append charges tag with position -->
							<xsl:if test="charges">
								<charges>
									<xsl:for-each select="charges/charge">
										<xsl:element name="charge_{position()}">
											<xsl:copy-of select="@*|node()" />
										</xsl:element>
									</xsl:for-each>
								</charges>
							</xsl:if>
						</xsl:element>
					</SubSvcRqDetail>
				</SubSvc>
			</SubSvcRq>
		</EAI>
	</xsl:template>
</xsl:stylesheet>

XSL and XSLT are defined by the W3C. It's a rather limited language compared to Java or C, though that may not really be a fair comparison.

There are only a limited set of keywords and functions defined. And XPath navigation is especially commonly used together with XSLT.
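
For completeness, a small sketch (hypothetical file names) of applying
such a stylesheet from Java with the standard JAXP API; Xalan needs to be
on the classpath so that the xmlns:java="http://xml.apache.org/xalan/java"
extension calls in the stylesheet resolve.

    import java.io.File;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;

    public class TransformDemo {
        public static void main(String[] args) throws Exception {
            // compile the stylesheet, then transform the input record into the EAI request
            Transformer t = TransformerFactory.newInstance()
                    .newTransformer(new StreamSource(new File("tnx_record.xsl")));
            t.transform(new StreamSource(new File("tnx_record.xml")),
                        new StreamResult(new File("eai_request.xml")));
        }
    }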