Friday, September 29, 2006

New to Java technology

alphaWorks provides emerging Java technologies for every skill level, whether you are learning the Java programming language, fine-tuning your skills, or using emerging technology components to speed development time as you innovate with your own applications. In addition, demos, discussion forums, and resources allow you to interact with the creators of the technology and the broader user community.

1. What is Java technology?
Java technology is both an object-oriented programming language and a platform developed by Sun Microsystems. Java technology is based on the concept of a single Java virtual machine (JVM) -- a translator between the language and the underlying software and hardware. All implementations of the programming language must emulate the JVM, thus enabling Java programs to run on any system that has a version of the JVM. Learn more about Java technology from the developerWorks library of articles and tutorials.
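For example, the classic first program below (the class name HelloWorld is just a convention) is compiled once to bytecode and then runs unchanged on any system that has a JVM:

public class HelloWorld {
  public static void main(String[] args) {
    // javac HelloWorld.java produces bytecode; java HelloWorld runs it on any JVM
    System.out.println("Hello, Java!");
  }
}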


2. Getting started with emerging Java technologies
Emerging Java technology is available in the following categories on alphaWorks:
APIs - Application Programming Interfaces are sets of classes, interfaces, and principles of operation that constitute a Java extension. They are scalable for use in domains ranging from first-party call control in a consumer device to third-party call control in large, distributed call centers.
Application Development - Application development resources vary from information for developers and software managers to tools and applications that provide time- and cost-effective foundations for solution development.
Components - Java components are self-contained elements of software that can be controlled dynamically and assembled to form applications.
Developer Kits - These include Software Development Kits (SDKs) that provide compilers and class libraries for writing and building Java code.
IDEs - Integrated Development Environments (IDEs) display source code in an editor pane and point you to the relevant line when problems occur. An IDE also lets you collaborate on projects using the platform of your choice.
Reference Implementation - This category includes extended classes and code for adding to existing Java functionality.
Utilities - A large collection of useful tools for creating Java applications, including tools that parse numbers into integer, long, or double values, handle non-numeric input, and pad strings.


3. How can I improve my Java programming skills?
Various alphaWorks technologies can help you learn and improve your Java programming skills. If you're a beginner, try CodeRuler or CodeRally, two programming games that help Java novices become familiar with the Java programming language while competing in a fun setting.
If diagnostics and testing are what you are after, here are some popular alphaWorks technologies to try out on your applications:
Structural Analysis for Java, a technology that analyzes structural dependencies of Java applications in order to measure their stability.
Diagnostic Tool for Java Garbage Collector, a diagnostic tool for tuning the parameters that affect the garbage collector when using the IBM Java Virtual Machine.
HeapAnalyzer, which helps you find possible Java heap leaks by analyzing Java heap dumps with a heuristic search engine.
In addition, the developerWorks Java technology zone provides a wealth of resources about Java technology; these include articles, tutorials, and tips.
Browse through the numerous new Java technologies or search for a Java topic to find a technology of interest to you. You can also join the discussion about any alphaWorks technology in order to learn more. And let us know what you think; your feedback is important to us in shaping the alphaWorks site and what we bring you.

(http://www.alphaworks.ibm.com/java/newto#03)

Thursday, September 21, 2006

Why employees leave organizations? By Azim Premji, CEO - Wipro

Every company faces the problem of people leaving the company for better
pay or profile.

Early this year, Mark, a senior software designer, got an offer from a
prestigious international firm to work in its India operations
developing specialized software. He was thrilled by the offer.

He had heard a lot about the CEO. The salary was great. The company had
all the right systems in place: employee-friendly human resources (HR)
policies, a spanking new office, the very best technology, and even a
canteen that served superb food.

Twice Mark was sent abroad for training. "My learning curve is the
sharpest it's ever been," he said soon after he joined.

Last week, less than eight months after he joined, Mark walked out of
the job.

Why did this talented employee leave?

Mark quit for the same reason that drives many good people away.

The answer lies in one of the largest studies undertaken by the Gallup
Organization. The study surveyed over a million employees and 80,000
managers and was published in a book called "First, Break All the Rules".
It came up with this surprising finding:

If you're losing good people, look to their immediate boss. The immediate
boss is the reason people stay and thrive in an organization. And he's
the reason why people leave. When people leave, they take
knowledge, experience and contacts with them, straight to the
competition.

"People leave managers not companies," write the authors Marcus
Buckingham and Curt Coffman.

How does a manager drive people away?

HR experts say that of all the abuses, employees find humiliation the
most intolerable. The first time, an employee may not leave, but a
thought has been planted. The second time, that thought gets
strengthened. The third time, he looks for another job.

When people cannot retort openly in anger, they do so by passive
aggression. By digging their heels in and slowing down. By doing only
what they are told to do and no more. By omitting to give the boss
crucial information. Dev says: "If you work for a jerk, you basically
want to get him into trouble. You don't have your heart and soul in the
job."

Different managers can stress out employees in different ways - by being
too controlling, too suspicious, too pushy, too critical - but they forget
that workers are not fixed assets, they are free agents. When this goes
on too long, an employee will quit - often over a trivial issue.

Talented men leave. Dead wood doesn't.

"Jack Welch of GE once said. A company's value lies "between the ears of
its employees".If it’s bleeding talent, it’s bleeding value.

Unfortunately, many senior executives - busy traveling the world, signing new deals and developing a vision for the company - have little idea of what may be going on at home: that deep within an organization that otherwise does all the right things, one man could be driving its best people away.

Wednesday, September 13, 2006

10 security problems unique to IT By Jeff Relkin

Takeaway:
Organizations face a host of security concerns driven by the power of technology and the vulnerabilities inherent in its use. IT pros have to be vigilant about all these issues, from system penetration threats to hardware portability to employee turnover.

Security is not an area newly arisen in the wake of the 9/11 tragedy. There have always been reasons to be concerned: conflicting priorities, business environmental factors, information sensitivity, lack of controls on the Internet, ethical lapses, criminal activity, carelessness, and higher levels of connectivity and vulnerability. It's a tradeoff between limiting danger versus affecting productivity: 100 percent security equals 0 percent productivity, but 0 percent security doesn't equal 100 percent productivity.

No one wants to be controlled. It's demeaning and stifles productivity, and we resent the implication that we can't be trusted not to break our own networks. On the other hand, organizations have to decide how long they could operate without computers or networks and how reliant they are on the availability and accuracy of data. Absolute security is unattainable and undesirable, so proper security controls seek to reduce risk to acceptable levels.

#1: System penetration threats


There are all kinds of ways in which systems can be compromised. A popular expression during World War II was "Loose lips sink ships," which was meant in a possibly somewhat paranoid way to heighten awareness that you never knew who was listening to you, even over a beer at the local pub. Most of us routinely have contact with other professionals whether at industry gatherings, social events, or any number of other venues. It's all too easy to accidentally disclose critical information that can be used, however unethically or even illegally, to benefit one organization at the expense of another.

Carelessly discarding access codes and other kinds of personal identification information without shredding them has made dumpster diving the number one method of obtaining this kind of data. Systems that are poorly or inadequately secured (single-level security, easily guessed passwords, unencrypted data, etc.) are an invitation to problems ranging from low data quality to unauthorized infiltration.

Networks can be easily breached due to poorly maintained firewalls and/or virus and spam filters. Security budgets must be adequately funded; management puts organizational survival at risk by viewing funding for security measures as a no-return or discretionary expense. Taking responsibility for our own actions (or inactions), coupled with a solid, comprehensive security policy, is the best defense to prevent breaches from occurring in the first place.

#2: Internet security realities


Originally built for military use, the Internet today incorporates little inherent protection for information. Administrators at any Internet site can see packets flying by, and without adequate encryption, messages are subject to compromise. The Internet doesn't automatically protect organizational information--companies must do so independently. Without adequate control, and even with it, employees can access just about anything and bring it in-house. External intruders can access networks and PCs. External message sources typically can't be found, and message senders don't know who else, in addition to or instead of the intended recipient, is reading the message.

The hacking community is increasingly organized, and by cooperating with one another, hackers can compromise networks even more easily, and more profoundly. The Internet is an open, uncontrolled network that doesn't change to suit organizational needs. Identified exposures are not automatically fixed, and most security problems on the Internet are not really Internet problems. Organizations must assume a potentially hostile environment and protect themselves through full message encryption for sensitive information, digital signatures for message authentication, well-maintained, high-quality firewalls and other filters, employee communication and awareness programs, and any inbound controls that are at least adequate without being excessive.

#3: Portability of hardware


Corporate road warriors traveling with laptops represent a variety of security challenges. Larger, faster hard drives and more powerful processors provide the ability to download and use local copies of sensitive or confidential databases. Ubiquitous Internet access allows us to stay connected with the same networks and systems we use in the office. Web-based services such as Groove can be used to circumvent corporate document policies.

Laptops need to be secured with at least two-phase security controls consisting of a combination of encryption, local userid/password combinations, biometric devices, etc., and organizations need to implement and enforce strict policies on technology use while traveling.

#4: Proliferation of new communication methods


Does your organization provide PDAs such as BlackBerrys or Treos with network connectivity? Are these devices secured in any way? Many companies have little understanding of just how big a security threat these handy little gizmos represent. Typically connected to central corporate services, such as Outlook or Notes, and providing continuous wireless automatic synchronization with e-mail, calendar, and contact lists, a lost device that's unsecured by a password can be used to gain unauthorized entry into those systems. At the very least, they can be used to run up a pretty impressive cell phone bill.

Corporations should require that despite the inconvenience, all such devices must have local passwords, subject to the same rules as those used to access the network, including format and frequency of change. They should also require by policy that lost devices be reported immediately so kill signals wiping all local data and rendering the device useless can be issued.

#5: Complexity of software


The fact that systems and applications have many integrated components that are difficult to individually secure is a poor excuse for not requiring multiple levels of security. Users who have been authenticated for general network access do not necessarily deserve authorization for specific functional components of that network or even within a single integrated environment, such as an ERP. Studies and surveys tell us that employees consider too many different passwords a valid reason for leaving an organization; some large corporations require users to memorize in excess of 15 userid/password combinations. Single sign-on techniques provide the ability to secure systems one component at a time on the basis of one individual access, so there's no reason to make security onerous to the user community.

#6: Degree of interconnection


This is just another form of complexity and requires a recognition of the realities of the public access Internet. Supply chain processes connect raw material providers, manufacturers, assemblers, and retailers. As the saying goes, a chain is only as strong as the weakest link. Even if individual organizations within the supply chain have proper security controls in place, one lapse by one of the partners can bring the entire operation to a halt.

Consider a situation in which a parts supplier's network is infiltrated and/or compromised. All the downstream component processes can be negatively affected, either by the delay or loss of a critical ingredient or by a contaminated input, in the same manner that a glitch at the start of an assembly line brings the entire operation to a screeching halt. Organizations need to conduct a comprehensive risk assessment and try to require their partners and suppliers to adhere to adequate security controls, or at the very least, develop contingencies around the possibility of losing access to critical partnerships.

#7: Density and accessibility of media


Information is currency, and knowledge is power. Knowing this, we're all responsible for maintaining the integrity and security of the corporate data to which we have authorized access. New forms of higher density portable media make it even more necessary to take this responsibility seriously. CDs, DVDs, flash drives, and other dense portable media are capable of storing multi-gigabytes of data in a form that all too often grows legs and walks away.

Corporate users should be circumspect about how they use these media. IT security policy should require that any data moved through USB ports or any other method of creating media do so on an encrypted basis. Policy, and common sense, should also dictate that these same media types never be used for single copies of any data, especially mission critical or business confidential, and limit their use to temporary movement of data from one location to another.

#8: Centralization


Single points of failure can be security nightmares. As important as it is to secure corporate networks, systems, and data, it's especially critical to do so when those assets are centrally located. Smaller organizations with limited technology resources are particularly vulnerable because they typically have one LAN room or one server rack, which is the entire network for the whole organization.

Unauthorized access, power problems, communications glitches, protocol incompatibilities, and questionable system philosophies can all contribute to catastrophic consequences. When technology assets are centralized either as a result of limited resources or simply due to a valid design consideration, attention must be given to special security requirements to ensure continuous operation.

#9: Decentralization


The opposite situation comes with security considerations of its own. Multiple copies of individual systems or databases all must be equally well secured; one compromised copy renders the entire application suspect. One of the more difficult situations to deal with in global organizations with presences in various countries occurs where Internet access is not robust, consistent, or reliable. In this case, the best solution is often to install a distributed DNS server for offline synch with the main corporate network, providing a local facility that, while not real time, is at least a comprehensive copy, no more than half a day old, of the necessary data. Since this requires putting sensitive or confidential information out into the field, policies and procedures must be enforced that provide the same level of security for the decentralized facility as that for the main corporate network, to avoid the same risks of infiltration and compromise.

#10: Turnover


Employees changing jobs represent a particularly difficult security challenge. A generation ago, you'd simply turn in your keys and go on with your life, but it's not so easy to do that when the keys are virtual entries into secure systems.

Every access granted to individual employees has to be tracked so that at departure time, those accesses can be turned off. In some cases, security systems will have to be cycled for everyone remaining with an organization when a key employee having a deep level of access goes elsewhere.

Jeff Relkin has 30+ years of technology-based experience at several Fortune 500 corporations as a developer, consultant, and manager. He has also been an adjunct professor in the master's program at Manhattanville College. At present, he's the CIO of the Millennium Challenge Corporation (MCC), a federal government agency located in Washington, DC. The views expressed in this article do not necessarily represent the views of MCC or the United States of America.

(http://articles.techrepublic.com.com/5102-1009-6112847.html)

Tuesday, September 12, 2006

Thread Dump JSP in Java 5 by Dr. Heinz M. Kabutz (for JDK version: JDK 1.5)

Abstract:
Sometimes it is useful to have a look at what the threads are doing in a lightweight fashion in order to discover tricky bugs and bottlenecks. Ideally this should not disturb the performance of the running system. In addition, it should be universally usable and cost nothing. Have a look at how we do it in this newsletter.


Thread Dump JSP in Java 5

Ten days ago, I received a desperate phone call from a large company in Cape Town. Their Java system tended to become unstable after some time, especially during peak periods. Since the users were processing millions of dollars on this system, they needed to be able to log in at any time.

We managed to solve their problem. As you probably guessed, it was due to incorrectly handled concurrency. I cannot divulge how we find such problems or how we fix them; that is our competitive advantage. Contact me offlist [http://www.javaspecialists.co.za/contact.jsp] if your company has bugs or performance issues that you cannot solve and where an extra pair of eyes can be useful.

One of the measurements we looked at was to inspect what the threads were doing. In this case, it did not reveal much, but it can be of great value in finding other issues. For example, at another customer, we stumbled upon an infinite loop by looking at what the threads were up to.

There are several ways of doing that. If you are using Unix, you can send a "kill -3" to the process. With Windows, CTRL+Break on the console will give you that information.

This server was running on Windows (don't laugh). The application server did not allow us to start the JVM in a console window, which meant that we could not press CTRL+Break.

Another approach would have been to painstakingly step through the threads with JConsole. That was not an option for me.

One of the annoying parts of the typical thread dump is that the threads are not sorted, so it becomes a bit tricky to group them. It would also be nice to see a summary of the state in a table, to make it easier to find problems. In addition, we should be able to copy the text and diff it to see how things change between refreshes.

In good OO fashion, we separate model and view. Let's first define the model:

package com.cretesoft.tjsn.performance;

import java.io.Serializable;
import java.util.*;

public class ThreadDumpBean implements Serializable {
  private final Map<Thread, StackTraceElement[]> traces;

  public ThreadDumpBean() {
    traces = new TreeMap<Thread, StackTraceElement[]>(THREAD_COMP);
    traces.putAll(Thread.getAllStackTraces());
  }

  public Collection<Thread> getThreads() {
    return traces.keySet();
  }

  public Map<Thread, StackTraceElement[]> getTraces() {
    return traces;
  }

  /**
   * Compare the threads by name and id.
   */
  private static final Comparator<Thread> THREAD_COMP =
      new Comparator<Thread>() {
        public int compare(Thread o1, Thread o2) {
          int result = o1.getName().compareTo(o2.getName());
          if (result == 0) {
            Long id1 = o1.getId();
            Long id2 = o2.getId();
            return id1.compareTo(id2);
          }
          return result;
        }
      };
}
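If you want to try the bean outside a web container, a minimal sketch such as the following (TextDump is my own throwaway name, not part of the newsletter) prints the same sorted information to the console:

package com.cretesoft.tjsn.performance;

import java.util.Map;

public class TextDump {
  public static void main(String[] args) {
    ThreadDumpBean dump = new ThreadDumpBean();
    for (Map.Entry<Thread, StackTraceElement[]> trace :
        dump.getTraces().entrySet()) {
      // the same data the JSP renders: a thread header, then its stack frames
      Thread thread = trace.getKey();
      System.out.println(thread.getName() + " (state=" + thread.getState() + ")");
      for (StackTraceElement line : trace.getValue()) {
        System.out.println("    at " + line);
      }
    }
  }
}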

We also write a bit of JSP, making use of the Expression Language ${}.

    <%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>

Thread Summary:
















Thread State Priority Daemon
${thr.name}




${thr.state}




${thr.priority}




${thr.daemon}






Stack Trace of JVM:

${trace.key}

at ${traceline}

This will first generate a summary of the threads, with hyperlinks to the individual stack traces. The summary shows the state of each thread.

Have a look at a sample snapshot [http://www.javaspecialists.co.za/samples/sample.html].

A word of warning: you should not make this JSP page publicly accessible on your server, as it opens up the guts of the app server to the general population. Even if they cannot change the state of the threads, it might be bad enough for them to see what methods are being executed.

You might have to change the GET_STACK_TRACE_PERMISSION setting to allow yourself access into the application server.

This is a tool that I will keep handy whenever I do performance tuning or difficult bug fixing on a J2EE server running Java 5.

I look forward to hearing from you how you expanded this idea to make it even more useful :)

For those of you lucky enough to be at the JavaZone conference this week, do come and say "hi" to me :) It would be great to meet you.

Kind regards

Heinz


(http://www.javaspecialists.co.za/archive/newsletter.do?issue=132&print=yes&locale=en_US)

Copying Files from the Internet by Dr. Heinz M. Kabutz (for JDK version: JDK 1.5)

Abstract:
Sometimes you need to download files using HTTP from a machine that you cannot run a browser on. In this simple Java program we show you how this is done. We include progress information for the impatient, and look at how the volatile keyword can be used.


Copying Files from the Internet

Part of the job of installing our own dedicated server involves downloading software from the internet onto our machine. I did not want to punch a hole in my router to allow me to open up an X session onto the server. Considering my slow internet connection, I also did not want to first download the files onto my machine, then upload them onto the server.

A technique that I have used many times for downloading files from the internet is to open up a URL, grab the bytes, and append them to a local file. Here is a small program that does this for you. You can specify any URL, and it will fetch the file from the internet for you and show you the progress.

You can either specify the URL and the destination filename or let the Sucker work that out for himself.

Some URLs can tell you how many bytes the content is, others do not reveal that information. I use the Strategy Pattern to differentiate between the two. We have a top level Strategy class called Stats and two implementations, BasicStats and ProgressStats.

The stats are displayed in a background thread. This means that the Stats class has to ensure that changes to the fields are visible to the background thread.

In my System.out.println(), I output a new Date() to show the progress of the download. This is usually a bad practice. It would be better to use the DateFormat to reduce the amount of processing that needs to be done to display the date.
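For instance, a single reusable formatter (LogDate here is a hypothetical helper, not part of the Sucker class) keeps the output short and avoids formatting the full default Date representation on every line:

import java.text.SimpleDateFormat;
import java.util.Date;

public class LogDate {
  // one formatter, reused for every log line; Sucker logs from a single
  // thread, so the non-thread-safe SimpleDateFormat is not a problem here
  private static final SimpleDateFormat FORMAT =
      new SimpleDateFormat("HH:mm:ss");

  public static String now() {
    return FORMAT.format(new Date());
  }

  public static void main(String[] args) {
    System.out.println(now() + " Constructing Sucker");
  }
}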

The last comment about this class concerns the size of the buffer. At the moment it is set to 1MB, which is larger than necessary, so the number of bytes actually read per call will often be much smaller.

import java.io.*;
import java.net.*;
import java.util.*;

public class Sucker {
  private final String outputFile;
  private final Stats stats;
  private final URL url;

  public Sucker(String path, String outputFile) throws IOException {
    this.outputFile = outputFile;
    System.out.println(new Date() + " Constructing Sucker");
    url = new URL(path);
    System.out.println(new Date() + " Connected to URL");
    stats = Stats.make(url);
  }

  public Sucker(String path) throws IOException {
    this(path, path.replaceAll(".*\\/", ""));
  }

  private void downloadFile() throws IOException {
    Timer timer = new Timer();
    timer.schedule(new TimerTask() {
      public void run() {
        stats.print();
      }
    }, 1000, 1000);

    try {
      System.out.println(new Date() + " Opening Streams");
      InputStream in = url.openStream();
      OutputStream out = new FileOutputStream(outputFile);
      System.out.println(new Date() + " Streams opened");

      byte[] buf = new byte[1024 * 1024];
      int length;
      while ((length = in.read(buf)) != -1) {
        out.write(buf, 0, length);
        stats.bytes(length);
      }
      in.close();
      out.close();
    } finally {
      timer.cancel();
      stats.print();
    }
  }

  private static void usage() {
    System.out.println("Usage: java Sucker URL [targetfile]");
    System.out.println("\tThis will download the file at the URL " +
        "to the targetfile location");
    System.exit(1);
  }

  public static void main(String[] args) throws IOException {
    Sucker sucker;
    switch (args.length) {
      case 1: sucker = new Sucker(args[0]); break;
      case 2: sucker = new Sucker(args[0], args[1]); break;
      default: usage(); return;
    }
    sucker.downloadFile();
  }
}

The Stats class needs a little bit of explaining. The field totalBytes is written to by one thread, and read from by another. Since we are writing with only one thread, we can get away with just making the field volatile. We have to make it at least volatile to ensure that the timer thread can see our changes.
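As a minimal sketch of that single-writer rule (the class and field names below are mine, purely for illustration): one thread increments a volatile counter, the other only reads it, so even the non-atomic += is safe.

import java.util.concurrent.TimeUnit;

public class SingleWriterDemo {
  // volatile guarantees the reader sees the writer's latest value;
  // += is only safe because exactly one thread ever writes this field
  private static volatile int progress;

  public static void main(String[] args) throws InterruptedException {
    Thread reader = new Thread(new Runnable() {
      public void run() {
        for (int i = 0; i < 5; i++) {
          System.out.println("seen so far: " + progress);
          try {
            TimeUnit.MILLISECONDS.sleep(100);
          } catch (InterruptedException e) {
            return;
          }
        }
      }
    });
    reader.start();
    for (int i = 0; i < 500; i++) {  // the single writer (main thread)
      progress += 1;
      TimeUnit.MILLISECONDS.sleep(1);
    }
    reader.join();
  }
}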

The printf() statement "%10d KB%5s%% (%d KB/s)%n" looks beautiful, does it not? The %10d means a decimal number right-justified in a field of 10 characters. The "KB" stands for kilobytes. The %5s means a String right-justified in a field of 5 characters. Then we have %%, which represents the % sign. The newline is done with %n. Cryptic, I know, but for experienced C programmers this should read like poetry :-)
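To see the format in action with made-up values (104 KB downloaded, "32" percent, 34 KB/s):

public class FormatDemo {
  public static void main(String[] args) {
    System.out.printf("%10d KB%5s%% (%d KB/s)%n", 104, "32", 34);
    // prints: "       104 KB   32% (34 KB/s)"
  }
}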

The Stats class contains a factory method that returns a different strategy, depending on whether the content length is known. Having the factory method inside Stats allows us to introduce new types of Stats without modifying the context class, in this case Sucker.

import java.net.*;
import java.io.IOException;
import java.util.Date;

public abstract class Stats {
  private volatile int totalBytes;
  private long start = System.currentTimeMillis();

  public int seconds() {
    int result = (int) ((System.currentTimeMillis() - start) / 1000);
    return result == 0 ? 1 : result; // avoid div by zero
  }

  public void bytes(int length) {
    totalBytes += length;
  }

  public void print() {
    int kbpersecond = (int) (totalBytes / seconds() / 1024);
    System.out.printf("%10d KB%5s%% (%d KB/s)%n", totalBytes / 1024,
        calculatePercentageComplete(totalBytes), kbpersecond);
  }

  public abstract String calculatePercentageComplete(int bytes);

  public static Stats make(URL url) throws IOException {
    System.out.println(new Date() + " Opening connection to URL");
    URLConnection con = url.openConnection();
    System.out.println(new Date() + " Getting content length");
    int size = con.getContentLength();
    return size == -1 ? new BasicStats() : new ProgressStats(size);
  }
}

The ProgressStats class is used when we know the content length of the URL, otherwise BasicStats is used.

public class ProgressStats extends Stats {
  private final long contentLength;

  public ProgressStats(long contentLength) {
    this.contentLength = contentLength;
  }

  public String calculatePercentageComplete(int totalBytes) {
    return Long.toString(totalBytes * 100L / contentLength);
  }
}

public class BasicStats extends Stats {
  public String calculatePercentageComplete(int totalBytes) {
    return "???";
  }
}

Let's run the Sucker class. To download a picture of me at Tsinghua University in China, you would do the following:

java Sucker http://www.javaspecialists.co.za/pics/TsinghuaClass.jpg

which produces the following output on my slow connection to the internet:

    Wed Mar 08 12:24:27 GMT+02:00 2006 Constructing Sucker
Wed Mar 08 12:24:27 GMT+02:00 2006 Connected to URL
Wed Mar 08 12:24:27 GMT+02:00 2006 Opening connection to URL
Wed Mar 08 12:24:27 GMT+02:00 2006 Getting content length
Wed Mar 08 12:24:27 GMT+02:00 2006 Opening Streams
Wed Mar 08 12:24:28 GMT+02:00 2006 Streams opened
6 KB 2% (6 KB/s)
56 KB 17% (28 KB/s)
104 KB 32% (34 KB/s)
158 KB 49% (39 KB/s)
203 KB 63% (40 KB/s)
257 KB 79% (42 KB/s)
295 KB 91% (42 KB/s)
322 KB 100% (46 KB/s)

When I tried downloading the latest Tomcat version from my server, the speed was far more acceptable:

    Wed Mar 08 11:25:52 CET 2006 Constructing Sucker
Wed Mar 08 11:25:52 CET 2006 Connected to URL
Wed Mar 08 11:25:52 CET 2006 Opening connection to URL
Wed Mar 08 11:25:52 CET 2006 Getting content length
Wed Mar 08 11:25:57 CET 2006 Opening Streams
Wed Mar 08 11:25:58 CET 2006 Streams opened
1056 KB 18% (1056 KB/s)
2272 KB 38% (1136 KB/s)
3200 KB 54% (1066 KB/s)
4121 KB 70% (1030 KB/s)
5200 KB 89% (1040 KB/s)
5829 KB 100% (1165 KB/s)

There are ways of running this through a proxy as well, which you apparently do like this (according to my friends Pat Cousins and Leon Swanepoel):

    System.getProperties().put("proxySet", "true");
System.getProperties().put("proxyHost", "193.41.31.2");
System.getProperties().put("proxyPort", "8080");

If you need to supply a password, you can do that by changing the authenticator:

    Authenticator.setDefault(new Authenticator() {
protected PasswordAuthentication getPasswordAuthentication() {
return new PasswordAuthentication(
"username", "password".toCharArray());
}
});

I have not tried this out myself, so use at your own risk :)

That is all for this week. Thank you for your continued support by reading this newsletter, and forwarding it to your friends :)

Kind regards

Heinz


(http://www.javaspecialists.co.za/archive/newsletter.do?issue=122&print=yes&locale=en_US)

Monday, September 11, 2006

Computer Technology Explained.

Sending Emails from Java by Dr. Heinz M. Kabutz (for JDK version: JDK 1.5)

Abstract:
In this newsletter, we show how simple it is to send emails from Java. This should obviously not be used for sending unsolicited emails, but will nevertheless illustrate why we are flooded with SPAM.


Welcome to the 131st edition of The Java(tm) Specialists' Newsletter. This will be one of my last newsletters sent from South Africa, as we are moving to Greece in October. I sold my trusty Alfa Romeo 156 Twin Spark last week. At least she (sniff sniff) was sold to a good friend.

Since the last newsletter, our children count has increased by 50%. We are grateful for a safe arrival. Have a look at the announcement :) [http://heinz.blog-city.com/new_arrival_of_heinz_clone_3.htm]

Here is another copy of the quiz that I sent last month, which less than 25% got right. Even if you cannot make it to Oslo in September for my tutorial, have a look if you know the answer:

import java.util.*;
public class Conference {
  private Collection<String> delegates = new ArrayList<String>();

  public void add(String... names) {
    Collections.addAll(delegates, names);
  }

  public void removeFirst() {
    delegates.remove(0);
  }

  public String toString() {
    return "Conference " + delegates;
  }

  public static void main(String[] args) {
    Conference sun_tech_days = new Conference();
    sun_tech_days.add("Herman", "Bobby", "Robert");
    sun_tech_days.removeFirst();
    System.out.println(sun_tech_days);
  }
}

Sending Emails from Java

Sometimes we need to send an email to a group of friends to announce some event (birth of child, move to Greece, farewell party). Due to the scourge of SPAM, we have to be careful how we do this, otherwise our email will be caught in the net and the other party will not see it. Over the years of publishing this email newsletter, I have discovered several things:

  • Do not SPAM.
  • Don't start an email with "Dear ..."
  • If possible, avoid HTML tags. Text is best.
  • Definitely avoid JavaScript.
  • Don't send an email to 100 people by putting their addresses in the "TO", "CC" or "BCC" fields.
  • Use an SMTP server on a static IP address.
  • Do not SPAM.
    Let us imagine that I want to invite 30 friends for a "braai", which is a South African version of the barbecue. It works a bit differently here. First off, when we say come at 18:00, we mean 20:00. And if you do come at 20:00, don't expect the fire to have started yet. Another curious feature is that it is fairly common to ask your guests to bring their own meat and drinks. This way, the braai scales better. So here is my invitation, "braai.txt", where the first line is the subject:

        Invitation to Braai 5th August

        We are having a braai at our house on the 5th of August at 18:00
        to celebrate the birth of our daughter, Evangeline Kineta Kabutz.
        Be there or be square.  Bring own meat and drinks.  We will
        provide the salads, the fire and the music.

        Heinz + Helene

    Of course, we also need a list of email addresses that we can send the invitation to. These can be in various formats, but the one that I prefer is "FirstName Surname <email@address>". Here is the start of my file "addresses.txt":

        Heinz Kabutz <heinz@javaspecialists.co.za>
        Peter East <...>
        Bad Name <...>

    We need to create a utility class called FileCollection before we delve into the emailing. The FileCollection is a Collection of Strings that pulls in the contents of a text file at startup and contains all the lines as elements.

    package com.cretesoft.mailer;

    import java.util.*;
    import java.io.*;

    public class FileCollection extends ArrayList<String> {
      public FileCollection(String filename) throws IOException {
        BufferedReader in = new BufferedReader(new FileReader(filename));
        String s;
        while ((s = in.readLine()) != null) {
          add(s);
        }
        in.close();
      }
    }

    We need another utility class called MessageProvider, which extracts the subject and the message body from a file:

    package com.cretesoft.mailer;

    import java.io.*;
    import java.util.*;

    public class MessageProvider {
      private final String subject;
      private final String content;

      public MessageProvider(String filename) throws IOException {
        Iterator<String> lines = new FileCollection(filename).iterator();
        subject = lines.next();
        StringBuilder cb = new StringBuilder();
        while (lines.hasNext()) {
          cb.append(lines.next());
          cb.append('\n');
        }
        content = cb.toString();
      }

      public final String getSubject() {
        return subject;
      }

      public final String getContent() {
        return content;
      }
    }

    Now comes the tricky part of deciding which SMTP server to use. If you travel a lot, or move between ISPs and networks, you should use a server that allows you to authenticate yourself. Sending emails from your own machine, especially if you have a dynamic IP address, is almost guaranteed to land you in the SPAM bin. You will have to make your own arrangements with your ISP to find out what the SMTP server settings are.

    The MailSender class is currently hardcoded with my own settings which you need to replace with your own. All it does is create a transport for SMTP and then allow you to send the message to an email address.

    package com.cretesoft.mailer;

    import javax.mail.*;
    import javax.mail.internet.*;
    import java.util.*;

    public class MailSender {
      private static final String SMTP_SERVER =
          "smtp.javaspecialists.co.za";
      private static final String USERNAME =
          "heinz@javaspecialists.co.za";
      private static final String PASSWORD = "some_password";
      private static final String FROM =
          "Dr Heinz M. Kabutz <heinz@javaspecialists.co.za>";
      private static final String mailer = "TJSNMailer";

      private final Transport transport;
      private final Session session;
      private final MessageProvider provider;

      public MailSender(MessageProvider provider)
          throws MessagingException {
        this.provider = provider;
        Properties props = System.getProperties();
        props.put("mail.smtp.host", SMTP_SERVER);
        props.put("mail.smtp.auth", "true");
        // Get a Session object
        session = Session.getInstance(props, null);
        transport = session.getTransport("smtp");
        transport.connect(SMTP_SERVER, USERNAME, PASSWORD);
      }

      public void sendMessageTo(String to) throws MessagingException {
        Message msg = new MimeMessage(session);
        // set headers
        msg.setFrom(InternetAddress.parse(FROM, false)[0]);
        msg.setHeader("X-Mailer", mailer);
        msg.setSentDate(new Date());
        msg.setRecipients(Message.RecipientType.TO,
            InternetAddress.parse(to, false));

        // set title and body
        msg.setSubject(provider.getSubject());
        msg.setText(provider.getContent());

        // off goes the message...
        transport.sendMessage(msg, msg.getAllRecipients());
      }
    }

    Depending on how reliable your SMTP server is, you might need to build some retries into the sendMessageTo() method.
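    One simple way to do that is a small retry loop; the sketch below (a hypothetical helper that could be added to Mailer; the three attempts and five-second delay are arbitrary) gives up only after repeated failures:

    // Hypothetical helper for Mailer: retry a flaky send a few times
    private void sendWithRetry(MailSender sender, String to)
        throws MessagingException, InterruptedException {
      MessagingException lastFailure = null;
      for (int attempt = 1; attempt <= 3; attempt++) {
        try {
          sender.sendMessageTo(to);
          return;                  // success, no retry needed
        } catch (MessagingException e) {
          lastFailure = e;
          Thread.sleep(5000);      // wait a little before trying again
        }
      }
      throw lastFailure;           // give up after three attempts
    }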

    Lastly we have the Mailer class:

    package com.cretesoft.mailer;

    import javax.mail.MessagingException;
    import java.io.IOException;

    public class Mailer {
      private final FileCollection to;
      private final MessageProvider provider;

      public Mailer(String addressFile, String messageFile)
          throws IOException {
        to = new FileCollection(addressFile);
        provider = new MessageProvider(messageFile);
      }

      public void sendMessages() throws MessagingException {
        MailSender sender = new MailSender(provider);
        for (String email : to) {
          sender.sendMessageTo(email);
          System.out.println("Mail sent to " + email);
        }
      }

      public static void main(String[] args) throws Exception {
        if (args.length != 2) {
          System.err.println(
              "Usage: java Mailer address_file message_file");
          System.exit(1);
        }

        long time = -System.currentTimeMillis();
        Mailer sender = new Mailer(args[0], args[1]);
        sender.sendMessages();
        time += System.currentTimeMillis();
        System.out.println(time + "ms");
        System.out.println("Finished");
      }
    }

    When we run this (with the correct password), we get:

        Mail sent to Heinz Kabutz <heinz@javaspecialists.co.za>
        Mail sent to John Smith <...>
        Mail sent to Bad Name <...>
        17749ms
        Finished

    Application of Mailer

    I use this mailer in several applications. For example, when you fill in our enquiry form [http://www.javaspecialists.co.za/enquiry.jsp?code=general] it sends me a lovely email listing what you have filled in. In that case, I am sending an HTML email to myself. Using HTML looks smarter, but might get caught up in a SPAM net.

    Another application is to send an email when an exception occurs in a critical application. This way, we immediately know when a problem has occurred.

    Enhancements

    Besides retrying to establish transport connections, you can also improve the program by using multi-threading to create several concurrent connections to your ISP's SMTP server. Your ISP might not allow that. In fact, they might blacklist you if you send too many emails in a row. However, if they do allow it, you will get a great performance improvement.
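    As a rough sketch of that idea (the class name ParallelMailer and the pool size of three connections are my own choices, not from the newsletter), each worker thread gets its own MailSender, and therefore its own SMTP connection:

    package com.cretesoft.mailer;

    import java.util.*;
    import java.util.concurrent.*;

    public class ParallelMailer {
      private static final int CONNECTIONS = 3; // arbitrary; check with your ISP

      public static void main(String[] args) throws Exception {
        List<String> to = new FileCollection(args[0]);
        final MessageProvider provider = new MessageProvider(args[1]);
        ExecutorService pool = Executors.newFixedThreadPool(CONNECTIONS);
        // hand each worker its own slice of the address list
        int chunk = (to.size() + CONNECTIONS - 1) / CONNECTIONS;
        for (int i = 0; i < to.size(); i += chunk) {
          final List<String> slice =
              to.subList(i, Math.min(i + chunk, to.size()));
          pool.submit(new Callable<Object>() {
            public Object call() throws Exception {
              // one MailSender, i.e. one SMTP connection, per worker thread
              MailSender sender = new MailSender(provider);
              for (String email : slice) {
                sender.sendMessageTo(email);
                System.out.println("Mail sent to " + email);
              }
              return null;
            }
          });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
      }
    }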

    Kind regards

    Heinz

    (http://www.javaspecialists.co.za/archive/newsletter.do?issue=131&locale=en_US)

    Sunday, September 03, 2006

    (Figure omitted: a UML class diagram illustrating part of the Spring Framework's data access exception hierarchy.)

    Introduction to the Spring Framework By Rod Johnson

    Since the first version of this article was published in October, 2003, the Spring Framework has steadily grown in popularity. It has progressed through version 1.0 final to the present 1.2, and has been adopted in a wide range of industries and projects. In this article, I'll try to explain what Spring sets out to achieve, and how I believe it can help you to develop J2EE applications.

    Yet another framework?

    You may be thinking "not another framework." Why should you read this article, or download the Spring Framework (if you haven't already), when there are so many J2EE frameworks, or when you could build your own framework? The sustained high level of interest in the community is one indication that Spring must offer something valuable; there are also numerous technical reasons.

    I believe that Spring is unique, for several reasons:

    • It addresses important areas that many other popular frameworks don't. Spring focuses around providing a way to manage your business objects.
    • Spring is both comprehensive and modular. Spring has a layered architecture, meaning that you can choose to use just about any part of it in isolation, yet its architecture is internally consistent. So you get maximum value from your learning curve. You might choose to use Spring only to simplify use of JDBC, for example, or you might choose to use Spring to manage all your business objects. And it's easy to introduce Spring incrementally into existing projects.
    • Spring is designed from the ground up to help you write code that's easy to test. Spring is an ideal framework for test driven projects.
    • Spring is an increasingly important integration technology, its role recognized by several large vendors.

    Spring is not necessarily one more framework dependency for your project. Spring is potentially a one-stop shop, addressing most infrastructure concerns of typical applications. It also goes places other frameworks don't.

    An open source project since February 2003, Spring has a long heritage. The open source project started from infrastructure code published with my book, Expert One-on-One J2EE Design and Development, in late 2002. Expert One-on-One J2EE laid out the basic architectural thinking behind Spring. However, the architectural concepts go back to early 2000, and reflect my experience in developing infrastructure for a series of successful commercial projects.

    Since January 2003, Spring has been hosted on SourceForge. There are now 20 developers, with the leading contributors devoted full-time to Spring development and support. The flourishing open source community has helped it evolve into far more than could have been achieved by any individual.

    Architectural benefits of Spring

    Before we get down to specifics, let's look at some of the benefits Spring can bring to a project:

    • Spring can effectively organize your middle tier objects, whether or not you choose to use EJB. Spring takes care of plumbing that would be left up to you if you use only Struts or other frameworks geared to particular J2EE APIs. And while it is perhaps most valuable in the middle tier, Spring's configuration management services can be used in any architectural layer, in whatever runtime environment.
    • Spring can eliminate the proliferation of Singletons seen on many projects. In my experience, this is a major problem, reducing testability and object orientation.
    • Spring can eliminate the need to use a variety of custom properties file formats, by handling configuration in a consistent way throughout applications and projects. Ever wondered what magic property keys or system properties a particular class looks for, and had to read the Javadoc or even source code? With Spring you simply look at the class's JavaBean properties or constructor arguments. The use of Inversion of Control and Dependency Injection (discussed below) helps achieve this simplification.
    • Spring can facilitate good programming practice by reducing the cost of programming to interfaces, rather than classes, almost to zero.
    • Spring is designed so that applications built with it depend on as few of its APIs as possible. Most business objects in Spring applications have no dependency on Spring.
    • Applications built using Spring are very easy to unit test.
    • Spring can make the use of EJB an implementation choice, rather than the determinant of application architecture. You can choose to implement business interfaces as POJOs or local EJBs without affecting calling code.
    • Spring helps you solve many problems without using EJB. Spring can provide an alternative to EJB that's appropriate for many applications. For example, Spring can use AOP to deliver declarative transaction management without using an EJB container; even without a JTA implementation, if you only need to work with a single database.
    • Spring provides a consistent framework for data access, whether using JDBC or an O/R mapping product such as TopLink, Hibernate or a JDO implementation.
    • Spring provides a consistent, simple programming model in many areas, making it an ideal architectural "glue." You can see this consistency in the Spring approach to JDBC, JMS, JavaMail, JNDI and many other important APIs.

    Spring is essentially a technology dedicated to enabling you to build applications using POJOs. This desirable goal requires a sophisticated framework, which conceals much complexity from the developer.

    Thus Spring really can enable you to implement the simplest possible solution to your problems. And that's worth a lot.

    What does Spring do?

    Spring provides a lot of functionality, so I'll quickly review each major area in turn.

    Mission statement

    Firstly, let's be clear on Spring's scope. Although Spring covers a lot of ground, we have a clear vision as to what it should and shouldn't address.

    Spring's main aim is to make J2EE easier to use and promote good programming practice. It does this by enabling a POJO-based programming model that is applicable in a wide range of environments.

    Spring does not reinvent the wheel. Thus you'll find no logging packages in Spring, no connection pools, no distributed transaction coordinator. All these things are provided by open source projects (such as Commons Logging, which we use for all our log output, or Commons DBCP), or by your application server. For the same reason, we don't provide an O/R mapping layer. There are good solutions to this problem such as TopLink, Hibernate and JDO.

    Spring does aim to make existing technologies easier to use. For example, although we are not in the business of low-level transaction coordination, we do provide an abstraction layer over JTA or any other transaction strategy.

    Spring doesn't directly compete with other open source projects unless we feel we can provide something new. For example, like many developers, we have never been happy with Struts, and felt that there was room for improvement in MVC web frameworks. (With Spring MVC adoption growing rapidly, it seems that many agree with us.) In some areas, such as its lightweight IoC container and AOP framework, Spring does have direct competition, but Spring was a pioneer in those areas.

    Spring benefits from internal consistency. All the developers are singing from the same hymn sheet, the fundamental ideas remaining faithful to those of Expert One-on-One J2EE Design and Development. And we've been able to use some central concepts, such as Inversion of Control, across multiple areas.

    Spring is portable between application servers. Of course ensuring portability is always a challenge, but we avoid anything platform-specific or non-standard in the developer's view, and support users on WebLogic, Tomcat, Resin, JBoss, Jetty, Geronimo, WebSphere and other application servers. Spring's non-invasive, POJO, approach enables us to take advantage of environment-specific features without sacrificing portability, as in the case of enhanced WebLogic transaction management functionality in Spring 1.2 that uses BEA proprietary APIs under the covers.

    Inversion of control container

    The core of Spring is the org.springframework.beans package, designed for working with JavaBeans. This package typically isn't used directly by users, but underpins much Spring functionality.

    The next higher layer of abstraction is the bean factory. A Spring bean factory is a generic factory that enables objects to be retrieved by name, and which can manage relationships between objects.

    Bean factories support two modes of object:

    • Singleton: in this case, there's one shared instance of the object with a particular name, which will be retrieved on lookup. This is the default, and most often used, mode. It's ideal for stateless service objects.
    • Prototype or non-singleton: in this case, each retrieval will result in the creation of an independent object. For example, this could be used to allow each caller to have its own distinct object reference.

    Because the Spring container manages relationships between objects, it can add value where necessary through services such as transparent pooling for managed POJOs, and support for hot swapping, where the container introduces a level of indirection that allows the target of a reference to be swapped at runtime without affecting callers and without loss of thread safety. One of the beauties of Dependency Injection (discussed shortly) is that all this is possible transparently, with no API involved.

    As org.springframework.beans.factory.BeanFactory is a simple interface, it can be implemented in different ways. The BeanDefinitionReader interface separates the metadata format from BeanFactory implementations themselves, so the generic BeanFactory implementations Spring provides can be used with different types of metadata. You could easily implement your own BeanFactory or BeanDefinitionReader, although few users find a need to. The most commonly used BeanFactory definitions are:

    • XmlBeanFactory. This parses a simple, intuitive XML structure defining the classes and properties of named objects. We provide a DTD to make authoring easier.
    • DefaultListableBeanFactory: This provides the ability to parse bean definitions in properties files, and create BeanFactories programmatically.

    Each bean definition can be a POJO (defined by class name and JavaBean initialisation properties or constructor arguments), or a FactoryBean. The FactoryBean interface adds a level of indirection. Typically this is used to create proxied objects using AOP or other approaches: for example, proxies that add declarative transaction management. This is conceptually similar to EJB interception, but works out much simpler in practice, and is more powerful.

    BeanFactories can optionally participate in a hierarchy, "inheriting" definitions from their ancestors. This enables the sharing of common configuration across a whole application, while individual resources such as controller servlets also have their own independent set of objects.

    This motivation for the use of JavaBeans is described in Chapter 4 of Expert One-on-One J2EE Design and Development, which is available on the ServerSide as a free PDF (/articles/article.tss?l=RodJohnsonInterview).

    Through its bean factory concept, Spring is an Inversion of Control container. (I don't much like the term container, as it conjures up visions of heavyweight containers such as EJB containers. A Spring BeanFactory is a container that can be created in a single line of code, and requires no special deployment steps.) Spring is most closely identified with a flavor of Inversion of Control known as Dependency Injection--a name coined by Martin Fowler, Rod Johnson and the PicoContainer team in late 2003.
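    For example, creating and using a bean factory takes only a couple of lines of ordinary Java. In the sketch below, the file name beans.xml and the bean name dataSource are placeholders, chosen to match the example configuration discussed later in this article:

    import javax.sql.DataSource;

    import org.springframework.beans.factory.BeanFactory;
    import org.springframework.beans.factory.xml.XmlBeanFactory;
    import org.springframework.core.io.ClassPathResource;

    public class BeanFactoryExample {
      public static void main(String[] args) {
        // one line creates the container from an XML bean definition file
        BeanFactory factory =
            new XmlBeanFactory(new ClassPathResource("beans.xml"));
        // retrieve a fully configured object by name
        DataSource dataSource = (DataSource) factory.getBean("dataSource");
        System.out.println("Got: " + dataSource);
      }
    }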

    The concept behind Inversion of Control is often expressed in the Hollywood Principle: "Don't call me, I'll call you." IoC moves the responsibility for making things happen into the framework, and away from application code. Whereas your code calls a traditional class library, an IoC framework calls your code. Lifecycle callbacks in many APIs, such as the setSessionContext() method for session EJBs, demonstrate this approach.

    Dependency Injection is a form of IoC that removes explicit dependence on container APIs; ordinary Java methods are used to inject dependencies such as collaborating objects or configuration values into application object instances. Where configuration is concerned this means that while in traditional container architectures such as EJB, a component might call the container to say "where's object X, which I need to do my work", with Dependency Injection the container figures out that the component needs an X object, and provides it to it at runtime. The container does this figuring out based on method signatures (usually JavaBean properties or constructors) and, possibly, configuration data such as XML.

    The two major flavors of Dependency Injection are Setter Injection (injection via JavaBean setters); and Constructor Injection (injection via constructor arguments). Spring provides sophisticated support for both, and even allows you to mix the two when configuring the one object.
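    As a small illustration (the class and property names below are invented for this sketch, not taken from the article), the same dependency can be expressed either way, and the object remains a plain JavaBean:

    public class OrderService {
      private OrderDao orderDao;       // collaborator supplied by the container

      public OrderService() {
      }

      // Constructor Injection: the dependency arrives as a constructor argument
      public OrderService(OrderDao orderDao) {
        this.orderDao = orderDao;
      }

      // Setter Injection: the container calls this with the configured OrderDao
      public void setOrderDao(OrderDao orderDao) {
        this.orderDao = orderDao;
      }

      public void placeOrder(String item) {
        orderDao.save(item);
      }
    }

    // Hypothetical collaborator interface used above
    interface OrderDao {
      void save(String item);
    }

    Because nothing here depends on Spring, a unit test can simply instantiate OrderService with a stub OrderDao and exercise it directly.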

    As well as supporting all forms of Dependency Injection, Spring also provides a range of callback events, and an API for traditional lookup where necessary. However, we recommend a pure Dependency Injection approach in general.

    Dependency Injection has several important benefits. For example:

    • Because components don't need to look up collaborators at runtime, they're much simpler to write and maintain. In Spring's version of IoC, components express their dependency on other components via exposing JavaBean setter methods or through constructor arguments. The EJB equivalent would be a JNDI lookup, which requires the developer to write code that makes environmental assumptions.
    • For the same reasons, application code is much easier to test. For example, JavaBean properties are simple, core Java and easy to test: simply write a self-contained JUnit test method that creates the object and sets the relevant properties.
    • A good IoC implementation preserves strong typing. If you need to use a generic factory to look up collaborators, you have to cast the results to the desired type. This isn't a major problem, but it is inelegant. With IoC you express strongly typed dependencies in your code and the framework is responsible for type casts. This means that type mismatches will be raised as errors when the framework configures the application; you don't have to worry about class cast exceptions in your code.
    • Dependencies are explicit. For example, if an application class tries to load a properties file or connect to a database on instantiation, the environmental assumptions may not be obvious without reading the code (complicating testing and reducing deployment flexibility). With a Dependency Injection approach, dependencies are explicit, and evident in constructor or JavaBean properties.
    • Most business objects don't depend on IoC container APIs. This makes it easy to use legacy code, and easy to use objects either inside or outside the IoC container. For example, Spring users often configure the Jakarta Commons DBCP DataSource as a Spring bean: there's no need to write any custom code to do this. We say that an IoC container isn't invasive: using it won't invade your code with dependency on its APIs. Almost any POJO can become a component in a Spring bean factory. Existing JavaBeans or objects with multi-argument constructors work particularly well, but Spring also provides unique support for instantiating objects from static factory methods or even methods on other objects managed by the IoC container.

    This last point deserves emphasis. Dependency Injection is unlike traditional container architectures, such as EJB, in this minimization of dependency of application code on container. This means that your business objects can potentially be run in different Dependency Injection frameworks - or outside any framework - without code changes.

    In my experience and that of Spring users, it's hard to overemphasize the benefits that IoC--and, especially, Dependency Injection--brings to application code.

    Dependency Injection is not a new concept, although it has only recently made it to prime time in the J2EE community. There are alternative DI containers: notably, PicoContainer and HiveMind. PicoContainer is particularly lightweight and emphasizes the expression of dependencies through constructors rather than JavaBean properties. It does not use metadata outside Java code, which limits its functionality in comparison with Spring. HiveMind is conceptually more similar to Spring (also aiming at more than just IoC), although it lacks the comprehensive scope of the Spring project or the same scale of user community. EJB 3.0 will provide a basic DI capability as well.

    Spring BeanFactories are very lightweight. Users have successfully used them inside applets, as well as standalone Swing applications. (They also work fine within an EJB container.) There are no special deployment steps and no detectable startup time associated with the container itself (although certain objects configured by the container may of course take time to initialize). This ability to instantiate a container almost instantly in any tier of an application can be very valuable.

    The Spring BeanFactory concept is used throughout Spring, and is a key reason that Spring is so internally consistent. Spring is also unique among IoC containers in that it uses IoC as a basic concept throughout a full-featured framework.

    Most importantly for application developers, one or more BeanFactories provide a well-defined layer of business objects. This is analogous to a layer of local session beans, but is much simpler yet more powerful. Unlike EJBs, the objects in this layer can be interrelated, and their relationships managed by the owning factory. Having a well-defined layer of business objects is very important to a successful architecture.

    A Spring ApplicationContext is a subinterface of BeanFactory that adds support for:

    • Message lookup, supporting internationalization
    • An eventing mechanism, allowing application objects to publish and optionally register to be notified of events
    • Automatic recognition of special application-specific or generic bean definitions that customize container behavior
    • Portable file and resource access

    XmlBeanFactory example

    Spring users normally configure their applications in XML "bean definition" files. The root of a Spring XML bean definition document is a <beans> element. The <beans> element contains one or more <bean> definitions. We normally specify the class and properties of each bean definition. We must also specify the id, which is the name by which we'll refer to the bean in our code.

    Let's look at a simple example, which configures three application objects with relationships commonly seen in J2EE applications:

    • A J2EE DataSource
    • A DAO that uses the DataSource
    • A business object that uses the DAO in the course of its work

    In the following example, we use a BasicDataSource from the Jakarta Commons DBCP project. (ComboPooledDataSource from the C3P0 project is also an excellent option.) BasicDataSource, like many other existing classes, can easily be used in a Spring bean factory, as it offers JavaBean-style configuration. The close method that needs to be called on shutdown can be registered via Spring's "destroy-method" attribute, to avoid the need for BasicDataSource to implement any Spring interface.



    class="org.apache.commons.dbcp.BasicDataSource"
    destroy-method="close">



    All the properties of BasicDataSource we're interested in are Strings, so we specify their values with the "value" attribute. (This shortcut was introduced in Spring 1.2. It's a convenient alternative to the <value> subelement, which remains usable even for values that are problematic in XML attributes.) Spring uses the standard JavaBean PropertyEditor mechanism to convert String representations to other types if necessary.

    Now we define the DAO, which has a bean reference to the DataSource. Relationships between beans are specified using the "ref" attribute or <ref> element:

        class="example.ExampleDataAccessObject">

    The business object has a reference to the DAO, and an int property (exampleParam). In this case, I've used the <ref> and <value> subelement syntax familiar to those who've used Spring prior to 1.2:

      class="example.ExampleBusinessObject">

    10


    Relationships between objects are normally set explicitly in configuration, as in this example. We consider this to be a Good Thing in most cases. However, Spring also provides what we call "autowire" support, where it figures out the dependencies between beans. The limitation with this - as with PicoContainer - is that if there are multiple beans of a particular type it's impossible to work out which instance a dependency of that type should be resolved to. On the positive side, unsatisfied dependencies can be caught when the factory is initialized. (Spring also offers an optional dependency check for explicit configuration, which can achieve this goal.)

    We could use the autowire feature as follows in the above example, if we didn't want to code these relationships explicitly:

     class="example.ExampleBusinessObject"
    autowire="byType">


    With this usage, Spring will work out that the dataAccessObject property of exampleBusinessObject should be set to the ExampleDataAccessObject instance it finds in the present BeanFactory. It's an error if there is no bean, or more than one bean, of the required type in the present BeanFactory. We still need to set the exampleParam property, as it's not a reference.

    Autowire support has the advantage of reducing the volume of configuration. It also means that the container can learn about application structure using reflection, so if you add an additional constructor argument or JavaBean property, it may be successfully populated without any need to change configuration. The tradeoffs around autowiring need to be evaluated carefully.

    Externalizing relationships from Java code has an enormous benefit over hard-coding them, as it's possible to change the XML file without changing a line of Java code. For example, we could simply change the myDataSource bean definition to refer to a different bean class to use an alternative connection pool, or a test data source. We could use Spring's JNDI FactoryBean to get a DataSource from an application server in a single alternative XML stanza, as follows. There would be no impact on Java code or any other bean definitions.

     class="org.springframework.jndi.JndiObjectFactoryBean">

    Now let's look at the Java code for the example business object. Note that there are no Spring dependencies in the code listing below. Unlike an EJB container, a Spring BeanFactory is not invasive: you don't normally need to code awareness of it into application objects.

    public class ExampleBusinessObject implements MyBusinessObject {

        private ExampleDataAccessObject dao;
        private int exampleParam;

        public void setDataAccessObject(ExampleDataAccessObject dao) {
            this.dao = dao;
        }

        public void setExampleParam(int exampleParam) {
            this.exampleParam = exampleParam;
        }

        public void myBusinessMethod() {
            // do stuff using dao
        }
    }

    Note the property setters, which correspond to the XML references in the bean definition document. These are invoked by Spring before the object is used.

    Such application beans do not need to depend on Spring: they don't need to implement any Spring interfaces or extend Spring classes; they just need to observe the JavaBeans naming conventions. Reusing one outside of a Spring application context is easy, for example in a test environment. Just instantiate it with its default constructor, and set its properties manually, via setDataAccessObject() and setExampleParam() calls. So long as you have a no-args constructor, you're free to define other constructors taking multiple properties if you want to support programmatic construction in a single line of code.
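    As a hedged illustration of this point, a plain JUnit 3 test can exercise the business object with no container and no XML at all; the test class name is an assumption, and in practice you might substitute a stub subclass of ExampleDataAccessObject:

    import example.ExampleBusinessObject;
    import example.ExampleDataAccessObject;

    public class ExampleBusinessObjectTest extends junit.framework.TestCase {

        public void testMyBusinessMethod() {
            // No container, no XML: just a default constructor and JavaBean setters
            ExampleBusinessObject bo = new ExampleBusinessObject();
            bo.setDataAccessObject(new ExampleDataAccessObject()); // or a stub subclass
            bo.setExampleParam(10);

            bo.myBusinessMethod();
            // assert on the observable outcome here
        }
    }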

    Note that the JavaBean properties are not declared on the business interface callers will work with. They're an implementation detail. We can easily "plug in" different implementing classes that have different bean properties without affecting connected objects or calling code.

    Of course Spring XML bean factories have many more capabilities than described here, but this should give you a feel for the basic approach. As well as simple properties, and properties for which you have a JavaBeans PropertyEditor, Spring can handle lists, maps and java.util.Properties. Other advanced container capabilities include:

    • Inner beans, in which a property element contains an anonymous bean definition not visible at top-level scope
    • Post processors: special bean definitions that customize container behavior
    • Method Injection, a form of IoC in which the container implements an abstract method or overrides a concrete method to inject a dependency. This is a more rarely used form of Dependency Injection than Setter or Constructor Injection. However, it can be useful to avoid an explicit container dependency when looking up a new object instance for each invocation, or to allow configuration to vary over time--for example, with the method implementation being backed by a SQL query in one environment and a file system read in another. (The Java side of this technique is sketched after this list.)
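    To make that last item more concrete, here is a minimal sketch of the Java side of Method Injection. The CommandExecutor and Command names are hypothetical, and the container would be configured (for example via Spring's lookup-method support) to supply the implementation of the abstract method:

    public abstract class CommandExecutor {

        public void execute() {
            // A fresh, container-configured Command is obtained on every call,
            // without the executor ever touching container APIs
            Command command = createCommand();
            command.run();
        }

        // Implemented by the container at runtime, not by application code
        protected abstract Command createCommand();
    }

    // Hypothetical collaborator, typically a prototype-scoped bean (own source file)
    interface Command {
        void run();
    }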

    Bean factories and application contexts are often associated with a scope defined by the J2EE server or web container, such as:

    • The Servlet context. In the Spring MVC framework, an application context is defined for each web application containing common objects. Spring provides the ability to instantiate such a context through a listener or servlet without dependence on the Spring MVC framework, so it can also be used in Struts, WebWork or other web frameworks.
    • A Servlet: Each controller servlet in the Spring MVC framework has its own application context, derived from the root (application-wide) application context. It's also easy to accomplish this with Struts or another MVC framework.
    • EJB: Spring provides convenience superclasses for EJB that simplify EJB authoring and provide a BeanFactory loaded from an XML document in the EJB Jar file.

    These hooks provided by the J2EE specification generally avoid the need to use a Singleton to bootstrap a bean factory.

    However, it's trivial to instantiate a BeanFactory programmatically if we wish. For example, we could create the bean factory and get a reference to the business object defined above in just a couple of lines of code:

    XmlBeanFactory bf = new XmlBeanFactory(new ClassPathResource("myFile.xml", getClass()));
    MyBusinessObject mbo = (MyBusinessObject) bf.getBean("exampleBusinessObject");

    This code will work outside an application server: it doesn't even depend on J2EE, as the Spring IoC container is pure Java. The Spring Rich Client project (a framework for simplifying the development of Swing applications using Spring) demonstrates how Spring can be used outside a J2EE environment, as do Spring's integration testing features, discussed later in this article. Dependency Injection and the related functionality are too general and valuable to be confined to a J2EE, or server-side, environment.

    JDBC abstraction and data access exception hierarchy

    Data access is another area in which Spring shines.

    JDBC offers fairly good abstraction from the underlying database, but is a painful API to use. Some of the problems include:

    • The need for verbose error handling to ensure that ResultSets, Statements and (most importantly) Connections are closed after use. This means that correct use of JDBC can quickly result in a lot of code. It's also a common source of errors. Connection leaks can quickly bring applications down under load.
    • The relatively uninformative SQLException. JDBC does not offer an exception hierarchy, but throws SQLException in response to all errors. Finding out what actually went wrong - for example, was the problem a deadlock or invalid SQL? - involves examining the SQLState value and error code. The meaning of these values varies between databases.

    Spring addresses these problems in two ways:

    • By providing APIs that move tedious and error-prone exception handling out of application code into the framework. The framework takes care of all exception handling; application code can concentrate on issuing the appropriate SQL and extracting results.
    • By providing a meaningful exception hierarchy for your application code to work with in place of SQLException. When Spring first obtains a connection from a DataSource it examines the metadata to determine the database product. It uses this knowledge to map SQLExceptions to the correct exception in its own hierarchy descended from org.springframework.dao.DataAccessException. Thus your code can work with meaningful exceptions, and need not worry about proprietary SQLState or error codes. Spring's data access exceptions are not JDBC-specific, so your DAOs are not necessarily tied to JDBC because of the exceptions they may throw.

    The following UML class diagram illustrates a part of this data access exception hierarchy, indicating its sophistication. Note that none of the exceptions shown here is JDBC-specific. There are JDBC-specific subclasses of some of these exceptions, but calling code is generally abstracted wholly away from dependence on JDBC: essential if you wish to use truly API-agnostic DAO interfaces to hide your persistence strategy.

    Spring provides two levels of JDBC abstraction API. The first, in the org.springframework.jdbc.core package, uses callbacks to move control - and hence error handling and connection acquisition and release - from application code into the framework. This is a different type of Inversion of Control, but just as valuable as the type used for configuration management.

    Spring uses a similar callback approach to address several other APIs that involve special steps to acquire and clean up resources, such as JDO (acquiring and relinquishing a PersistenceManager), transaction management (using JTA) and JNDI. Spring classes that perform such callbacks are called templates.

    For example, the Spring JdbcTemplate object can be used to perform a SQL query and save the results in a list as follows:

    JdbcTemplate template = new JdbcTemplate(dataSource);
    List names = template.query("SELECT USER.NAME FROM USER",
        new RowMapper() {
            public Object mapRow(ResultSet rs, int rowNum) throws SQLException {
                return rs.getString(1);
            }
        });

    The mapRow callback method will be invoked for each row of the ResultSet.

    Note that application code within the callback is free to throw SQLException: Spring will catch any exceptions and rethrow them in its own hierarchy. The application developer can choose which exceptions, if any, to catch and handle.

    The JdbcTemplate provides many methods to support different scenarios including prepared statements and batch updates. Simple tasks like running SQL functions can be accomplished without a callback, as follows. The example also illustrates the use of bind variables:

    int youngUserCount = template.queryForInt("SELECT COUNT(0) FROM USER WHERE USER.AGE < ?",
    new Object[] { new Integer(25) });
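    The batch update support mentioned above follows the same callback pattern. The following is a hedged sketch; the UserBatchUpdater class, the USER table columns and the List-based parameters are assumptions for illustration:

    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.util.List;

    import javax.sql.DataSource;

    import org.springframework.jdbc.core.BatchPreparedStatementSetter;
    import org.springframework.jdbc.core.JdbcTemplate;

    public class UserBatchUpdater {

        private final JdbcTemplate template;

        public UserBatchUpdater(DataSource dataSource) {
            this.template = new JdbcTemplate(dataSource);
        }

        // Inserts each name/age pair in a single JDBC batch; returns update counts
        public int[] insertUsers(final List names, final List ages) {
            return template.batchUpdate(
                "INSERT INTO USER (NAME, AGE) VALUES (?, ?)",
                new BatchPreparedStatementSetter() {
                    public void setValues(PreparedStatement ps, int i) throws SQLException {
                        ps.setString(1, (String) names.get(i));
                        ps.setInt(2, ((Integer) ages.get(i)).intValue());
                    }
                    public int getBatchSize() {
                        return names.size();
                    }
                });
        }
    }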

    The Spring JDBC abstraction has a very low performance overhead beyond standard JDBC, even when working with huge result sets. (In one project in 2004, we profiled the performance of a financial application performing up to 1.2 million inserts per transaction. The overhead of Spring JDBC was minimal, and the use of Spring facilitated the tuning of batch sizes and other parameters.)

    The higher level JDBC abstraction is in the org.springframework.jdbc.object package. This is built on the core JDBC callback functionality, but provides an API in which an RDBMS operation - whether query, update or stored procedure - is modelled as a Java object. This API was partly inspired by the JDO query API, which I found intuitive and highly usable.

    A query object to return User objects might look like this:

    class UserQuery extends MappingSqlQuery {

        public UserQuery(DataSource datasource) {
            super(datasource, "SELECT * FROM PUB_USER_ADDRESS WHERE USER_ID = ?");
            declareParameter(new SqlParameter(Types.NUMERIC));
            compile();
        }

        // Map a result set row to a Java object
        protected Object mapRow(ResultSet rs, int rownum) throws SQLException {
            User user = new User();
            user.setId(rs.getLong("USER_ID"));
            user.setForename(rs.getString("FORENAME"));
            return user;
        }

        public User findUser(long id) {
            // Use superclass convenience method to provide strong typing
            return (User) findObject(id);
        }
    }

    This class can be used as follows:

    User user = userQuery.findUser(25);

    Such objects are often inner classes inside DAOs. They are threadsafe, unless the subclass does something unusual.

    Another important class in the org.springframework.jdbc.object package is the StoredProcedure class. Spring enables a stored procedure to be proxied by a Java class with a single business method. If you like, you can define an interface that the stored procedure implements, meaning that you can free your application code from depending on the use of a stored procedure at all.
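    As a hedged sketch of that idea, the following class wraps a stored procedure behind a single strongly typed business method; the procedure name and parameter names are assumptions:

    import java.sql.Types;
    import java.util.HashMap;
    import java.util.Map;

    import javax.sql.DataSource;

    import org.springframework.jdbc.core.SqlOutParameter;
    import org.springframework.jdbc.core.SqlParameter;
    import org.springframework.jdbc.object.StoredProcedure;

    public class GetUserNameProcedure extends StoredProcedure {

        public GetUserNameProcedure(DataSource dataSource) {
            super(dataSource, "GET_USER_NAME");   // hypothetical procedure name
            declareParameter(new SqlParameter("in_id", Types.NUMERIC));
            declareParameter(new SqlOutParameter("out_name", Types.VARCHAR));
            compile();
        }

        // The single business method callers see; no JDBC or procedure details leak out
        public String getUserName(long id) {
            Map in = new HashMap();
            in.put("in_id", new Long(id));
            Map out = execute(in);
            return (String) out.get("out_name");
        }
    }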

    The Spring data access exception hierarchy is based on unchecked (runtime) exceptions. Having worked with Spring on several projects I'm more and more convinced that this was the right decision.

    Data access exceptions are not usually recoverable. For example, if we can't connect to the database, a particular business object is unlikely to be able to work around the problem. One potential exception is optimistic locking violations, but not all applications use optimistic locking. It's usually bad to be forced to write code to catch fatal exceptions that can't be sensibly handled. Letting them propagate to top-level handlers like the servlet or EJB container is usually more appropriate. All Spring data access exceptions are subclasses of DataAccessException, so if we do choose to catch all Spring data access exceptions, we can easily do so.

    Note that if we do want to recover from an unchecked data access exception, we can still do so. We can write code to handle only the recoverable condition. For example, if we consider that only an optimistic locking violation is recoverable, we can write code in a Spring DAO as follows:

    try {
    // do work
    }
    catch (OptimisticLockingFailureException ex) {
    // I'm interested in this
    }

    If Spring data access exceptions were checked, we'd need to write the following code. Note that we could choose to write this anyway:

    try {
    // do work
    }
    catch (OptimisticLockingFailureException ex) {
    // I'm interested in this
    }
    catch (DataAccessException ex) {
    // Fatal; just rethrow it
    }

    One potential objection to the first example - that the compiler can't enforce handling the potentially recoverable exception - applies also to the second. Because we're forced to catch the base exception (DataAccessException), the compiler won't enforce a check for a subclass (OptimisticLockingFailureException). So the compiler would force us to write code to handle an unrecoverable problem, but provide no help in forcing us to deal with the recoverable problem.

    Spring's use of unchecked data access exceptions is consistent with that of many - probably most - successful persistence frameworks. (Indeed, it was partly inspired by JDO.) JDBC is one of the few data access APIs to use checked exceptions. TopLink and JDO, for example, use unchecked exceptions exclusively. Hibernate switched from checked to unchecked exceptions in version 3.

    Spring JDBC can help you in several ways:

    • You'll never need to write a finally block again to use JDBC
    • Connection leaks will be a thing of the past
    • You'll need to write less code overall, and that code will be clearly focused on the necessary SQL
    • You'll never need to dig through your RDBMS documentation to work out what obscure error code it returns for a bad column name. Your application won't be dependent on RDBMS-specific error handling code.
    • Whatever persistence technology you use, you'll find it easy to implement the DAO pattern without business logic depending on any particular data access API.
    • You'll benefit from improved portability (compared to raw JDBC) in advanced areas such as BLOB handling and invoking stored procedures that return result sets.

    In practice we find that all this amounts to substantial productivity gains and fewer bugs. I used to loathe writing JDBC code; now I find that I can focus on the SQL I want to execute, rather than the incidentals of JDBC resource management.

    Spring's JDBC abstraction can be used standalone if desired - you are not forced to use the other parts of Spring.

    O/R mapping integration

    Of course, you will often want to use O/R mapping rather than JDBC-based relational data access. Your overall application framework must support this as well. Thus Spring integrates out of the box with Hibernate (versions 2 and 3), JDO (versions 1 and 2), TopLink and other ORM products. Its data access architecture allows it to integrate with any underlying data access technology. Spring and Hibernate are a particularly popular combination.

    Why would you use an ORM product plus Spring, instead of the ORM product directly? Spring adds significant value in the following areas:

    • Session management. Spring offers efficient, easy, and safe handling of units of work such as Hibernate or TopLink Sessions. Related code using the ORM tool alone generally needs to use the same "Session" object for efficiency and proper transaction handling. Spring can transparently create and bind a session to the current thread, using either a declarative, AOP method interceptor approach, or an explicit, "template" wrapper class at the Java code level. Thus Spring solves many of the usage issues that affect many users of ORM technology. (A template-based DAO is sketched after this list.)
    • Resource management. Spring application contexts can handle the location and configuration of Hibernate SessionFactories, JDBC datasources, and other related resources. This makes these values easy to manage and change.
    • Integrated transaction management. Spring allows you to wrap your ORM code with either a declarative, AOP method interceptor, or an explicit 'template' wrapper class at the Java code level. In either case, transaction semantics are handled for you, and proper transaction handling (rollback, etc.) in case of exceptions is taken care of. As we discuss later, you also get the benefit of being able to use and swap various transaction managers, without your ORM-related code being affected. As an added benefit, JDBC-related code can fully integrate transactionally with ORM code, in the case of most supported ORM tools. This is useful for handling functionality not amenable to ORM.
    • Exception wrapping, as described above. Spring can wrap exceptions from the ORM layer, converting them from proprietary (possibly checked) exceptions, to a set of abstracted runtime exceptions. This allows you to handle most persistence exceptions, which are non-recoverable, only in the appropriate layers, without annoying boilerplate catches/throws, and exception declarations. You can still trap and handle exceptions anywhere you need to. Remember that JDBC exceptions (including DB specific dialects) are also converted to the same hierarchy, meaning that you can perform some operations with JDBC within a consistent programming model.
    • To avoid vendor lock-in. ORM solutions have different performance and other characteristics, and there is no perfect one-size-fits-all solution. Alternatively, you may find that certain functionality is just not suited to an implementation using your ORM tool. Thus it makes sense to decouple your architecture from the tool-specific implementations of your data access object interfaces. If you ever need to switch to another implementation for reasons of functionality, performance, or any other concern, using Spring now can make the eventual switch much easier. Spring's abstraction of your ORM tool's transactions and exceptions, along with its IoC approach, which allows you to easily swap in mapper/DAO objects implementing data-access functionality, makes it easy to isolate all ORM-specific code in one area of your application, without sacrificing any of the power of your ORM tool. The PetClinic sample application shipped with Spring demonstrates the portability benefits that Spring offers, by providing variants that use JDBC, Hibernate, TopLink and Apache OJB to implement the persistence layer.
    • Ease of testing. Spring's inversion of control approach makes it easy to swap the implementations and locations of resources such as Hibernate session factories, datasources, transaction managers, and mapper object implementations (if needed). This makes it much easier to isolate and test each piece of persistence-related code in isolation.
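    As an illustration of the "template" approach mentioned in the session management point above, here is a minimal Hibernate 3 DAO sketch; the User entity mapping and the findByForename() method are assumptions:

    import java.util.List;

    import org.hibernate.SessionFactory;
    import org.springframework.orm.hibernate3.HibernateTemplate;

    public class HibernateUserDao {

        private final HibernateTemplate hibernateTemplate;

        public HibernateUserDao(SessionFactory sessionFactory) {
            this.hibernateTemplate = new HibernateTemplate(sessionFactory);
        }

        public List findByForename(String forename) {
            // Session acquisition, binding to the current transaction, exception
            // translation to DataAccessException and session release are all
            // handled by the template
            return hibernateTemplate.find("from User u where u.forename = ?", forename);
        }
    }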

    Above all, Spring facilitates a mix-and-match approach to data access. Despite the claims of some ORM vendors, ORM is not the solution to all problems, although it is a valuable productivity win in many cases. Spring enables a consistent architecture, and transaction strategy, even if you mix and match persistence approaches, even without using JTA.

    In cases where ORM is not ideally suited, Spring's simplified JDBC is not the only option: the "mapped statement" approach provided by iBATIS SQL Maps is worth a look. It provides a high level of control over SQL, while still automating the creation of mapped objects from query results. Spring integrates with SQL Maps out of the box. Spring's PetStore sample application illustrates iBATIS integration.

    Transaction management

    Abstracting a data access API is not enough; we also need to consider transaction management. JTA is the obvious solution, but it's a cumbersome API to use directly, and as a result many J2EE developers used to feel that EJB CMT was the only rational option for transaction management. Spring has changed that.

    Spring provides its own abstraction for transaction management. Spring uses this to deliver:

    • Programmatic transaction management via a callback template analogous to the JdbcTemplate, which is much easier to use than straight JTA (a minimal sketch appears after this list)
    • Declarative transaction management analogous to EJB CMT, but without the need for an EJB container. Actually, as we'll see, Spring's declarative transaction management capability is a semantically compatible superset of EJB CMT, with some unique and important benefits.
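    The programmatic flavor looks roughly like the following hedged sketch. AccountService and transferFunds() are hypothetical, and the PlatformTransactionManager injected into it is whatever strategy has been configured (JDBC, Hibernate, JTA and so on):

    import org.springframework.transaction.PlatformTransactionManager;
    import org.springframework.transaction.TransactionStatus;
    import org.springframework.transaction.support.TransactionCallbackWithoutResult;
    import org.springframework.transaction.support.TransactionTemplate;

    public class AccountService {

        private final TransactionTemplate transactionTemplate;

        public AccountService(PlatformTransactionManager transactionManager) {
            // The strategy behind the manager is a configuration choice, not a code choice
            this.transactionTemplate = new TransactionTemplate(transactionManager);
        }

        public void transferFunds() {
            transactionTemplate.execute(new TransactionCallbackWithoutResult() {
                protected void doInTransactionWithoutResult(TransactionStatus status) {
                    // do transactional work; an unchecked exception here triggers rollback
                }
            });
        }
    }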

    Spring's transaction abstraction is unique in that it's not tied to JTA or any other transaction management technology. Spring uses the concept of a transaction strategy that decouples application code from the underlying transaction infrastructure (such as JDBC).

    Why should you care about this? Isn't JTA the best answer for all transaction management? If you're writing an application that uses only a single database, you don't need the complexity of JTA. You're not interested in XA transactions or two phase commit. You may not even need a high-end application server that provides these things. But, on the other hand, you don't want to have to rewrite your code should you ever have to work with multiple data sources.

    Imagine you decide to avoid the overhead of JTA by using JDBC or Hibernate transactions directly. If you ever need to work with multiple data sources, you'll have to rip out all that transaction management code and replace it with JTA transactions. This isn't very attractive and led most writers on J2EE, including myself, to recommend using global JTA transactions exclusively, effectively ruling out using a simple web container such as Tomcat for transactional applications. Using the Spring transaction abstraction, however, you only have to reconfigure Spring to use a JTA, rather than JDBC or Hibernate, transaction strategy and you're done. This is a configuration change, not a code change. Thus, Spring enables you to write applications that can scale down as well as up.

    AOP

    Since 2003 there has been much interest in applying AOP solutions to those enterprise concerns, such as transaction management, which have traditionally been addressed by EJB.

    The first goal of Spring's AOP support is to provide J2EE services to POJOs. Spring AOP is portable between application servers, so there's no risk of vendor lock-in. It works in either a web or an EJB container, and has been used successfully in WebLogic, Tomcat, JBoss, Resin, Jetty, Orion and many other application servers and web containers.

    Spring AOP supports method interception. Key AOP concepts supported include:

    • Interception: Custom behaviour can be inserted before or after method invocations against any interface or class. This is similar to "around advice" in AspectJ terminology.
    • Introduction: Specifying that an advice should cause an object to implement additional interfaces. This can amount to mixin inheritance.
    • Static and dynamic pointcuts: Specifying the points in program execution at which interception should take place. Static pointcuts concern method signatures; dynamic pointcuts may also consider method arguments at the point where they are evaluated. Pointcuts are defined separately from interceptors, enabling a standard interceptor to be applied in different applications and code contexts.

    Spring supports both stateful (one instance per advised object) and stateless interceptors (one instance for all advice).
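    As a concrete illustration, a minimal stateless interceptor written against the AOP Alliance MethodInterceptor interface might look like this sketch; the timing concern is just an example:

    import org.aopalliance.intercept.MethodInterceptor;
    import org.aopalliance.intercept.MethodInvocation;

    public class TimingInterceptor implements MethodInterceptor {

        public Object invoke(MethodInvocation invocation) throws Throwable {
            long start = System.currentTimeMillis();
            try {
                // Proceed down the interceptor chain to the target method
                return invocation.proceed();
            }
            finally {
                long elapsed = System.currentTimeMillis() - start;
                System.out.println(invocation.getMethod().getName() + " took " + elapsed + " ms");
            }
        }
    }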

    Spring does not support field interception. This is a deliberate design decision. I have always felt that field interception violates encapsulation. I prefer to think of AOP as complementing, rather than conflicting with, OOP. In five or ten years time we will probably have travelled a lot farther on the AOP learning curve and feel comfortable giving AOP a seat at the top table of application design. (At that point language-based solutions such as AspectJ may be far more attractive than they are today.)

    Spring implements AOP using dynamic proxies (where an interface exists) or CGLIB byte code generation at runtime (which enables proxying of classes). Both these approaches work in any application server, or in a standalone environment.

    Spring was the first AOP framework to implement the AOP Alliance interfaces (www.sourceforge.net/projects/aopalliance). These represent an attempt to define interfaces allowing interoperability of interceptors between AOP frameworks.

    Spring integrates with AspectJ, providing the ability to seamlessly include AspectJ aspects in Spring applications. Since Spring 1.1 it has been possible to dependency inject AspectJ aspects using the Spring IoC container, just like any Java class. Thus AspectJ aspects can depend on any Spring-managed objects. The integration with the forthcoming AspectJ 5 release is still more exciting, with AspectJ set to provide the ability to dependency inject any POJO using Spring, based on an annotation-driven pointcut.

    Because Spring advises objects at instance, rather than class loader, level, it is possible to use multiple instances of the same class with different advice, or use unadvised instances along with advised instances.

    Perhaps the commonest use of Spring AOP is for declarative transaction management. This builds on the transaction abstraction described above, and can deliver declarative transaction management on any POJO. Depending on the transaction strategy, the underlying mechanism can be JTA, JDBC, Hibernate or any other API offering transaction management.

    The following are the key differences from EJB CMT:

    • Transaction management can be applied to any POJO. We recommend that business objects implement interfaces, but this is a matter of good programming practice, and is not enforced by the framework.
    • Programmatic rollback can be achieved within a transactional POJO through using the Spring transaction API. We provide static methods for this, using ThreadLocal variables, so you don't need to propagate a context object such as an EJBContext to ensure rollback.
    • You can define rollback rules declaratively. Whereas EJB will not automatically roll back a transaction on an uncaught application exception (only on unchecked exceptions, other types of Throwable and "system" exceptions), application developers often want a transaction to roll back on any exception. Spring transaction management allows you to specify declaratively which exceptions and subclasses should cause automatic rollback. Default behaviour is as with EJB, but you can specify automatic rollback on checked, as well as unchecked exceptions. This has the important benefit of minimizing the need for programmatic rollback, which creates a dependence on the Spring transaction API (as EJB programmatic rollback does on the EJBContext).
    • Because the underlying Spring transaction abstraction supports savepoints if they are supported by the underlying transaction infrastructure, Spring's declarative transaction management can support nested transactions, in addition to the propagation modes specified by EJB CMT (which Spring supports with identical semantics to EJB). Thus, for example, if you are doing JDBC operations on Oracle, you can use declarative nested transactions with Spring.
    • Transaction management is not tied to JTA. As explained above, Spring transaction management can work with different transaction strategies.

    It's also possible to use Spring AOP to implement application-specific aspects. Whether or not you choose to do this depends on your level of comfort with AOP concepts, rather than Spring's capabilities, but it can be very useful. Successful examples we've seen include:

    • Custom security interception, where the complexity of security checks required is beyond the capability of the standard J2EE security infrastructure. (Of course, before rolling your own security infrastructure, you should check the capabilities of Acegi Security for Spring, a powerful, flexible security framework that integrates with Spring using AOP, and reflects Spring's architectural approach.)
    • Debugging and profiling aspects for use during development
    • Aspects that apply consistent exception handling policies in a single place
    • Interceptors that send emails to alert administrators or users of unusual scenarios

    Application-specific aspects can be a powerful way of removing the need for boilerplate code across many methods.

    Spring AOP integrates transparently with the Spring BeanFactory concept. Code obtaining an object from a Spring BeanFactory doesn't need to know whether or not it is advised. As with any object, the contract will be defined by the interfaces the object implements.

    The following XML stanza illustrates how to define an AOP proxy:

     class="org.springframework.aop.framework.ProxyFactoryBean">

    org.springframework.beans.ITestBean



    txInterceptor
    target


    Note that the class of the bean definition is always the AOP framework's ProxyFactoryBean, although the type of the bean as used in references or returned by the BeanFactory getBean() method will depend on the proxy interfaces. (Multiple proxy interfaces are supported.) The "interceptorNames" property of the ProxyFactoryBean takes a list of Strings. (Bean names must be used rather than bean references, as new instances of stateful interceptors may need to be created if the proxy is a "prototype", rather than a singleton bean definition.) The names in this list can be interceptors or pointcuts (interceptors and information about when they should apply). The "target" value in the list above automatically creates an "invoker interceptor" wrapping the target object. It is the name of a bean in the factory that implements the proxy interface. The myTest bean in this example can be used like any other bean in the bean factory. For example, other objects can reference it via <ref> elements and these references will be set by Spring IoC.

    There are a number of ways to set up proxying more concisely, if you don't need the full power of the AOP framework, such as using Java 5.0 annotations to drive transactional proxying without XML metadata, or the ability to use a single piece of XML to apply a consistent proxying strategy to many beans defined in a Spring factory.
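    For instance, with Spring 1.2 on Java 5 a service class can be marked transactional with an annotation rather than XML metadata. The class and method below are hypothetical, and the container must still be configured to apply transactional proxying to annotated beans:

    import org.springframework.transaction.annotation.Transactional;

    // Hedged sketch: every public method of this class runs in a transaction supplied
    // by whatever transaction strategy has been configured
    @Transactional
    public class OrderService {

        public void placeOrder(String orderId) {
            // Business logic here; an unchecked exception causes rollback by default
        }
    }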

    It's also possible to construct AOP proxies programmatically without using a BeanFactory, although this is more rarely used:

    TestBean target = new TestBean();
    DebugInterceptor di = new DebugInterceptor();
    MyInterceptor mi = new MyInterceptor();
    ProxyFactory factory = new ProxyFactory(target);
    factory.addInterceptor(0, di);
    factory.addInterceptor(1, mi);
    // An "invoker interceptor" is automatically added to wrap the target
    ITestBean tb = (ITestBean) factory.getProxy();

    We believe that it's generally best to externalize the wiring of applications from Java code, and AOP is no exception.

    The use of AOP as an alternative to EJB (version 2 or above) for delivering enterprise services is growing in importance. Spring has successfully demonstrated the value proposition.

    MVC web framework

    Spring includes a powerful and highly configurable MVC web framework.

    Spring's MVC model is most similar to that of Struts, although it is not derived from Struts. A Spring Controller is similar to a Struts Action in that it is a multithreaded service object, with a single instance executing on behalf of all clients. However, we believe that Spring MVC has some significant advantages over Struts. For example:

    • Spring provides a very clean division between controllers, JavaBean models, and views.
    • Spring's MVC is very flexible. Unlike Struts, which forces your Action and Form objects into concrete inheritance (thus taking away your single shot at concrete inheritance in Java), Spring MVC is entirely based on interfaces. Furthermore, just about every part of the Spring MVC framework is configurable via plugging in your own interface. Of course we also provide convenience classes as an implementation option.
    • Spring, like WebWork, provides interceptors as well as controllers, making it easy to factor out behavior common to the handling of many requests.
    • Spring MVC is truly view-agnostic. You don't get pushed to use JSP if you don't want to; you can use Velocity, XSLT or other view technologies. If you want to use a custom view mechanism - for example, your own templating language - you can easily implement the Spring View interface to integrate it.
    • Spring Controllers are configured via IoC like any other objects. This makes them easy to test, and beautifully integrated with other objects managed by Spring.
    • Spring MVC web tiers are typically easier to test than Struts web tiers, due to the avoidance of forced concrete inheritance and explicit dependence of controllers on the dispatcher servlet.
    • The web tier becomes a thin layer on top of a business object layer. This encourages good practice. Struts and other dedicated web frameworks leave you on your own in implementing your business objects; Spring provides an integrated framework for all tiers of your application.

    As in Struts 1.1 and above, you can have as many dispatcher servlets as you need in a Spring MVC application.

    The following example shows how a simple Spring Controller can access business objects defined in the same application context. This controller performs a Google search in its handleRequest() method:

    public class GoogleSearchController implements Controller {

        private IGoogleSearchPort google;

        private String googleKey;

        public void setGoogle(IGoogleSearchPort google) {
            this.google = google;
        }

        public void setGoogleKey(String googleKey) {
            this.googleKey = googleKey;
        }

        public ModelAndView handleRequest(
                HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            String query = request.getParameter("query");
            GoogleSearchResult result =
                // Google property definitions omitted...

                // Use google business object
                google.doGoogleSearch(this.googleKey, query,
                    start, maxResults, filter, restrict,
                    safeSearch, lr, ie, oe);

            return new ModelAndView("googleResults", "result", result);
        }
    }

    In the prototype this code is taken from, IGoogleSearchPort is a GLUE web services proxy, returned by a Spring FactoryBean. However, Spring IoC isolates this controller from the underlying web services library. The interface could equally be implemented by a plain Java object, test stub, mock object, or EJB proxy, as discussed below. This controller contains no resource lookup; nothing except code necessary to support its web interaction.

    Spring also provides support for data binding, forms, wizards and more complex workflow. A forthcoming article in this series will discuss Spring MVC in detail.

    If your requirements are really complex, you should consider Spring Web Flow, a powerful framework that provides a higher level of abstraction for web flows than any traditional web MVC framework, and was discussed in a recent TSS article by its architect, Keith Donald.

    A good introduction to the Spring MVC framework is Thomas Risberg's Spring MVC tutorial (http://www.springframework.org/docs/MVC-step-by-step/Spring-MVC-step-by-step.html). See also "Web MVC with the Spring Framework" (http://www.springframework.org/docs/web_mvc.html).

    If you're happy with your favourite MVC framework, Spring's layered infrastructure allows you to use the rest of Spring without our MVC layer. We have Spring users who use Spring for middle tier management and data access but use Struts, WebWork, Tapestry or JSF in the web tier.

    Implementing EJBs

    If you choose to use EJB, Spring can provide important benefits in both EJB implementation and client-side access to EJBs.

    It's now widely regarded as a best practice to refactor business logic into POJOs behind EJB facades. (Among other things, this makes it much easier to unit test business logic, as EJBs depend heavily on the container and are hard to test in isolation.) Spring provides convenient superclasses for session beans and message driven beans that make this very easy, by automatically loading a BeanFactory based on an XML document included in the EJB Jar file.

    This means that a stateless session EJB might obtain and use a collaborator like this:

    import org.springframework.ejb.support.AbstractStatelessSessionBean;

    public class MyEJB extends AbstractStatelessSessionBean
            implements MyBusinessInterface {

        private MyPOJO myPOJO;

        protected void onEjbCreate() {
            this.myPOJO = (MyPOJO) getBeanFactory().getBean("myPOJO");
        }

        public void myBusinessMethod() {
            this.myPOJO.invokeMethod();
        }
    }

    Assuming that MyPOJO is an interface, the implementing class - and any configuration it requires, such as primitive properties and further collaborators - is hidden in the XML bean factory definition.

    We tell Spring where to load the XML document via an environment variable definition named ejb/BeanFactoryPath in the standard ejb-jar.xml deployment descriptor, as follows:


    <session>
        <ejb-name>myComponent</ejb-name>
        <local-home>com.test.ejb.myEjbBeanLocalHome</local-home>
        <local>com.mycom.MyComponentLocal</local>
        <ejb-class>com.mycom.MyComponentEJB</ejb-class>
        <session-type>Stateless</session-type>
        <transaction-type>Container</transaction-type>

        <env-entry>
            <env-entry-name>ejb/BeanFactoryPath</env-entry-name>
            <env-entry-type>java.lang.String</env-entry-type>
            <env-entry-value>/myComponent-ejb-beans.xml</env-entry-value>
        </env-entry>
    </session>


    The myComponent-ejb-beans.xml file will be loaded from the classpath: in this case, in the root of the EJB Jar file. Each EJB can specify its own XML document, so this mechanism can be used multiple times per EJB Jar file.

    The Spring superclasses implement EJB lifecycle methods such as setSessionContext() and ejbCreate(), leaving the application developer to optionally implement the Spring onEjbCreate() method.

    When EJB 3.0 is available in public draft, we will offer support for the use of the Spring IoC container to provide richer Dependency Injection semantics in that environment. We will also integrate the JSR-220 O/R mapping API with Spring as a supported data access API.

    Using EJBs

    Spring also makes it much easier to use EJBs, as well as to implement them. Many EJB applications use the Service Locator and Business Delegate patterns. These are better than spraying JNDI lookups throughout client code, but their usual implementations have significant disadvantages. For example:

    • Typically code using EJBs depends on Service Locator or Business Delegate singletons, making it hard to test.
    • In the case of the Service Locator pattern used without a Business Delegate, application code still ends up having to invoke the create() method on an EJB home, and deal with the resulting exceptions. Thus it remains tied to the EJB API and the complexity of the EJB programming model.
    • Implementing the Business Delegate pattern typically results in significant code duplication, where we have to write numerous methods that simply call the same method on the EJB.

    For these and other reasons, traditional EJB access, as demonstrated in applications such as the Sun Adventure Builder and OTN J2EE Virtual Shopping Mall, can reduce productivity and result in significant complexity.

    Spring steps beyond this by introducing codeless business delegates. With Spring you'll never need to write another Service Locator, another JNDI lookup, or duplicate methods in a hand-coded Business Delegate unless you're adding real value.

    For example, imagine that we have a web controller that uses a local EJB. We'll follow best practice and use the EJB Business Methods Interface pattern, so that the EJB's local interface extends a non EJB-specific business methods interface. (One of the main reasons to do this is to ensure that synchronization between method signatures in local interface and bean implementation class is automatic.) Let's call this business methods interface MyComponent. Of course we'll also need to implement the local home interface and provide a bean implementation class that implements SessionBean and the MyComponent business methods interface.
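    A minimal sketch of these interfaces follows (each in its own source file). The doSomething() method is an assumption; the type names match the deployment descriptor example shown earlier:

    // Plain business methods interface: no EJB API dependencies, so web-tier code
    // can be coded purely against it
    public interface MyComponent {
        void doSomething();   // hypothetical business method
    }

    import javax.ejb.EJBLocalObject;

    // The EJB local interface adds nothing but the EJB plumbing type, keeping
    // method signatures automatically in sync with the bean implementation class
    public interface MyComponentLocal extends EJBLocalObject, MyComponent {
    }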

    With Spring EJB access, the only Java coding we'll need to do to hook up our web tier controller to the EJB implementation is to expose a setter method of type MyComponent on our controller. This will save the reference as an instance variable like this:

    private MyComponent myComponent;

    public void setMyComponent(MyComponent myComponent) {
    this.myComponent = myComponent;
    }

    We can subsequently use this instance variable in any business method.

    Spring does the rest of the work automatically, via XML bean definition entries like this. LocalStatelessSessionProxyFactoryBean is a generic factory bean that can be used for any EJB. The object it creates can be cast by Spring to the MyComponent type automatically.

    class="org.springframework.ejb.access.LocalStatelessSessionProxyFactoryBean">





    class = "com.mycom.myController"
    >

    There's a lot of magic happening behind the scenes, courtesy of the Spring AOP framework, although you aren't forced to work with AOP concepts to enjoy the results. The "myComponent" bean definition creates a proxy for the EJB, which implements the business methods interface. The EJB local home is cached on startup, so there's normally only a single JNDI lookup. (There is also support for retry on failure, so an EJB redeployment won't cause the client to fail.) Each time the EJB is invoked, the proxy invokes the create() method on the local EJB home and invokes the corresponding business method on the EJB.

    The myController bean definition sets the myComponent property of the controller class to this proxy.

    This EJB access mechanism delivers huge simplification of application code:

    • The web tier code has no dependence on the use of EJB. If we want to replace this EJB reference with a POJO or a mock object or other test stub, we could simply change the myComponent bean definition without changing a line of Java code
    • We haven't had to write a single line of JNDI lookup or other EJB plumbing code as part of our application.

    We can also apply the same approach to remote EJBs, via the similar org.springframework.ejb.access.SimpleRemoteStatelessSessionProxyFactoryBean factory bean. However, it's trickier to conceal the RemoteExceptions on the business methods interface of a remote EJB. (Spring does let you do this, if you wish to provide a client-side service interface that matches the EJB remote interface but without the "throws RemoteException" clause in the method signatures.)

    Testing

    As you've probably gathered, I and the other Spring developers are firm believers in the importance of comprehensive unit testing. We believe that it's essential that frameworks are thoroughly unit tested, and that a prime goal of framework design should be to make applications built on the framework easy to unit test.

    Spring itself has an excellent unit test suite. We've found the benefits of test first development to be very real on this project. For example, it has made working as an internationally distributed team extremely efficient, and users comment that CVS snapshots tend to be stable and safe to use.

    We believe that applications built on Spring are very easy to test, for the following reasons:

    • IoC facilitates unit testing
    • Applications don't contain plumbing code directly using J2EE services such as JNDI, which is typically hard to test
    • Spring bean factories or contexts can be set up outside a container

    The ability to set up a Spring bean factory outside a container offers interesting options for the development process. In several web application projects using Spring, work has started by defining the business interfaces and integration testing their implementation outside a web container. Only after business functionality is substantially complete is a thin layer added to provide a web interface.

    Since Spring 1.1.1, Spring has provided powerful and unique support for a form of integration testing outside the deployed environment. This is not intended as a substitute for unit testing or testing against the deployed environment. However, it can significantly improve productivity.

    The org.springframework.test package provides valuable superclasses for integration tests using a Spring container, but not dependent on an application server or other deployed environment. Such tests can run in JUnit--even in an IDE--without any special deployment step. They will be slower to run than unit tests, but much faster than Cactus tests or remote tests relying on deployment to an application server. Typically it is possible to run hundreds of tests hitting a development database--usually not an embedded database, but the product used in production--within seconds, rather than minutes or hours. Such tests can quickly verify correct wiring of your Spring contexts, and data access using JDBC or an ORM tool, such as correctness of SQL statements. For example, you can test your DAO implementation classes (a sketch appears after the list below).

    The enabling functionality in the org.springframework.test package includes:

    • The ability to populate JUnit test cases via Dependency Injection. This makes it possible to reuse Spring XML configuration when testing, and eliminates the need for custom setup code for tests.
    • The ability to cache container configuration between test cases, which greatly increases performance where slow-to-initialize resources such as JDBC connection pools or Hibernate SessionFactories are concerned.
    • Infrastructure to create a transaction around each test method and roll it back at the conclusion of the test by default. This makes it possible for tests to perform any kind of data access without worrying about the effect on the environments of other tests. In my experience across several complex projects using this functionality, the productivity and speed gain of such a rollback-based approach is very significant.
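    A hedged sketch of such a test follows, reusing the ExampleDataAccessObject from the earlier example; the test class name and the configuration file location are assumptions:

    import example.ExampleDataAccessObject;

    import org.springframework.test.AbstractTransactionalDataSourceSpringContextTests;

    public class ExampleDaoIntegrationTests extends AbstractTransactionalDataSourceSpringContextTests {

        private ExampleDataAccessObject dao;

        // Populated by Dependency Injection: the superclass autowires test properties
        // by type from the loaded application context
        public void setDao(ExampleDataAccessObject dao) {
            this.dao = dao;
        }

        protected String[] getConfigLocations() {
            // Assumed location; typically reuses the application's own bean definitions
            return new String[] { "classpath:applicationContext-test.xml" };
        }

        public void testUserTableIsReachable() {
            // The inherited JdbcTemplate hits the same DataSource as the DAO; everything
            // done in this method is rolled back automatically when the test completes
            assertNotNull(dao);
            assertTrue(jdbcTemplate.queryForInt("SELECT COUNT(0) FROM USER") >= 0);
        }
    }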

    Who's using Spring?

    There are many production applications using Spring. Users include investment and retail banking organizations, well-known dotcoms, global consultancies, academic institutions, government departments, defence contractors, several airlines, and scientific research organizations (including CERN).

    Many users use all parts of Spring, but some use components in isolation. For example, a number of users begin by using our JDBC or other data access functionality.

    Roadmap

    Since the first version of this article, in October 2003, Spring has progressed through its 1.0 final release (March 2004) and version 1.1 (September 2004) to 1.2 final (May 2005). We believe in a philosophy of "release early, release often," so maintenance releases and minor enhancements are typically released every 4-6 weeks.

    Since that time enhancements include:

    • The introduction of a remoting framework supporting multiple protocols including RMI and various web services protocols
    • Support for Method Injection and other IoC container enhancements such as the ability to manage objects obtained from calls to static or instance factory methods
    • Integration with more data access technologies, including TopLink and Hibernate 3 as well as Hibernate 2 in the recent 1.2 release
    • Support for declarative transaction management configured by Java 5.0 annotations (1.2), eliminating the need for XML metadata to identify transactional methods
    • Support for JMX management of Spring-managed objects (1.2).
    • Integration with Jasper Reports, the Quartz scheduler and AspectJ
    • Integration with JSF as a web layer technology

    We intend to continue with rapid innovation and enhancement. The next major release will be 1.3 (final release expected Q3, 2005). Planned enhancements include:

    • XML configuration enhancements (planned for release 1.3), which will allow custom XML tags to extend the basic Spring configuration format by defining one or more objects in a single, validated tag. This not only has the potential to simplify typical configurations significantly and reduce configuration errors, but will be ideal for developers of third-party products that are based on Spring.
    • Integration of Spring Web Flow into the Spring core (planned for release 1.3)
    • Support for dynamic reconfiguration of running applications
    • Support for the writing of application objects in languages other than Java, such as Groovy, Jython or other scripting languages running on the Java platform. Such objects will benefit from the full services of the Spring IoC container and will allow dynamic reloading when the script changes, without affecting objects that were given references to them by the IoC container.

    As an agile project, Spring is primarily driven by user requirements. So we don't develop features that no one has a use for, and we listen carefully to our user community.

    Spring Modules is an associated project, led by Rob Harrop of Interface21, which extends the reach of the Spring platform to areas that are not necessarily integral to the Spring core, while still valuable to many users. This project also serves as an incubator, so some of this functionality will probably eventually migrate into the Spring core. Spring Modules presently includes areas such as integration with the Lucene search engine and OSWorkflow workflow engine, a declarative, AOP-based caching solution, and integration with the Commons Validator framework.

    Interestingly, although the first version of this article was published six months before the release of Spring 1.0 final, almost all the code and configuration examples would still work unchanged in today's 1.2 release. We are proud of our excellent record on backward compatibility. This demonstrates the ability of Dependency Injection and AOP to deliver a non-invasive API, and also indicates the seriousness with which we take our responsibility to the community to provide a stable framework to run vital applications.

    Summary

    Spring is a powerful framework that solves many common problems in J2EE. Many Spring features are also usable in a wide range of Java environments, beyond classic J2EE.

    Spring provides a consistent way of managing business objects and encourages good practices such as programming to interfaces, rather than classes. The architectural basis of Spring is an Inversion of Control container based around the use of JavaBean properties. However, this is only part of the overall picture: Spring is unique in that it uses its IoC container as the basic building block in a comprehensive solution that addresses all architectural tiers.

    Spring provides a unique data access abstraction, including a simple and productive JDBC framework that greatly improves productivity and reduces the likelihood of errors. Spring's data access architecture also integrates with TopLink, Hibernate, JDO and other O/R mapping solutions.

    Spring also provides a unique transaction management abstraction, which enables a consistent programming model over a variety of underlying transaction technologies, such as JTA or JDBC.

    Spring provides an AOP framework written in standard Java, which enables declarative transaction management and other enterprise services to be applied to POJOs and - if you wish - lets you implement your own custom aspects. This framework is powerful enough to enable many applications to dispense with the complexity of EJB, while enjoying key services traditionally associated with EJB.

    Spring also provides a powerful and flexible MVC web framework that is integrated into the overall IoC container.

    More information

    See the following resources for more information about Spring:

    We pride ourselves on excellent response rates and a helpful attitude to queries on the forums and mailing lists. We hope to welcome you into our community soon!

    About the Author

    Rod Johnson has almost ten years experience as a Java developer and architect and has worked with J2EE since the platform emerged. He is the author of the best-selling Expert One-on-One J2EE Design and Development (Wrox, 2002), and J2EE without EJB (Wrox, 2004, with Juergen Hoeller) and has contributed to several other books on J2EE. Rod serves on two Java specification committees and is a regular conference speaker. He is CEO of Interface21, an international consultancy that leads Spring Framework development and offers expert services on the Spring Framework and J2EE in general.

    (http://www.theserverside.com/tt/articles/content/SpringFramework/article.html)