UTF-8 all the way through the stack

July 26, 2010 at 3:11 pm

We need to look at UTF-8 support in the following areas:

  1. URLs
  2. Apache
  3. HTML
  4. Javascript
  5. POST data
  6. File download (Content-Disposition)
  7. JSPs
  8. Java code
  9. Tomcat
  10. Oracle
  11. File system

I’ll go through each of these areas and explain how well they are supported by default and what changes you might need to make to support UTF-8 in each area.

URLs

URLs should only contain ASCII characters. That is quite restrictive if you want to use Chinese characters, for instance, so some encoding is needed here. If you’ve got a file with a Chinese character in its name and you want to link to it, you need to do this:

“中.doc” ->  “%E4%B8%AD.doc”

Thankfully this can be done with a bit of Java:

java.net.URLEncoder.encode("中.doc", "UTF-8");

So, whenever you generate something destined for the address bar, such as a link or a redirect, you must URL encode the data. You don’t have to detect non-ASCII characters first, as encoding plain old ASCII links is harmless: they don’t get changed, as you can see from the “.doc” ending in the example above.
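
A minimal runnable sketch of the encoding call (the class and method names here are mine):

```java
import java.net.URLEncoder;

public class UrlEncodeDemo {
    // Percent-encode a file name so it is safe to put in a URL
    static String encodeForUrl(String name) throws Exception {
        return URLEncoder.encode(name, "UTF-8");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(encodeForUrl("中.doc")); // prints %E4%B8%AD.doc
    }
}
```

One caveat: URLEncoder really implements form encoding, so it turns spaces into “+” rather than “%20”; for file names without spaces, as here, the result is the same.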

Apache

Generally you don’t need to worry about Apache as it shouldn’t be messing with your HTML or URLs. However, if you are doing some proxying with mod_proxy then you might need to have a think about this. We use mod_proxy to proxy from Apache through to Tomcat. If you’ve got encoded characters in the URL that you need to turn into a query string for your underlying app then you’re going to hit a strange little problem.

If you have a URL coming into Apache that looks like this:

http://mydomain/%E4%B8%AD.doc

and you have a mod_rewrite/proxy rule like this:

RewriteRule ^/(.*) http://mydomain:8080/filedownload/?filename=$1 [QSA,L,P]

Unfortunately the $1 is going to get mangled during the rewrite. QSA (Query String Append) actually deals with these characters just fine and sends them through untouched, but when you capture part of the URL, as with my $1 here, Apache does some unescaping of its own and treats the bytes as ISO-8859-1. They are UTF-8, not ISO-8859-1, so the result is wrong. So, to keep our special characters in UTF-8, we escape them back again:

RewriteMap escape int:escape
RewriteRule ^/(.*) http://mydomain:8080/filedownload/?filename=${escape:$1} [QSA,L,P]

Take a look at your rewrite logs to see if this is working.

HTML

HTML support for UTF-8 is good; you just need to make sure you set the character encoding properly on your pages. This should be as simple as a bit of code in the HEAD of your page:

<meta http-equiv="Content-Type" content="text/html; charset=utf-8">

You should be able to write out UTF-8 characters for real into the page without any special encoding.

Javascript

Javascript supports UTF-8 characters very well, so as long as you don’t use escape() (use encodeURIComponent() instead, which encodes as UTF-8), characters your users enter shouldn’t get mangled. We also use AJAX to do some functions in our application, so you need to think about that as well, but again, it should just work.

All of the above only holds true if you set the character encoding right on your surrounding HTML.

POST data

Getting POST data from the user in the right format is simple too. As long as your HTML has the right encoding then you should be OK.
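
What actually travels over the wire is percent-encoded bytes in the page’s charset; a small sketch of the round trip, assuming a UTF-8 page on both ends:

```java
import java.net.URLDecoder;
import java.net.URLEncoder;

public class PostDataDemo {
    public static void main(String[] args) throws Exception {
        // A browser on a UTF-8 page submits form fields as UTF-8 percent-escapes...
        String wire = URLEncoder.encode("中", "UTF-8"); // "%E4%B8%AD"

        // ...and the server only recovers the original if it decodes as UTF-8 too
        String good = URLDecoder.decode(wire, "UTF-8");      // "中"
        String bad = URLDecoder.decode(wire, "ISO-8859-1");  // three wrong characters

        System.out.println(wire + " -> " + good + " (not " + bad + ")");
    }
}
```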

File download (Content-Disposition)

If you want to serve files for download from your app, as we obviously do with Files.Warwick, then you’ll need to understand how browsers deal with non-ASCII characters in file names when downloading. Unfortunately the standard is not exactly well defined, as no one really thought about UTF-8 file names until recently.

Internet Explorer supports URL-encoded file names, but Firefox supports a rather strange Base64-encoded value for high-byte file names, so something like this should do the job:

String userAgent = request.getHeader("User-Agent");
String encodedFileName;

if (userAgent != null && (userAgent.contains("MSIE") || userAgent.contains("Opera"))) {
	// IE and Opera understand URL-encoded (percent-escaped) file names
	encodedFileName = URLEncoder.encode(node.getName(), "UTF-8");
} else {
	// Firefox understands the RFC 2047 encoded-word form: =?UTF-8?B?<base64>?=
	encodedFileName = "=?UTF-8?B?"
			+ new String(Base64.encodeBase64(node.getName().getBytes("UTF-8")), "UTF-8")
			+ "?=";
}

response.setHeader("Content-Disposition", "attachment; filename=\"" + encodedFileName + "\"");

Obviously you can tweak the user agent detection to be a bit smarter than this.
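
An alternative that avoids user-agent sniffing altogether is the filename* parameter (standardised as RFC 5987), which newer browsers understand; this is my suggestion, not part of the original approach. A sketch that percent-encodes the UTF-8 bytes by hand:

```java
public class Rfc5987 {
    // Percent-encode every byte outside RFC 5987's attr-char set
    static String encodeRfc5987(String s) throws Exception {
        StringBuilder out = new StringBuilder();
        for (byte b : s.getBytes("UTF-8")) {
            int c = b & 0xFF;
            boolean safe = (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z')
                    || (c >= '0' && c <= '9') || "!#$&+-.^_`|~".indexOf(c) >= 0;
            if (safe) {
                out.append((char) c);
            } else {
                out.append(String.format("%%%02X", c));
            }
        }
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        String header = "attachment; filename*=UTF-8''" + encodeRfc5987("中.doc");
        System.out.println(header); // attachment; filename*=UTF-8''%E4%B8%AD.doc
    }
}
```

You would set that string as the Content-Disposition header value, optionally alongside a plain filename="..." fallback for older browsers.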

JSPs

UTF-8 support in JSPs is pretty much a one liner.

<%@ page language="java" pageEncoding="utf-8" contentType="text/html;charset=utf-8" %>

Include that at the top of every single JSP, perhaps in a prelude.jsp file, and you’re away.

Java code

As long as your source strings are properly encoded then generally you can rely on Java to keep your UTF-8 encoded input intact. However, be careful which String functions you perform on your UTF-8 data. Be sure to do things like this:

myStr.getBytes("UTF-8") rather than just myStr.getBytes()

If you don’t, then you’ll get the platform default encoding, most likely ISO-8859-1, instead. If for some reason you cannot get your input data to be UTF-8, and it is coming in with a different encoding, you could do something like this to convert it to UTF-8:

String myUTF8 = new String(my8859.getBytes("ISO-8859-1"), "UTF-8");
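
A runnable sketch of both points (the byte values shown are the UTF-8 encoding of 中):

```java
import java.util.Arrays;

public class EncodingDemo {
    public static void main(String[] args) throws Exception {
        String s = "中";

        // Naming the charset gives you the UTF-8 bytes regardless of platform default
        byte[] utf8 = s.getBytes("UTF-8");
        System.out.println(Arrays.toString(utf8)); // [-28, -72, -83]

        // The repair idiom above works because an ISO-8859-1 round trip
        // preserves the raw bytes exactly
        String my8859 = new String(utf8, "ISO-8859-1");
        String myUTF8 = new String(my8859.getBytes("ISO-8859-1"), "UTF-8");
        System.out.println(s.equals(myUTF8)); // true
    }
}
```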

Debugging can be fun with high byte characters as generally logging to a console isn’t going to show you the characters you are expecting. If you did this:

System.out.println(new String(new byte[] { -28, -72, -83 }, "UTF-8"));

Then you’d probably just see a ? rather than the Chinese character that it really should be. However, you can make log4j log UTF-8 messages. Just add

<param name="Encoding" value="UTF-8"/>

To the appender in your log4j.xml config. Or this:

log4j.appender.myappender.Encoding=UTF-8

To your log4j.properties file. Even then, you’ll only see the UTF-8 data properly if you view the log file in an editor/viewer that can display UTF-8 (Windows Notepad is OK, for instance).

Tomcat

By default Tomcat will decode everything as ISO-8859-1. You can in theory override this by setting the incoming encoding of the HttpServletRequest to UTF-8, but the encoding is fixed as soon as any of the request has been read, so chances are you won’t be able to manually do:

request.setCharacterEncoding("UTF-8")

early enough to have an effect. So instead you can tell Tomcat you want it to run in UTF-8 mode by default. Just add the following to the Connector you want UTF-8 on in your server.xml config file in Tomcat.

URIEncoding="UTF-8"

Not doing this has the fun quirk that if you have a request like this:

/test.htm?highByte=%E4%B8%AD

If you did request.getQueryString() you’d get the raw string "highByte=%E4%B8%AD", but if you did request.getParameter("highByte") then you’d get the ISO-8859-1 decoded value instead, which would not be right. Sigh.
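
If you can’t touch server.xml, you can repair the mis-decoded parameter by hand with the same round-trip idiom from the Java section; a sketch that simulates what Tomcat hands you without URIEncoding set:

```java
public class TomcatParamFix {
    // Undo the default ISO-8859-1 decoding of a UTF-8 query parameter
    static String fix(String mangled) throws Exception {
        return new String(mangled.getBytes("ISO-8859-1"), "UTF-8");
    }

    public static void main(String[] args) throws Exception {
        // Without URIEncoding="UTF-8", the bytes of %E4%B8%AD are decoded
        // one-per-character as ISO-8859-1, yielding three wrong characters
        String mangled = new String(new byte[] { -28, -72, -83 }, "ISO-8859-1");
        System.out.println(fix(mangled)); // the intended 中
    }
}
```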

Oracle

You could just URL encode all of your data and put it into the database in ASCII like you always used to. However, that doesn’t make for very readable data. There are two options here, though I’ve only tried one of them.

  1. Set the default character encoding of your Oracle database to UTF-8. However, it is set on a per-server basis, not a per-schema basis, so your whole server would be affected.
  2. Use NVARCHAR2 fields instead of VARCHAR2 fields and you can store real UTF-8 data.

We went for option 2 as we have a shared Oracle server. First of all, convert all fields that you want to store UTF-8 data in from VARCHAR2s to NVARCHAR2s. Be careful as I don’t think you can change back!

You then need to somehow tell your JDBC code that it needs to send data in a form that the NVARCHAR2 fields can understand. There are a couple of ways of doing this too:

  1. Set the defaultNChar property on the connection to true.
  2. Use the setFormOfUse() method that is an Oracle-specific extension to the PreparedStatement.

I went for option 1 as the problem with option 2 is that you have to somehow get at the Oracle specific connection or prepared statement within your Java code. This is not fun as you’ll often be using a connection pool that will hide away these details.
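
A sketch of option 1; the connection URL and credentials are illustrative, and the exact property name ("oracle.jdbc.defaultNChar" here) should be checked against your driver version’s documentation:

```java
import java.util.Properties;

public class NCharConnection {
    // Build connection properties asking the Oracle driver to send
    // string binds in NCHAR/NVARCHAR2 form
    static Properties ncharProps(String user, String password) {
        Properties props = new Properties();
        props.setProperty("user", user);
        props.setProperty("password", password);
        props.setProperty("oracle.jdbc.defaultNChar", "true");
        return props;
    }

    public static void main(String[] args) throws Exception {
        Properties props = ncharProps("scott", "tiger");
        // With the Oracle driver on the classpath, you would then connect with:
        // java.sql.Connection conn = java.sql.DriverManager.getConnection(
        //         "jdbc:oracle:thin:@dbhost:1521:orcl", props);
        System.out.println(props.getProperty("oracle.jdbc.defaultNChar"));
    }
}
```

The same flag can usually also be set JVM-wide with -Doracle.jdbc.defaultNChar=true, which is handy when a connection pool hides the Properties from you.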

File system

File system support for UTF-8 characters is again pretty good, but you are sometimes going to have issues viewing file listings. I just couldn’t get a UTF-8 file name to display properly over a PuTTY SSH connection. Through a simple Java test program I could write and read back a UTF-8 file name on our Solaris 10 box, but all I could ever actually see when doing an “ls” was ?????.doc. So for the sake of maintainability of the file system I went for a URL-encoded version of the file name. This isn’t ideal, but it works.
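
A sketch of that URL-encoded fallback: store the encoded name on disk, decode it for display (the helper names are mine):

```java
import java.net.URLDecoder;
import java.net.URLEncoder;

public class SafeFileNames {
    // Name used on disk: pure ASCII, so every shell and tool can display it
    static String toDiskName(String displayName) throws Exception {
        return URLEncoder.encode(displayName, "UTF-8");
    }

    // Name shown to users: the real UTF-8 characters
    static String toDisplayName(String diskName) throws Exception {
        return URLDecoder.decode(diskName, "UTF-8");
    }

    public static void main(String[] args) throws Exception {
        String disk = toDiskName("中.doc");
        System.out.println(disk);                // %E4%B8%AD.doc
        System.out.println(toDisplayName(disk)); // 中.doc
    }
}
```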

Conclusion

As you can see, there is quite a lot of work involved in supporting UTF-8 throughout. A lot of my time was spent researching as my understanding of encoding issues wasn’t great. Now that I’ve put together this guide, I hope all of our apps can start to work towards full UTF-8 support.

Of course the above guide is quite specific to my experience in the app I was dealing with and the environment I work in, so your experiences might be more or less painful. :)
