RELEASE NOTES

Java Development Kit 1.1.7B for SCO Operating Systems


Contents

License
Introduction
Changes in This Release
Supported Platforms
Packages
The JRE Package
Installation
Documentation
Using JDK 1.1.7B for SCO
Extensions to Sun JDK 1.1.7B
The JIT Compiler
Threads: Green or Native
Native Methods
Debugging Native Methods
JDBC
Additional Implementation Notes
Fonts
Conformance
Known Problems

License

Please read the license.txt file for the license terms of this SCO product.

Introduction

These are the release notes for the Java(TM) Development Kit (JDK), Release 1.1.7B for SCO Operating Systems, hereafter referred to as "SCO JDK 1.1.7B".

This product is a full implementation of Sun Microsystems' Java Development Kit 1.1.7B. It enables SCO OEMs, ISVs, and end users to develop and run applets and applications that conform to the Java 1.1 Core API.

SCO JDK 1.1.7B has the same functionality as Sun's baseline JDK 1.1.7B release.

In addition, SCO JDK 1.1.7B contains fixes and internal enhancements derived from Sun's Solaris-based JDK 1.1.3A and 1.1.3D releases. This release also incorporates elements of the native threads support found in Sun's JDK 1.2. Finally, it incorporates a fix from Sun's JDK 1.1.8 for a bytecode verifier security problem.

Changes in This Release

SCO JDK 1.1.7B is a full update release to the earlier SCO JDK 1.1.7A release. Compared to SCO JDK 1.1.7A, this release contains:

Supported Platforms

The SCO JDK 1.1.7B product will run on the following versions of SCO's operating system platforms, with the indicated provisos:

SCO JDK 1.1.7B cannot be used with older versions of the OSRcompat and UW2compat packages, such as those that were released with UnixWare 7.0 or UnixWare 7.1.

SCO JDK 1.1.7B is not supported on older versions of these SCO operating systems, such as OpenServer 5.0.0, UnixWare 2.1.2, or UnixWare 7.0.1.

For the most part the JDK is identical for all three platforms, and everything in these release notes applies to all three platforms unless otherwise noted.

Packages

SCO JDK 1.1.7B is distributed in the following packages:

Package jdk117 includes the essential execution engine of Java, that is, what you need to run Java applications:

Package jdk117 also includes some Java development tools:

Finally, package jdk117 also includes additional components to support distributed applications and database access:

Package jdk117pls is an optional supplement to jdk117 that includes several kinds of additional materials useful in Java development work:

In addition to Sun's documentation in the jdk117pls package, SCO JDK 1.1.7B provides UnixWare 7 users with documentation integrated with SCOhelp in these packages:

The JRE Package

Not included with the SCO JDK 1.1.7B distribution material, but available separately from SCO if required, is the SCO JDK 1.1.7B JRE package jre117 (size about 7MB).

This package matches Sun's product configuration of the Java Runtime Environment (JRE). The JRE is the minimum standard Java platform for running Java programs. It contains the Java virtual machine, the Java core classes, and supporting files. The JRE does not contain any of the development tools (such as appletviewer or javac) or classes that pertain only to a development environment. It also packages the Java core classes in a different configuration, with different file names. In addition, it uses the jre command rather than java to execute the Java virtual machine. See the Sun documentation for full details, or browse jre.html in the JRE installation directory.

The purpose of the JRE package is to allow independent software vendors (ISVs) and others to bundle it with their Java applications, if desired. That way, an application and the Java version on which it has been tested can be installed together on customer machines, rather than relying on whatever Java version happens to be installed on those machines.

The jre117 package, unlike the jdk117* packages, can be installed anywhere within a directory structure, as some_dir/opt/jre-1.1.7/, based on input given by the installer during the pkgadd command. This allows ISVs to control where the JRE is placed. The default for some_dir/ is /.

Note again that the JRE package is intended for use by ISVs; if you are an end user, you should simply use the regular jdk117 package (and jdk117pls package if desired) instead. Note also that the regular jdk117 package also contains a version of the jre command, so that scripts that are written to work when the JRE is used will also work when the JDK is used.

Installation

There are a few differences in how the JDK is installed on each platform:

OpenServer

If the UDK Compatibility Module for OpenServer (package name OSRcompat) is not already installed on your system, you need to mount the UDK CD-ROM and install the package OSRcompat:

# mount -r /dev/cd0 /mnt
# pkgadd -d /mnt OSRcompat

When that installation is complete, install the core JDK (package name jdk117):

# pkgadd -d /mnt jdk117

Then you can install the JDK additional materials, if desired (package name jdk117pls):

# pkgadd -d /mnt jdk117pls

UnixWare 2

If the UDK Compatibility Module for UnixWare (package name UW2compat) is not already installed on your system, you need to mount the UDK CD-ROM and install the package UW2compat:

# mount -F cdfs -r /dev/cdrom/* /mnt
# pkgadd -d /mnt UW2compat

If your machine has more than one CD-ROM drive, specify the CD-ROM device exactly (e.g. /dev/cdrom/c0b0t6l0).

When that installation is complete, install the core JDK (package name jdk117):

# pkgadd -d /mnt jdk117

Then you can install the JDK additional materials, if desired (package name jdk117pls):

# pkgadd -d /mnt jdk117pls

The graphical desktop tool App_Installer may also be used to install these packages.

Note that you may need to increase certain system memory limits; see Using JDK 1.1.7B for SCO below.

UnixWare 7

Mount the CD-ROM and install the JDK (package name jdk117):

# mount -F cdfs -r /dev/cdrom/* /mnt
# pkgadd -d /mnt jdk117

If your machine has more than one CD-ROM drive, specify the CD-ROM device exactly (e.g. /dev/cdrom/c0b0t6l0).

Similarly for the other packages:

# pkgadd -d /mnt jdk117pls
# pkgadd -d /mnt jdkdoc
# pkgadd -d /mnt jdkman

Installation Location and Multiple JDK Versions

While the JDK is accessed through the /usr/java pathname, installation actually places its contents into /opt/jdk-1.1.7/. Then a symbolic link is made from /usr/java to /opt/jdk-1.1.7/.

If you already have SCO JDK 1.1.7A installed on your system, SCO JDK 1.1.7B may be installed on top of it. There is no need to remove the JDK 1.1.7A packages first, although you can do that if you like. You cannot have both JDK 1.1.7A and JDK 1.1.7B installed on a system at the same time, unless you manually rename the existing /opt/jdk-1.1.7/ to something else before installing JDK 1.1.7B.

You can have multiple "point" versions of JDK 1.1.x, such as SCO JDK 1.1.3w and 1.1.7B, installed on a system at the same time. Installation of JDK 1.1.7B will not automatically remove your previous JDK point versions from the system. The only thing that is affected by the JDK 1.1.7B installation is the /usr/java link. By default, /usr/java will always point to the latest JDK.

After JDK 1.1.7B is installed, any previous link from /usr/java to an older version (for example, JDK 1.1.3) is removed, and /usr/java will point to JDK 1.1.7B. If you later remove JDK 1.1.7B from the system and another version of the JDK is still installed, for example JDK 1.1.3, the /usr/java link will be restored to point to JDK 1.1.3.

If you do want to access and use an alternate JDK on your system, simply invoke it by its /opt pathname, for example, /opt/jdk-1.1.3/bin/java will start up the virtual machine.

Documentation

Documentation for the JDK 1.1.7B is contained in the jdk117pls package and, for UnixWare 7 only, also in the jdkdoc and jdkman packages. All of the documentation is in HTML format and may be viewed with any browser you have installed on your system.

Document                                            File/Link Name
these release notes                                 ReleaseNotes.html
Sun documentation for JDK 1.1.7B                    docs/index.html
Sun and SCO demos for JDK 1.1.7B                    demo/
documentation on SCO's JDBC implementation
    and SCO's SQL-Retriever product                 see JDBC section

Note that the documentation included in the jdk117pls package is not integrated into OpenServer SCOhelp or UnixWare 2 Dynatext Library graphical help systems. However, the documentation included in the jdkdoc and jdkman packages is integrated with SCOhelp on the UnixWare 7 platform.

Also note that much of this documentation is from Sun, but should be read in an SCO context. For instance, for "Solaris" read any of the three SCO platforms (UnixWare 7, OpenServer, or UnixWare 2). For customer support, any of the normal SCO support mechanisms should be used, rather than contacting Sun.

Using JDK 1.1.7B for SCO

In general, use of SCO JDK 1.1.7B follows what is described in the Sun documentation.

After the JDK packages are installed, you probably want to set PATH in your .profile to include the directory where the JDK commands are installed, /usr/java/bin. On UnixWare 7 systems, this will usually have been done for you already when your account was created.

On UnixWare 2, applications of significant size are likely to get "out of memory" errors with the default memory limits provided by the operating system. To fix this, do the following as root:

# /etc/conf/bin/idtune -m HVMMLIM 0x7FFFFFFF
# /etc/conf/bin/idtune -m HDATLIM 0x7FFFFFFF
# /etc/conf/bin/idtune -m SVMMLIM 0x7FFFFFFF
# /etc/conf/bin/idtune -m SDATLIM 0x7FFFFFFF
# /etc/conf/bin/idbuild
and then reboot to rebuild the kernel.

Extensions to Sun JDK 1.1.7B

SCO has provided only one functional extension to Sun's JDK 1.1.7B, and it is useful only on the UnixWare 7 platform.

Java Classes as First-Class Executables

When javac is used to compile one or more classes, it will set the execute permissions bit on for the .class file if the class contains a main method. (This happens on all three platforms.)
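
For example, compiling a minimal class like the following (the class name Hello is illustrative) with javac produces a Hello.class file with its execute permission bit set, because the class defines a main method:

// Hello.java -- illustrative example; any class with a main method qualifies.
public class Hello {
    public static void main(String[] args) {
        System.out.println("Hello from an executable .class file");
    }
}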

Then, on UnixWare 7 only, you can execute a Java application simply by giving the name of the main class:

$ foo.class

UnixWare 7 will look for foo.class using the PATH environment variable, just as it would for any other executable. foo.class must also be in the CLASSPATH, as in normal execution.

Furthermore, by making a hard link or symbolic link such as

$ ln -s foo.class foo

you will be able to execute the application simply by saying

$ foo

For instance, this gives you the ability to let users invoke utilities without knowing that the utilities are written in Java. For this to work, you must keep the class file intact, with its original name. That is, you have to keep foo.class somewhere, and then you can make a hard or soft link of foo to it. foo can be in another directory, but you cannot change the base name; that is, you cannot link bar to it, because once the system invokes the JVM, it expects to find a foo.class file. For this same reason you also cannot simply rename foo.class to foo, because the JVM will still need a foo.class. (You could copy foo.class to foo, but that of course wastes disk space compared to a link.)

Of course, you can always use the traditional way of executing a Java application:

$ java foo

In this case, java must be in the PATH, and foo.class must be in the CLASSPATH.

The JIT Compiler

Historical note: There was no JIT compiler in the SCO JDK 1.1 or 1.1.3 products. A JIT compiler was available in SCO JDK 1.1.3u for UnixWare 7 and UnixWare 2, but it was a separate package that required a Java WorkShop or Java Studio license. Due to complications with this license, no JIT compiler was available in SCO JDK 1.1.3w or SCO JDK 1.1.7A.

Now, the JIT compiler is part of the base SCO JDK product and package jdk117. It is always there and is always executing unless it is explicitly turned off by the user. It does not have to be separately licensed or installed, and is no longer tied to the Java WorkShop and Java Studio products. It is also available for OpenServer (5.0.5 only) for the first time.

A just-in-time (JIT) compiler improves Java performance by, as the program is executing (hence the name), compiling Java method bytecode to native x86 machine code. On subsequent executions of the method the machine code is executed instead of the bytecode being re-interpreted. By default, this JIT compiler compiles or inlines every method in the program, the first time the method is invoked (except for a few primordial methods which cannot be compiled until the JIT compiler itself is loaded). Dynamically loaded classes are compiled after they are loaded. Class initializers are not compiled. If for some reason a method cannot be compiled, it is interpreted. Jitted code is not saved across sessions of the JVM.

How much the JIT improves performance depends greatly upon the nature of the application being run. Applications that are compute-intensive benefit the most, while those that are dominated by object creation/garbage collection, graphics, or networking tend to benefit less. The only way to know the benefit for your application is to measure it.
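
One simple approach is to time a compute-bound program with and without the JIT. The class below is an illustrative micro-benchmark sketch (the name Spin and the iteration counts are arbitrary); run it once as java Spin and once as JAVA_COMPILER= java Spin (see Controlling the JIT below), and compare the reported times:

// Spin.java -- illustrative micro-benchmark sketch for comparing JIT on versus off.
public class Spin {
    // A compute-intensive method; this is the kind of code a JIT helps most.
    static long sumOfSquares(int n) {
        long total = 0;
        for (int i = 1; i <= n; i++) {
            total += (long) i * i;
        }
        return total;
    }

    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        long result = 0;
        // Call the method repeatedly so the one-time compilation cost is amortized.
        for (int pass = 0; pass < 20; pass++) {
            result = sumOfSquares(1000000);
        }
        long elapsed = System.currentTimeMillis() - start;
        System.out.println("result=" + result + " elapsed=" + elapsed + "ms");
    }
}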

Controlling the JIT

The JIT compiler runs by default. If you want to suppress running of the JIT (either to do performance analysis or because you suspect it may be causing a problem), you can turn it off in two ways: by setting the JAVA_COMPILER environment variable to the empty value, or by setting the java.compiler property to the empty value. Examples:

$ java hello			# JIT will run

$ JAVA_COMPILER= java hello	# JIT will not run

$ java -Djava.compiler= hello	# JIT will not run
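
As a further check, a program can read the java.compiler system property at run time; the exact value reported is implementation dependent, so the sketch below (the class name ShowCompiler is illustrative) only distinguishes an empty setting from a non-empty one:

// ShowCompiler.java -- illustrative check of the java.compiler system property.
public class ShowCompiler {
    public static void main(String[] args) {
        String jit = System.getProperty("java.compiler");
        if (jit == null || jit.length() == 0) {
            System.out.println("No JIT compiler configured");
        } else {
            System.out.println("JIT compiler setting: " + jit);
        }
    }
}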

The environment variable JIT_ARGS provides further control over execution of the JIT. You can define JIT_ARGS as a string containing one or more of these options:

trace                     show which methods are compiled or inlined
compile(method-list)      compile only these methods
exclude(method-list)      do not compile these methods
dump(method-list)         dump the generated assembly code

The method-list is a comma-separated list of method names, possibly qualified by class names. The class name part is separated by slashes. If only a class name is given, all methods of that class are selected. If no method-list is given, the option applies to all methods. Examples:

$ JIT_ARGS=trace java hello	# show what the JIT does

$ JIT_ARGS="dump(hello)" java hello	# show how the JIT does it (lots of output!)

$ JIT_ARGS="trace compile(myclass.)" java myclass	# only jit myclass's methods

$ JIT_ARGS="trace exclude(java/lang/System.exit)" java myclass	# jit everything except java.lang.System.exit()
In particular, JIT_ARGS=trace is useful in verifying that the JIT is installed correctly and is actually executing.

JIT Heuristics

All JIT compilers come with a trade-off: the cost of the time it takes to stop execution of a method and do the compilation to machine code (once per method per program) versus the benefit of the time saved by subsequent invocations of that method running as machine code rather than being interpreted. For example, for a short method that is only called once, this trade-off is obviously a loss, while for a long method called many times, this trade-off is clearly a win.

As stated in The JIT Compiler above, by default the JIT compiles every method the first time it sees it. The environment variable JIT_MIN_TIMES can be set to a non-negative integer n to indicate a different approach: a method will not be jitted until it has already executed at least n times. In other words, this heuristic assumes that methods that have already executed many times will tend to be the ones that keep executing frequently for the rest of the program run. An example of its use would be:

$ JIT_MIN_TIMES=40 java my_app   # methods will be jitted after 40th time called

For some short- or medium-lived applications, use of JIT_MIN_TIMES may improve the performance of the JIT. One example is the javac Java language translator, which has now been set up to use the JIT heuristics mechanism. The only way to find out for your application is to experiment with it, using different minimum count values. However for long-running, server-oriented applications, the default strategy of always jitting when a method is first called is likely to be the best.

Threads: Green or Native

Threads are an essential part of the Java language and API set, and every Java implementation must decide how to implement Java threads. The SCO JDK 1.1.7B, like many other implementations, supports two alternate internal threads models: "green threads" and "native threads". Note that Java application code does not change at all from one model to the other; the threads model is an internal, "under the covers" difference, although one that can have an important impact on the behavior and performance of a Java application.
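
For example, a simple multithreaded program such as the following (the class name Counter is illustrative) runs from identical source under either model; only the environment variable or command-line flag differs:

// Counter.java -- illustrative example; identical source runs under green or native threads.
public class Counter extends Thread {
    private String label;

    Counter(String label) {
        this.label = label;
    }

    public void run() {
        for (int i = 0; i < 5; i++) {
            System.out.println(label + ": " + i);
            try {
                sleep(100);   // give the other thread a chance to run
            } catch (InterruptedException e) {
                return;
            }
        }
    }

    public static void main(String[] args) {
        new Counter("first").start();
        new Counter("second").start();
    }
}

Run it as java Counter for green threads or as java -native Counter for native threads; the Java source does not change.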

"Green threads" refers to a model where the Java virtual machine itself creates, manages, and context switches all Java threads within one operating system process. No operating system threads library is used.

"Native threads" refers to a model where the Java virtual machine creates and manages Java threads using the operating system threads library - named libthread on UnixWare - and each Java thread is mapped to one threads library thread.

In SCO JDK releases prior to JDK 1.1.3w only the green threads model was supported. As of SCO JDK 1.1.3w, SCO JDK 1.1.7A, and in this release, both models are supported (except on OpenServer), and it is up to you to decide which to use for your application. Green threads is the default. To specify the threads model, set the THREADS_FLAG environment variable to either green or native. For convenience, the java command also has an option -green or -native that can be used; but for other commands, the environment variable must be used. Some examples:

$ java my_app				# green threads will be used

$ THREADS_FLAG=green java my_app	# green threads will be used

$ THREADS_FLAG=native java my_app	# native threads will be used

$ java -native my_app			# native threads will be used

$ THREADS_FLAG=native appletviewer my_applet.html   # only way to set native threads

Advantages of Green Threads

One reason to use green threads is that it is the more mature implementation.

Another reason to use it is that switching the threads model may change the behavior of the Java application. The Java language specification does not give a lot of precise details about how Java threads are scheduled, so there is some room for implementation dependencies in this area (unlike the rest of the Java specification). Java applications that (incorrectly) make assumptions about how threads will be scheduled may work under one threads model but not under the other. Since most applications up to this point have been written under green threads (that was the first model available on most platforms, including SCO), chances are that the native threads model would be more likely to expose incorrect application dependencies.
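
As an illustrative sketch (the class name FlagWait is made up), the following code makes exactly that kind of assumption: it busy-waits on an unsynchronized flag and simply hopes the other thread gets scheduled, so it may appear to work under one threads model and hang under the other:

// FlagWait.java -- illustrative example of an unsafe scheduling assumption.
public class FlagWait {
    static boolean done = false;    // not volatile, no synchronization

    public static void main(String[] args) {
        Thread worker = new Thread() {
            public void run() {
                // pretend to do some work, then set the flag
                done = true;
            }
        };
        worker.start();

        // Busy-wait: assumes the worker will be scheduled "soon enough".
        // Whether and when this loop observes the update depends on the
        // threads model and scheduler, so the program may hang.
        while (!done) {
            // a correct program would use wait/notify, or at least Thread.yield()
        }
        System.out.println("worker finished");
    }
}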

For both of the above reasons, green threads is the default implementation, at least for this release of the SCO JDK.

Finally, on a uniprocessor machine, green threads sometimes has performance advantages over native threads, although the difference tends to be relatively minor.

Advantages of Native Threads

There are two major potential advantages to using native threads, in addition to it intuitively being the "right way" to implement Java threads.

The first advantage is performance on multiprocessor (MP) machines. In green threads all Java threads execute within one operating system lightweight process (LWP), and thus UnixWare has no ability to distribute execution of Java threads among the extra processors in an MP machine. But in the native threads model, each Java thread is mapped to a UnixWare threads library multiplexed thread, and the threads library will indeed map those threads to different LWPs as they are available. Furthermore under native threads the Java virtual machine will expand the number of LWPs available to the threads library, one for each additional processor in the MP configuration.

The performance benefit from using native threads on an MP machine can be dramatic. For example, using an artificial benchmark where Java threads are doing processing independent of each other, there can be a 3x overall speed improvement on a 4-CPU MP machine.

The second major advantage of native threads is when native methods are being used. In order for the green threads implementation to perform non-blocking I/O, a number of system calls are "wrapped" by the JVM to use green threads synchronization primitives and the like. If native methods make system calls in some way that the green threads JVM doesn't expect, these wrappers often cause severe problems. As a consequence, there are a number of restrictions placed upon native methods in green threads mode, as listed in the section Native Methods below.

In comparison, in native threads mode there is no need for I/O system call wrappers, and there are no restrictions upon what native methods may do, as long as they are coded to be thread-safe and are built with -Kthread.

A final advantage of native threads is that it sometimes gives a clearer picture of a program's activities when debugging at the native methods level with the UDK debugger.

Native Methods

Both the JNI-style native methods added as of JDK 1.1 and the old-style, lower-level native methods from JDK 1.0.2 are supported in this release.
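
On the Java side, a JNI-style native method is declared and its library loaded in the usual way regardless of the threads model; the class and library names in this sketch are illustrative, and the actual build commands are shown in the demo directories described later in this section:

// NativeHello.java -- illustrative Java side of a JNI native method.
public class NativeHello {
    // Implemented in C or C++ and built with the UDK into libhello.so (name is illustrative).
    public native void greet();

    static {
        System.loadLibrary("hello");   // loads libhello.so from LD_LIBRARY_PATH
    }

    public static void main(String[] args) {
        new NativeHello().greet();
    }
}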

C and C++ native methods must be compiled and linked with the SCO UnixWare/OpenServer Development Kit (UDK). This means that native methods cannot be built with the existing software development kit on OpenServer or UnixWare 2. Some of the reasons for this requirement include:

All of these items are satisfied by the UDK but not by the existing software development kit on OpenServer or UnixWare 2. The UDK can be used either on OpenServer 5 or UnixWare 2 itself, or native method dynamic libraries can be built with the UDK on UnixWare 7 Gemini and then moved to OpenServer or UnixWare 2.

Another important limitation with native methods is upon the kinds of system operations that a native method can do when "green threads" is being used as the Java threads implementation model (see the Threads: Green or Native section above). Under green threads the following restrictions are in place:

None of these limitations exist with the "native threads" implementation model, so if you are coding native methods and that model is available to you, it is strongly recommended that you use it.

SCO-specific examples of the commands needed to build old- and new-style native methods with C and C++ are included in the demos part of the JDK 1.1.7B distribution (when the jdk117pls package is installed), in the directory /usr/java/demo/, under the subdirectories native_c_demo, jni_c_demo, native_c++_demo, and jni_c++_demo. In addition, the subdirectory jni_invoc_demo gives an example, for C and C++, of the JNI Invocation API. It is highly recommended that you follow the command invocations given in these examples, because native code that is not built correctly will not work as intended.

Debugging Native Methods

Debugging of Java applications is done with the JDK-provided jdb debugger, as described in the relevant Sun documentation.

Debugging of C or C++ native methods, however, must be done with the UDK debugger. This section describes how to go about this.

Note first that after-the-fact core dumps from the JVM (which might be caused by a native methods bug) will usually show a few levels of signal handlers on the stack above the actual point of failure. This is true in both green threads and native threads modes. An example would be:

$ debug -c  core.993 /usr/java/bin/x86at/green_threads/java_g
Core image of java_g (process p1) created
CORE FILE [__lwp_kill]
Signal: sigabrt
        0xbffc8a72 (__lwp_kill+12:)      ret
debug> stack
Stack Trace for p1, Program java_g
*[0] __lwp_kill(0x1, 0x6)       [0xbffc8a72]
 [1] sysAbort(presumed: 0xbfffdbb4, 0xbf753ca0, 0)      [../../../../src/unixware/java/runtime/system_md.c@283]
 [2] signalHandlerPanic(sig=8, info=0x8046f00, uc=0x8046d00)    [../../../../src/unixware/java/green_threads/src/interrupt_md.c@491]
 [3] _sigacthandler(presumed: 0x8, 0x8046f00, 0x8046d00)        [0xbffb6831]
 [4] nfib_fib(s=0xbf708bc8, n=0, presumed: 0)   [fib.C@27]
 [5] JIT_CALLBACK1_MARKER()     [0xbf4c8fa8]
debug>

The actual point of failure is at frame level [4] in this case. Note also that when the JIT is in use, you don't see the rest of the stack. If you turn off the JIT, then you can see it, but it will just be a bunch of internal routines inside the JVM (with names like do_execute_java_method ) that won't tell you much. In other words, there is no debugging tool available that will show you both the Java stack and the native methods stack at the same time.

Of course, to do real native methods debugging you'll want to run the JVM from within the debugger. To do this you'll need to invoke the JVM executable directly. First, you should use the java_g version of the JVM, since that contains debugging information. Second, if you look at /usr/java/bin/java_g, you'll see that it's a link to a script called .java_wrapper, that sets up the LD_LIBRARY_PATH, CLASSPATH, and JAVA_HOME environment variables before calling the actual JVM executable in /usr/java/bin/x86at/green_threads/java_g.

If you invoke /usr/java/bin/java_g through ksh -x you'll see the values those environment variables are set to; you can set those manually at the command line (store in a script that you "dot" if you debug frequently), then invoke the debugger:

$ . setup_java	# your script to set LD_LIBRARY_PATH and CLASSPATH
$ debug -ic	# or can use graphical version
debug> create /usr/java/bin/x86at/green_threads/java_g my_app
debug> run
debug>

Another complication sets in when you want to use symbols (to set breakpoints on, for instance) that are outside of the JVM, such as in native methods. The dynamic libraries that contain native methods are loaded by the JVM via the dlopen call, and until this happens, symbols in the native methods won't be visible to the debugger.

The solution to this is to set a breakpoint inside the JVM at the point where the dynamic library has been loaded, but before code in the libraries is called. For SCO JDK 1.1.7B the appropriate breakpoint is linker_md.c@199. Here is an example demonstrating both the problem and the solution:

$ debug -ic
debug> create /usr/java/bin/x86at/green_threads/java_g my_app
debug> stop my_nativemethod_function
Error: No entry "my_nativemethod_function" exists

debug> stop linker_md.c@199
EVENT [1] assigned
debug> run
STOP EVENT TRIGGERED: linker_md.c@199  in p1 [sysAddDLSegment in ../../../../src/unixware/java/runtime/linker_md.c]
199:        dlsegment[useddlsegments].fname = strdup(fn);
debug> stop my_nativemethod_function
EVENT [2] assigned
debug> run
STOP EVENT TRIGGERED: my_nativemethod_function in p1 [my_nativemethod_function in myfile.C]
68:         bool finished = false;
debug>

You can debug normally from that point on.

If you do a lot of this kind of debugging it can be useful to set up an alias in your ~/.debugrc file:

alias cnm create /usr/java/bin/x86at/green_threads/java_g $1 ; run -u linker_md.c@199

Then just giving the cnm some_class command to the debugger will bring you to the point where you can set breakpoints in your native method code.

This technique of using an alias can allow you to define a whole series of convenience commands to set up a typical native methods debugging session. An example of a full .debugrc alias for JVM green threads debugging might look something like this:

alias cjvm set $CLASSPATH=".:/home/whatever/java:/usr/java/lib/classes.zip" ; export $CLASSPATH ; 
	set $LD_LIBRARY_PATH="/usr/java/lib/x86at/green_threads:/usr/lib:/usr/X/lib" ;  export $LD_LIBRARY_PATH ; 
	set $JAVA_HOME="/usr/java"; export $JAVA_HOME ;
	create -f none /usr/java/bin/x86at/green_threads/java_g $1 $2 $3 $4 $5 $6 $7 $8 ; 
	set %stack_bounds=no ; signal -i cld poll alrm SIGUSR1 ; 
	run -u linker_md.c@199

The setting of the CLASSPATH, LD_LIBRARY_PATH, and JAVA_HOME environment variables follows the discussion above. The create -f none command tells the debugger to ignore child processes caused by forks done within the X Windows libraries. The stack_bounds setting avoids spurious warnings due to jitted code being executed. The signal -i command keeps the debugger from stopping on innocuous signals that the JVM handles.

For debugging when the JVM is using native threads, simply change the green to native in the above paths. You will probably also want to add a

	set %thread_change=ignore ;
statement as well, depending upon what you are trying to debug.

JDBC

Java Database Connectivity is a standard SQL database access interface for Java, providing uniform access for Java applications to a wide range of relational databases.

SCO JDK 1.1.7B contains SCO's implementation of JDBC and includes the SCO JDBC driver. This implementation conforms to Sun's JDBC 1.2 specification. SCO's JDBC implementation is built upon SCO's SQL-Retriever product. For more information on SCO SQL-Retriever, please visit www.vision.sco.com.

There is no need to install the SCO JDBC implementation separately, since it is part of the jdk117 installation. However, the SQL-Retriever product must be installed separately if you want to use JDBC.
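
Application code uses the standard JDBC 1.2 pattern of loading a driver and obtaining a connection through DriverManager. The driver class name, JDBC URL, and table in this sketch are placeholders; consult the SCO JDBC documentation for the values used by the SCO driver:

// JdbcSketch.java -- illustrative JDBC 1.2 usage; driver class, URL, and table are placeholders.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdbcSketch {
    public static void main(String[] args) throws Exception {
        // Load the JDBC driver (placeholder class name -- see the SCO JDBC documentation).
        Class.forName("com.example.jdbc.Driver");

        // Connect using a placeholder JDBC URL, user, and password.
        Connection con = DriverManager.getConnection(
                "jdbc:subprotocol:database", "user", "password");

        Statement stmt = con.createStatement();
        ResultSet rs = stmt.executeQuery("SELECT name FROM customers");
        while (rs.next()) {
            System.out.println(rs.getString(1));
        }

        rs.close();
        stmt.close();
        con.close();
    }
}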

Additional Implementation Notes

In general one of the important characteristics of Java is that it behaves in exactly the same fashion on all platforms. However there are a few areas where it may be useful to know how the JDK has been implemented on SCO platforms. Some of these have already been discussed above; others are described here.

System Properties

If application code needs to determine which of the three SCO platforms it is running on, it can query the Java system properties (via the java.lang.System getProperty and getProperties methods; a short query sketch follows the listings below). Here are some of the values that will be returned on all SCO platforms:

java.home=/usr/java
java.vendor=SCO
java.vendor.url=http://www.sco.com/
java.version=1.1.7B
java.class.version=45.3

while here are values that are specific to OpenServer 5.0.5:

os.arch=IA32
os.name=OpenServer
os.version=5.0.5

UnixWare 2.1.3:

os.arch=IA32
os.name=UnixWare
os.version=2.1.3

and UnixWare 7.1.0:

os.arch=IA32
os.name=UnixWare
os.version=7.1.0
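
Here is a minimal sketch (the class name PlatformCheck is illustrative) of querying these properties from application code:

// PlatformCheck.java -- illustrative query of the os.name and os.version properties.
public class PlatformCheck {
    public static void main(String[] args) {
        String osName = System.getProperty("os.name");
        String osVersion = System.getProperty("os.version");
        String vendor = System.getProperty("java.vendor");

        System.out.println("Running on " + osName + " " + osVersion
                + " (Java vendor: " + vendor + ")");

        if ("UnixWare".equals(osName) && osVersion.startsWith("7")) {
            System.out.println("UnixWare 7 specific behavior could go here");
        }
    }
}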

Abstract Windowing Toolkit

This implementation uses the X Window System, version X11R6.1, to implement the Java Abstract Windowing Toolkit.

java -debug

This implementation changes any use of the java -debug command into java_g -debug. This is a historical consequence of supporting the Java WorkShop product.

Performance

This implementation uses an assembly-coded main interpreter loop for faster bytecode execution (however, the debug version java_g uses a C language interpreter), and a just-in-time compiler to further improve performance.

Fonts

When a Java program requests fonts that do not exactly match those available on a given SCO platform, the fonts displayed may look poor, like a smaller font scaled up. Why does this happen?

As an example, if a 28-point sansserif font is requested, the JDK 1.1.7B /usr/java/lib/font.properties entry for sansserif.plain.0 will specify a linotype font. On OpenServer 5 there is no linotype font, and even if adobe is substituted for linotype, no 28-point adobe font is available either.

The font.properties file will specify a couple of alternative fonts to look for:

sansserif.plain.0=-linotype-helvetica-medium-r-normal-sans-*-%d-*-*-p-*-iso8859-1
sansserif.1=-urw-itc zapfdingbats-medium-r-normal--*-%d-*-*-p-*-sun-fontspecific
sansserif.2=--symbol-medium-r-normal--*-%d-*-*-p-*-sun-fontspecific

Java will try sansserif.1 and sansserif.2 if sansserif.plain.0 cannot be found.

If none of the above can be found, JDK 1.1.7 will look for the closest possible match, in the following order:

  1. specify FAMILY_NAME, WEIGHT_NAME, SLANT, POINT_SIZE, CHARSET_REGISTRY, and CHARSET_ENCODING
  2. change POINT_SIZE to PIXEL_SIZE
  3. change FAMILY_NAME to *
  4. specify only PIXEL_SIZE and CHARSET_REGISTRY/ENCODING
  5. change PIXEL_SIZE by +1/-1/+2/-2 ... +4/-4
  6. fall back to the default font pattern, "-*-helvetica-*-*-*-*-*-*-12-*-*-*-iso8859-1"

Some SCO users have found that the font.properties file that Sun (and SCO) shipped in the JDK 1.1.3 series seems to work better than the JDK 1.1.7 series version. You can swap in the older file and see if it works better for you. You can also change your application to request a supported point size, for example 24 rather than 28; the sketch after this paragraph shows the idea. Also, the SCO X libraries do support some scalable fonts, so it is possible to add a font to font.properties that will display the point size you want.
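
Here is a minimal AWT sketch of requesting a supported size (the class name FontSketch is illustrative, and it assumes a 24-point SansSerif font is available on your system):

// FontSketch.java -- illustrative AWT font request at a supported point size.
import java.awt.Font;
import java.awt.Frame;
import java.awt.Label;

public class FontSketch {
    public static void main(String[] args) {
        Frame frame = new Frame("Font test");
        Label label = new Label("Hello in SansSerif 24");
        // Request 24 point rather than 28, which is more likely to map to a real X bitmap font.
        label.setFont(new Font("SansSerif", Font.PLAIN, 24));
        frame.add(label);
        frame.pack();
        frame.setVisible(true);
    }
}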

Note that SCO has made one modification to Sun's font.properties file in this JDK 1.1.7B release: to change all occurrences of "lucida sans" to "lucida". This fixes a pervasive problem with default text fonts having an ugly appearance.

You may find useful information on these and other font issues from Sun at http://java.sun.com/products/jdk/1.1/docs/guide/intl/fontprop.html.

Conformance

This release of SCO JDK 1.1.7B has passed Sun's Java Compatibility Kit (JCK) 1.1.6a test suite, which is the most recent version of JCK that is applicable to the Sun JDK 1.1.7B baseline.

SCO is committed to maintaining Java application compatibility across all platforms. SCO does not superset or subset the Java APIs as defined by Sun.

Known Problems

This section contains known problems or limitations with SCO's port of JDK 1.1.7B to SCO platforms. For known problems with Sun's JDK 1.1.x releases themselves, see the list at Sun's website.

  1. On some SCO platforms, the X11R6 implementation is currently built to only use TCP/IP as a connection mechanism. This means that even when working locally, you may need to issue an xhost +your_machine_name command.

  2. Large file support (for files > 2GB in size) is not yet present in the java.io package, nor anywhere else in the JDK.

  3. The jdb debugger does not always successfully bring up a debugging session in native threads mode. As an alternative, use jdb in green threads mode. If the program being tested needs to be run in native threads mode, run it as a separate process and then grab it with a green threads jdb.

  4. The jdb debugger terminates with a "The communication channel closed" diagnostic when breakpoints are set and the run command is used.

  5. Sometimes when graphical applications are displayed on a remote X platform by means of the DISPLAY environment variable, .gif or other image elements may be missing.

  6. In certain AWT applications, rapid usage of clipboard operations or modal dialog boxes can cause a hang. This can be prevented by defining the AWT_SINGLE_MODALWAIT environment variable before running the JVM, which forces modal dialog boxes to operate one at a time. However doing this may cause other valid AWT applications that expect to use multiple modal dialog boxes at once, to hang. Therefore this environment variable should not be defined unless you are sure that it will solve a problem you are having.

  7. In a Japanese locale, AWT Motif window titles show broken Japanese text. Also, the font height of labeled objects is not large enough to show Japanese characters.

  8. Some multiple-level executable/library structures that use JNI Invocation will not work correctly. In particular, an a.out that does a dlopen of a libuser.so that in turn invokes the Java virtual machine, will not work. An a.out that is linked with -luser but not -ljava that calls a libuser.so that in turn invokes the Java virtual machine, will also not work. Such an a.out must always be linked against -ljava itself. (See /usr/java/demo/jni_invoc_demo for the full set of system libraries to link against.)

  9. When the JIT is running, stack overflow will result in a segmentation violation (SIGSEGV) and a core dump, rather than a StackOverflowError exception being thrown. You can detect this in the JIT case by examining %esp and seeing that it is out of range of the stack. The stack bounds are displayed by the debugger map command.

See also the restrictions and limitations on native methods.



Copyright © 1999 The Santa Cruz Operation, Inc. All Rights Reserved.