RELEASE NOTES - Java Development Kit(TM) 1.1.8_005 for SCO(R) Operating Systems


Java Development Kit(TM) 1.1.8_005 for SCO® Operating Systems


Supported Platforms
Changes in This Release
The JRE Package
Using JDK 1.1.8 for SCO
Extensions to Sun JDK 1.1.8
The JIT Compiler
Threads: Green or Native
Native Methods
Debugging Native Methods and the JVM
Additional Implementation Notes
Known Problems


Please read the license.txt file for the license terms of this SCO® product.


These are the release notes for the Java Development Kit(TM) (JDK(TM)), Release 1.1.8_005 for SCO Operating Systems, hereafter referred to as "JDK 1.1.8 for SCO".

This product is a full implementation of the Sun Microsystems(TM) Java Development Kit 1.1.8_005. (The _005 suffix indicates the patch level of the Sun(TM) JDK that JDK 1.1.8 for SCO corresponds to; this particular patch contains several bug fixes from Sun.)

JDK 1.1.8 for SCO enables SCO OEMs, ISVs, and end users to develop and run applets and applications that conform to the JavaTM 1.1.x Core API.

JDK 1.1.8 for SCO has the same functionality as the Sun baseline JDK 1.1.8_005 release.

In addition, JDK 1.1.8 for SCO contains further fixes and internal enhancements derived from the Sun Solaris(TM)-based JDK 1.1.3A and 1.1.3D releases.

Supported Platforms

The JDK 1.1.8 for SCO product will run on the following versions of SCO operating system platforms, with the indicated provisos:

JDK 1.1.8 for SCO cannot be used with older versions of the OSRcompat package, such as those that were released with UnixWare 7 Release 7.0.0 or Release 7.1.0.

JDK 1.1.8 for SCO is not supported on older versions of these SCO operating systems, such as SCO OpenServer Release 5.0.4 or UnixWare 7 Release 7.1.0, nor is it available for the SCO UnixWare® 2.1.3 operating system.

For the most part the JDK is identical for both platforms, and everything in these release notes applies to both platforms unless otherwise noted.

Changes in This Release

JDK 1.1.8 for SCO is a full update release to the earlier JDK 1.1.7B for SCO release. Compared to JDK 1.1.7B for SCO, this release contains:


JDK 1.1.8 for SCO is distributed in the following packages:

Package jdk118 includes the essential execution engine of Java, that is, what you need to run Java applications:

Package jdk118 also includes some Java development tools:

Finally, package jdk118 also includes additional components to support distributed applications and database access:

Package jdk118pls is an optional supplement to jdk118 that includes several kinds of additional materials useful in Java development work:

The JRE Package

Not included with the JDK 1.1.8 for SCO distribution material, but available separately from SCO if required, is the JDK 1.1.8 for SCO JRE package jre118. Its size is about 7.7 MB.

This package matches the Sun product configuration of the Java Runtime Environment (JRE). The JRE is the minimum standard Java platform for running Java programs: it contains the Java virtual machine, the Java core classes, and supporting files. The JRE does not contain any of the development tools (such as appletviewer or javac) or classes that pertain only to a development environment, and it packages the Java core classes in a different configuration, with different file names. In addition, it uses the jre command rather than java to execute the Java virtual machine. See the Sun documentation for full details, or browse jre.html in the JRE installation directory.

The purpose of the JRE package is so that independent software vendors (ISVs) and others can bundle it with their Java application, if desired. That way, the application and the Java version it has been tested on can be installed together on customer machines, rather than relying on whatever Java version is currently installed on those machines.

The jre118 package is by default installed in /usr/jre/, with that actually being a symbolic link to /opt/jre-1.1.8/. Unlike the jdk118* packages, however, it is possible for an ISV or system administrator to change where the JRE is installed.

ISVs wanting to do this would change the value of BASEDIR in the pkginfo file and then repackage the jre118 package; the JRE would then install into BASEDIR/opt/jre-1.1.8/. System administrators would instead change the value of basedir in the /var/sadm/install/admin/default file, and the JRE would then install into basedir/opt/jre-1.1.8/. Alternatively, they could copy /var/sadm/install/admin/default somewhere else, modify the value of basedir there, and tell pkgadd to use that alternate file with the -a option. Finally, if basedir=ask is set in that file, the pkgadd command will prompt the installer for the desired package base directory.
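As a sketch, the administrator's alternate-admin-file approach might look like the following. The /tmp paths and the stand-in admin file contents here are illustrative only; on a real system you would start from /var/sadm/install/admin/default.

```shell
# Sketch: relocate the jre118 package with an alternate pkgadd admin file.
# On a real system the steps would be:
#   cp /var/sadm/install/admin/default /tmp/jre.admin
#   (edit the basedir setting in /tmp/jre.admin)
#   pkgadd -a /tmp/jre.admin -d /full_dir_path/jre118.ds
# Demonstrate the basedir edit on a stand-in copy of the admin file:
printf 'mail=\nbasedir=default\n' > /tmp/jre.admin
sed 's|^basedir=.*|basedir=/opt2|' /tmp/jre.admin > /tmp/jre.admin.new
grep '^basedir=' /tmp/jre.admin.new
```

With basedir set to /opt2, the JRE would then install into /opt2/opt/jre-1.1.8/.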

Note again that the JRE package is intended for use by ISVs. If you are an end user, you should simply use the regular jdk118 package (and the jdk118pls package if desired) instead; despite its "development kit" name, it includes everything necessary for a Java runtime environment as well as for a Java development environment. (The Sun nomenclature is confusing in this area!) Note also that the regular jdk118 package contains a version of the jre command, so scripts written to work with the JRE will also work when the JDK is used.


JDK 1.1.8 for SCO is not available on any SCO CD-ROM media; it is only available by download from the SCO web site. Instructions on how to download and uncompress the files involved are given in a Getting Started page found at that web site.

There are a few differences in how the JDK is installed on each platform:

SCO OpenServer Release 5

If the UDK Compatibility Module for SCO OpenServer (package name OSRcompat), version 7.1.1, is not already installed on your system, you need to get it either from the 7.1.1 UDK CD-ROM (found in the SCO OpenServer Release 5.0.6 media kit) or from the SCO web download site. Then install that package using the pkgadd command.

When that installation is complete, install the core JDK (package name jdk118):

# pkgadd -d /full_dir_path/jdk118.ds

Then you can install the JDK additional materials, if desired (package name jdk118pls):

# pkgadd -d /full_dir_path/jdk118pls.ds

UnixWare 7 Release 7.1.1

Install the core JDK (package name jdk118):

# pkgadd -d /full_dir_path/jdk118.ds


Then you can install the JDK additional materials, if desired (package name jdk118pls):

# pkgadd -d /full_dir_path/jdk118pls.ds

Installation Location and Multiple JDK Versions

While the JDK is accessed through the /usr/java pathname, installation actually places its contents into /opt/jdk-1.1.8/. Then a symbolic link is made from /usr/java to /opt/jdk-1.1.8/.

You can have multiple "point" versions of JDK 1.1.x for SCO, such as JDK 1.1.7B and JDK 1.1.8, installed on a system at the same time. Installation of JDK 1.1.8 will not automatically remove your previous JDK point versions from the system. The only thing that is affected by the JDK 1.1.8 installation is the /usr/java link. By default, /usr/java will always point to the latest JDK.

After JDK 1.1.8 is installed, any existing link of /usr/java to a previous version (for example, JDK 1.1.7B) is removed, and /usr/java points to JDK 1.1.8. If you later remove JDK 1.1.8 and another JDK version remains on the system (for example, JDK 1.1.7B), the /usr/java link is restored to that version.

If you do want to access and use an alternate JDK on your system, simply invoke it by its /opt pathname. For example, /opt/jdk-1.1.7/bin/java will start up the JDK 1.1.7B virtual machine.


Documentation for JDK 1.1.8 is contained in the jdk118pls package. All of the documentation is in HTML format and may be viewed with any browser you have installed on your system.

Document                                          File/Link Name
these release notes                               ReleaseNotes.html
Sun documentation for JDK 1.1.8                   docs/index.html
Sun and SCO demos for JDK 1.1.8                   demo/
documentation on the SCO JDBC implementation
  and the SCO SQL-Retriever product               see JDBC section

Note that the documentation included in the jdk118pls package is not integrated into the SCOhelp graphical help system.

Also note that much of this documentation is from Sun, but should be read in an SCO context. For instance, for "Solaris" read the SCO platforms you are using. For customer support, any of the normal SCO support mechanisms should be used, rather than contacting Sun.

Using JDK 1.1.8 for SCO

In general, use of JDK 1.1.8 for SCO follows that which is described in the Sun documentation.

After the JDK packages are installed, you probably want to set PATH in your .profile to include the directory where the JDK commands are installed, /usr/java/bin. On UnixWare 7 systems, this will usually have been done for you already when your account was created.
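For example, adding lines like the following to $HOME/.profile (a sketch; adjust the path if you are deliberately using an alternate JDK installed under /opt) puts the JDK commands on the search path:

```shell
# Put the JDK commands at the front of the search path:
PATH=/usr/java/bin:$PATH
export PATH
```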

Extensions to Sun JDK 1.1.8

SCO has provided only one functional extension to the Sun JDK 1.1.8, and it is useful only on the UnixWare 7 platform.

Java Classes as First-Class Executables

When javac is used to compile one or more classes, it will set the execute permissions bit on for the .class file if the class contains a main method. (This happens on all JDK for SCO platforms.)

Then, on UnixWare 7 only, you can execute a Java application simply by giving the name of the main class:

$ foo.class
UnixWare 7 will look for foo.class by use of the PATH environment variable, just as it would for any other executable. foo.class must also be in the CLASSPATH, as in normal execution.

Furthermore, by making a hard link or symbolic link such as

$ ln -s foo.class foo
you will be able to execute the application simply by saying
$ foo
For instance, this gives you the ability to let users invoke utilities without knowing that the utilities are written in Java. For this to work you must keep the name prefix and the class file intact: keep foo.class somewhere, and then make a hard or soft link of foo to it. foo can be in another directory, but you cannot change the name; that is, you cannot link bar to it, because once the system invokes the JVM, the JVM expects to find a foo.class file there. For the same reason you also cannot simply rename foo.class to foo, because the JVM will still need a foo.class. (You could copy foo.class to foo, but that of course wastes disk space compared to a link.)

Of course, you can always use the traditional way of executing a Java application:

$ java foo
In this case, java must be in the PATH, and foo.class must be in the CLASSPATH.

The JIT Compiler

Historical note: There was no JIT compiler in the JDK 1.1 for SCO or 1.1.3 for SCO products. A JIT compiler was available in JDK 1.1.3u for SCO for UnixWare 7 and SCO UnixWare 2, but it was a separate package that required a Java WorkShop or Java Studio license. Due to complications with this license, no JIT compiler was available in JDK 1.1.3w for SCO or JDK 1.1.7A for SCO.

As of JDK 1.1.7B for SCO, and continuing in this JDK 1.1.8 for SCO product, the JIT compiler is part of the base JDK for SCO product and package jdk118. It is also available for SCO OpenServer Release 5.0.5 and higher for the first time. It does not have to be separately licensed or installed, and it is no longer tied to the Java WorkShop and Java Studio products.

A just-in-time (JIT) compiler improves Java performance by, as the program is executing (hence the name), compiling Java method bytecode to native x86 machine code. On subsequent executions of the method the machine code is executed instead of the bytecode being re-interpreted. Thus, a JIT compiler is a run-time component, not a development component (as conventional language compilers are).

A JIT compiler is part of the base JDK 1.1.8 for SCO product and package jdk118. It is always there and is always executing unless it is explicitly turned off by the user.

By default, this JIT compiler compiles or inlines every method in the program, the first time the method is invoked (except for a few primordial methods which cannot be compiled until the JIT compiler itself is loaded). Dynamically loaded classes are compiled after they are loaded. Class initializers are not compiled. If for some reason a method cannot be compiled, it is interpreted. Jitted code is not saved across sessions of the JVM.

How much the JIT improves performance depends greatly upon the nature of the application being run. Applications that are compute-intensive benefit the most, while those that are dominated by object creation/garbage collection, graphics, or networking tend to benefit less. The only way to know the benefit for your application is to measure it.

Controlling the JIT

The JIT compiler runs by default. If you want to suppress running of the JIT (either to do performance analysis or because you suspect it may be causing a problem), you can turn it off in two ways: by setting the JAVA_COMPILER environment variable to the empty value, or by setting the java.compiler property to the empty value. Examples:

$ java hello			# JIT will run

$ JAVA_COMPILER= java hello	# JIT will not run

$ java -Djava.compiler= hello	# JIT will not run

The environment variable JIT_ARGS provides further control over execution of the JIT. You can define JIT_ARGS as a string containing one or more of these space-separated options:

trace                   show which methods are compiled or inlined
compile(method-list)    compile only these methods
exclude(method-list)    do not compile these methods
dump(method-list)       dump the generated assembly code
bco=off                 suppress the bytecode optimizer part of the JIT

The method-list is a comma-separated list of method names, possibly qualified by class names. The class name part is separated by slashes. If only a class name is given, all methods of that class are selected. If no method-list is given, the option applies to all methods. Examples:

$ JIT_ARGS=trace java hello	# show what the JIT does

$ JIT_ARGS="dump(hello)" java hello	# show how the JIT does it (lots of output!)

$ JIT_ARGS="trace compile(myclass.)" java myclass	# only jit myclass's methods

$ JIT_ARGS="trace exclude(java/lang/System.exit)" java myclass	# jit everything except java.lang.System.exit()
In particular, JIT_ARGS=trace is useful in verifying that the JIT is installed correctly and is actually executing.

The bytecode optimizer is a particular part of the JIT; suppressing it may be useful for pinpointing the source of performance problems or other JIT problems.

JIT Heuristics

All JIT compilers come with a trade-off: the cost of the time it takes to stop execution of a method and do the compilation to machine code (once per method per program) versus the benefit of the time saved by subsequent invocations of that method running as machine code rather than being interpreted. For example, for a short method that is only called once, this trade-off is obviously a loss, while for a long method called many times, this trade-off is clearly a win.

As stated in The JIT Compiler above, by default the JIT compiles every method the first time it sees it. The environment variable JIT_MIN_TIMES can be set to a non-negative integer n to indicate a different approach: a method will not be jitted until the method has already executed at least n times. In other words, this is a heuristic that posits that those methods that have already been executed a lot, will be the ones that will tend to execute a lot for the rest of program execution. An example of its use would be:

$ JIT_MIN_TIMES=40 java my_app   # methods will be jitted after 40th time called

For some short- or medium-lived applications, use of JIT_MIN_TIMES may improve the performance of the JIT. One example is the javac Java language translator, which has now been set up to use the JIT heuristics mechanism.

Even for long-lived applications, use of JIT heuristics can improve the start-up time of the application. For instance, using JIT_MIN_TIMES=100 has been shown to improve the start-up time of some Swing-based applications by 30% or so, with no subsequent performance degradation compared to normal JIT operation.

The only way to find out for sure whether JIT heuristics will benefit your application is to experiment with it, using different minimum count values.
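Such an experiment might simply loop over several threshold values. The loop below is a sketch only: my_app is a placeholder, and the actual timing command is left as a comment since the results are entirely application-specific.

```shell
# Try several JIT_MIN_TIMES thresholds against the same application:
for n in 0 20 40 100; do
    echo "testing JIT_MIN_TIMES=$n"
    # time JIT_MIN_TIMES=$n java my_app   # uncomment to measure on a real system
done
```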

Threads: Green or Native

Threads are an essential part of the Java language and API set, and every Java implementation must decide how to implement Java threads. JDK 1.1.8 for SCO, like many other implementations, supports two alternate internal threads models: "green threads" and "native threads". Note that Java application code does not change at all from one model to the other; the threads model is an internal, "under the covers" difference, although one that can have an important impact on the behavior and performance of a Java application.

"Green threads" refers to a model where the Java virtual machine itself creates, manages, and context switches all Java threads within one operating system process. No operating system threads library is used.

"Native threads" refers to a model where the Java virtual machine creates and manages Java threads using the operating system threads library - named libthread on UnixWare 7 - and each Java thread is mapped to one threads library thread.

In JDK for SCO releases prior to JDK 1.1.3w only the green threads model was supported. As of JDK 1.1.3w for SCO and in all subsequent releases including this one, both models are supported (except on SCO OpenServer), and it is up to you to decide which to use for your application. Green threads is the default. To specify the threads model, set the THREADS_FLAG environment variable to either green or native. For convenience, the java command also has an option -green or -native that can be used; but for other commands, the environment variable must be used. Some examples:

$ java my_app				# green threads will be used

$ THREADS_FLAG=green java my_app	# green threads will be used

$ THREADS_FLAG=native java my_app	# native threads will be used

$ java -native my_app			# native threads will be used

$ THREADS_FLAG=native appletviewer my_applet.html   # only way to set native threads

Advantages of Green Threads

One reason to use green threads is that it is the more mature implementation.

Another reason to use it is that switching the threads model may change the behavior of the Java application. The Java language specification does not give a lot of precise details about how Java threads are scheduled, so there is some room for implementation dependencies in this area (unlike the rest of the Java specification). Java applications that (incorrectly) make assumptions about how threads will be scheduled may work under one threads model but not under the other. Since most applications up to this point have been written under green threads (that was the first model available on most platforms, including SCO), chances are that the native threads model would be more likely to expose incorrect application dependencies.

For both of the above reasons, green threads is the default implementation, at least for this release of the SCO JDK.

Finally, on a uniprocessor machine, green threads sometimes has performance advantages over native threads, although the difference tends to be relatively minor.

Advantages of Native Threads

There are two major potential advantages to using native threads, in addition to it intuitively being the "right way" to implement Java threads.

The first advantage is performance on multiprocessor (MP) machines. In green threads all Java threads execute within one operating system lightweight process (LWP), and thus UnixWare 7 has no ability to distribute execution of Java threads among the extra processors in an MP machine. But in the native threads model, each Java thread is mapped to a UnixWare 7 threads library multiplexed thread, and the threads library will indeed map those threads to different LWPs as they are available.

The performance benefit from using native threads on an MP machine can be dramatic. For example, using an artificial benchmark where Java threads are doing processing independent of each other, there can be a 3x overall speed improvement on a 4-CPU MP machine.

The second major advantage of native threads is when native methods are being used. In order for the green threads implementation to perform non-blocking I/O, a number of system calls are "wrapped" by the JVM to use green threads synchronization primitives and the like. If native methods make system calls in some way that the green threads JVM doesn't expect, these wrappers often cause severe problems. As a consequence, there are a number of restrictions placed upon native methods in green threads mode, as listed in the section Native Methods below.

In comparison, in native threads mode there is no need for I/O system call wrappers, and there are no restrictions upon what native methods may do, as long as they are coded to be thread-safe and are built with -Kthread.

A final advantage of native threads is that it usually gives a clearer picture of a program's activities when debugging at the native methods level with the UDK debugger.

Controlling Concurrency Level in Native Threads

When using multiplexed threads in the UnixWare 7 threads library, as the Java virtual machine does in native threads mode, the key to how much real concurrency is achieved is how many LWPs are active within the process. The JVM uses the threads library's thr_setconcurrency() interface to guide this, and there are three different schemes for controlling the number of LWPs.

By default, the number of LWPs created by the JVM is equal to the number of active processors in the machine. This is the minimum allocation that will take advantage of real parallelism in multiprocessor machines. This is also the behavior of the preceding JDK 1.1.7B for SCO release.

If the THR_INCR_CONC environment variable is set to nothing, or to the empty string:

$ THR_INCR_CONC= java my_app   
$ THR_INCR_CONC="" java my_app
then the number of LWPs is set dynamically as the Java application executes. The base level is one LWP for each processor in the machine, just as in the default scheme. Additional LWPs are then requested as threads block in certain system calls (sigwait, recv, accept, waitid, read, and poll), all of which tend to block their threads for a long time within the JVM; the request level is decremented once these calls return. This dynamic behavior is a reasonable heuristic that on the one hand tries to maximize useful concurrency and reduce scheduling stress on the threads library, and on the other hand tries to conserve a scarce system resource. The disadvantage of this scheme is that increasing the concurrency level can expose bugs, in either the Java application or the JVM itself, that might not otherwise manifest themselves.

Finally, you can specify your own concurrency level, by setting the THR_INCR_CONC environment variable to a positive integer. An example would be:

$ THR_INCR_CONC=20 java my_app   
In this case the specified number of LWPs will be requested at the beginning of execution, and the requested level will be fixed for the rest of execution; in particular, the level will not vary based on threads blocking in system calls.

Native Methods

Both the JNI-style native methods added as of Sun JDK 1.1 and the old-style, lower-level native methods from Sun JDK 1.0.2 are supported in this release.

C and C++ native methods should be compiled and linked with the UnixWare and OpenServer Development Kit (UDK) from SCO. This UDK should be at version 7.1.0 or later.

This means that native methods cannot be built with the SCO OpenServer Release 5 Development System. Some of the reasons for this requirement include:

All of these items are satisfied by the UDK. The UDK can be used on SCO OpenServer Release 5 itself, or native method dynamic libraries can be built with the UDK on UnixWare 7 Release 7.1.1 and then moved to SCO OpenServer Release 5.

Another important limitation with native methods is upon the kinds of system operations that a native method can do when "green threads" is being used as the Java threads implementation model (see the Threads: Green or Native section above). Under green threads the following restrictions are in place:

None of these limitations exist with the "native threads" implementation model, so if you are coding native methods and that model is available to you, it is strongly recommended that you use it.

SCO UDK-specific examples of the commands needed to build old- and new-style native methods with C and C++ are included in the demos part of the JDK 1.1.8 distribution (when the jdk118pls package is installed), in the directory /usr/java/demo/, under the subdirectories

The last directory gives an example for C and C++ of the JNI Invocation API. It is highly recommended that you follow the command invocations given in these examples; unless the native code is built correctly, it will not work as intended.
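As a rough illustration of the shape of such a build (the file names and -I include paths below are assumptions for illustration only; the demo makefiles give the authoritative UDK command lines), a JNI C native method library is typically produced with the UDK -G -KPIC options:

```shell
# Sketch only: typical shape of a UDK JNI build (names and paths illustrative).
#   javah -jni MyClass                        # generate the JNI header
#   cc -G -KPIC -I/usr/java/include -I/usr/java/include/unixware \
#      -o libmynative.so MyNative.c           # build the shared native library
#   LD_LIBRARY_PATH=`pwd`:$LD_LIBRARY_PATH java MyClass
CC_CMD='cc -G -KPIC -o libmynative.so MyNative.c'
echo "$CC_CMD"
```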

Native methods using GCC

Another compiler system that can be used for building native methods is the GNU GCC compiler, described at

If native method or invocation code is written in C, the GNU gcc compiler that generates code for the UDK environment may be used. On UnixWare 7, this is the gcc that SCO builds and makes available for download for UnixWare 7. On SCO OpenServer Release 5, the regular SCO-built gcc for OpenServer cannot be used; instead, you must build the OpenServer-hosted, UDK-targeted gcc. When building native methods on either platform, use the GNU options -shared -fPIC instead of the UDK options -G -KPIC.

If native method code is written in C++, the GNU g++ compiler that generates code for the UDK environment may be used, with the same guidelines as for C. (Note that some SCO-built g++ releases have problems building C++ code with the -shared option; this should be corrected in later releases.)

If JNI invocation code is written in C++, the GNU g++ compiler cannot be used. This is because part of the JDK itself is built with the UDK C++ compiler, and in the JNI invocation context, the two C++ runtime systems cannot coexist together. (This problem does not occur in the native method context because there the C++ runtime usages are isolated to dlopen'd native code libraries that do not have any visibility to each other. This is not the case in JNI invocation where the JVM is visibly linked against the C or C++ main program.)

Debugging Native Methods and the JVM

Debugging of Java applications is done with the JDK-provided jdb debugger, as described in the relevant Sun documentation.

Debugging of C or C++ native methods, however, must be done with the UDK debugger; this section describes how to go about this. This discussion is also applicable to isolating or troubleshooting potential problems within the JVM itself, since the lower layers of the JVM are in effect C native methods.

Core dumps should never occur within Java. If they do, then either there is an application bug in a native method, or there is an internal bug within the JVM, or there is an internal bug within an SCO operating system library. Java core dumps tend to be large; you may need to set ulimit -c unlimited to avoid having the core file be truncated (typically to 16 MB) and thus be unreadable by the UDK debugger.

Core dumps from the JVM will usually have a few levels of signal handlers on the stack subsequent to the actual point of failure. This is true in both green threads and native threads modes. An example would be:

$ debug -c  core.993 /usr/java/bin/x86at/green_threads/java_g
Core image of java_g (process p1) created
CORE FILE [__lwp_kill]
Signal: sigabrt
        0xbffc8a72 (__lwp_kill+12:)      ret
debug> stack
Stack Trace for p1, Program java_g
*[0] __lwp_kill(0x1, 0x6)       [0xbffc8a72]
 [1] sysAbort(presumed: 0xbfffdbb4, 0xbf753ca0, 0)      [../../../../src/unixware/java/runtime/system_md.c@283]
 [2] signalHandlerPanic(sig=8, info=0x8046f00, uc=0x8046d00)    [../../../../src/unixware/java/green_threads/src/interrupt_md.c@491]
 [3] _sigacthandler(presumed: 0x8, 0x8046f00, 0x8046d00)        [0xbffb6831]
 [4] nfib_fib(s=0xbf708bc8, n=0, presumed: 0)   [fib.C@27]
 [5] JIT_CALLBACK1_MARKER()     [0xbf4c8fa8]

The actual point of failure is at frame level [4] in this case. Note also that when the JIT is in use, you don't see the rest of the stack. If you turn off the JIT, then you can see it, but it will just be a bunch of internal routines inside the JVM (with names like do_execute_java_method) that won't tell you much. In other words, there is no debugging tool available that will show you both the Java stack and the native methods stack at the same time.

However, when using native threads, you can make use of the JVM's Java thread dumps for this purpose. These are what you see when a Java process aborts; you can also get one by typing ^\ (control-backslash) at the command line as a Java application is running (especially useful if a JVM process is hung, frozen, or looping), or equivalently by sending a SIGQUIT signal to the process, such as with the kill -3 command. (You can also get some of the same information by attaching to the process with, or running under, jdb.)
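For example, assuming the JVM's process ID is 12345 (an illustrative value), a thread dump can be requested from another terminal:

```shell
# Request a Java thread dump from a running JVM (pid 12345 is illustrative):
#   kill -3 12345        # SIGQUIT; equivalent to ^\ at the controlling terminal
# Confirm that signal 3 is indeed SIGQUIT:
kill -l 3
```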

You'll see entries in the Java threads dump like

    "Thread-464" (TID:0xbf0a1cc8, sys_thread_t:0x8472010, state:R, thread_t: t@466, sp:0xbaee3498 threadID:0x0, stack_base:0xbaee3c64, stack_size:0x20000) prio=5
In native threads, the "thread_t: t@466" contains the thread number (466 in this case), and is your link between this dump and what the UDK debugger produces. When you use the UDK debugger ps command, the number in the "Thread" column is the same as this thread number. (This is different from the "ID" column, which has the p1.nn identification used in debugger commands, but you can use the ps output to correlate the two.)

In some cases, an abort() call can completely obscure the actual place of failure, especially if the JVM runs into further trouble while trying to print the Java thread dumps. In this case, when using native threads, you can define the environment variable JVM_NOT_HANDLE_SIGABRT; this causes an immediate core dump and exit from the point of the abort, without the JVM getting involved. Bringing up the debugger on the core file and doing a stack trace on the current thread will then show you the exact location of the failure.

Java thread dumps can also be produced when using green threads, but there is no good way to associate the Java thread stacks with native method thread stacks, since the latter are not visible to the debugger. The JVM_NOT_HANDLE_SIGABRT environment variable has no effect in green threads mode.

Of course, to do real native methods debugging you'll want to run the JVM from within the debugger. For this you'll need to invoke the JVM executable directly. First, you should use the java_g version of the JVM, since that contains debugging information. Second, if you look at /usr/java/bin/java_g, you'll see that it's a link to a script called .java_wrapper, that sets up the LD_LIBRARY_PATH, CLASSPATH, and JAVA_HOME environment variables before calling the actual JVM executable in /usr/java/bin/x86at/green_threads/java_g.

If you invoke /usr/java/bin/java_g through ksh -x, you'll see the values those environment variables are set to. You can set them manually at the command line (or store them in a script that you "dot" if you debug frequently), then invoke the debugger:

$ . setup_java	# your script to set LD_LIBRARY_PATH and CLASSPATH
$ debug -ic	# or can use graphical version
debug> create /usr/java/bin/x86at/green_threads/java_g my_app
debug> run

Another complication sets in when you want to use symbols (to set breakpoints on, for instance) that are outside of the JVM, such as in native methods. The dynamic libraries that contain native methods are loaded by the JVM via the dlopen call, and until this happens, symbols in the native methods won't be visible to the debugger.

The solution to this is to set a breakpoint inside the JVM at the point where the dynamic library has been loaded, but before code in the libraries is called. For JDK 1.1.8 for SCO the appropriate breakpoint is linker_md.c@199. Here is an example demonstrating both the problem and the solution:

$ debug -ic
debug> create /usr/java/bin/x86at/green_threads/java_g my_app
debug> stop my_nativemethod_function
Error: No entry "my_nativemethod_function" exists

debug> stop linker_md.c@199
EVENT [1] assigned
debug> run
STOP EVENT TRIGGERED: linker_md.c@199  in p1 [sysAddDLSegment in ../../../../src/unixware/java/runtime/linker_md.c]
199:        dlsegment[useddlsegments].fname = strdup(fn);
debug> stop my_nativemethod_function
EVENT [2] assigned
debug> run
STOP EVENT TRIGGERED: my_nativemethod_function in p1 [my_nativemethod_function in myfile.C]
68:         bool finished = false;
You can debug normally from that point on.

If you do a lot of this kind of debugging it can be useful to set up an alias in your ~/.debugrc file:

alias cnm create /usr/java/bin/x86at/green_threads/java_g $1 ; run -u linker_md.c@199
Then just giving the cnm some_class command to the debugger will bring you to the point where you can set breakpoints in your native method code.

This technique of using an alias allows you to define a whole series of convenience commands to set up a typical native method debugging session. A full .debugrc alias for JVM green threads debugging might look something like:

alias cjvm set $CLASSPATH=".:/home/whatever/java:/usr/java/lib/" ; export $CLASSPATH ; 
	set $LD_LIBRARY_PATH="/usr/java/lib/x86at/green_threads:/usr/lib:/usr/X/lib" ;  export $LD_LIBRARY_PATH ; 
	set $JAVA_HOME="/usr/java"; export $JAVA_HOME ;
	create -f none /usr/java/bin/x86at/green_threads/java_g $1 $2 $3 $4 $5 $6 $7 $8 ; 
	set %stack_bounds=no ; signal -i cld poll alrm SIGUSR1 ; 
	run -u linker_md.c@199

The setting of the CLASSPATH, LD_LIBRARY_PATH, and JAVA_HOME environment variables follows the discussion above. The create -f none command tells the debugger to ignore child processes caused by forks done within the X Window System libraries. The stack_bounds setting avoids spurious warnings due to jitted code being executed. The signal -i command keeps the debugger from stopping on innocuous signals that the JVM handles.

For debugging when the JVM is using native threads, simply change green to native in the above paths. You will probably also want to add a

	set %thread_change=ignore ;
statement as well, depending upon what you are trying to debug.


Java Database Connectivity is a standard SQL database access interface for Java, providing uniform access for Java applications to a wide range of relational databases.

JDK 1.1.8 for SCO contains the SCO implementation of JDBC and includes the SCO JDBC driver. This implementation conforms to the Sun JDBC 1.2 specification. The SCO JDBC implementation is built upon the SCO SQL-Retriever product. For more information on SCO SQL-Retriever, please visit .

There is no need to install the SCO JDBC implementation separately, since it is part of the jdk118 installation. However, you must install the SQL-Retriever product separately if you want to use JDBC.
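As a rough sketch of how an application uses the JDBC 1.x API, consider the following; note that the driver class name, connection URL, table name, and credentials below are placeholders for illustration only, not the actual SCO driver details.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class JdbcSketch {
    // Returns a short status string instead of throwing, so the sketch
    // runs even on a machine with no driver or database installed.
    static String query() {
        try {
            // Load the JDBC driver; this class name is a placeholder,
            // not the actual SCO driver class.
            Class.forName("com.example.JdbcDriver");
            // The URL, user, and password are likewise placeholders.
            Connection con = DriverManager.getConnection(
                    "jdbc:example://dbhost/mydb", "user", "password");
            Statement stmt = con.createStatement();
            ResultSet rs = stmt.executeQuery("SELECT name FROM customers");
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
            con.close();
            return "ok";
        } catch (ClassNotFoundException e) {
            return "driver not installed";
        } catch (SQLException e) {
            return "database error: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(query());
    }
}
```

The same code works against any JDBC 1.x driver; only the driver class name and URL scheme change.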

Additional Implementation Notes

In general, one of the important characteristics of Java is that it behaves in exactly the same fashion on all platforms. However, there are a few areas where it may be useful to know how the JDK has been implemented on SCO platforms. Some of these have already been discussed above; others are described here.

System Properties

If application code needs to determine which SCO platform it is running on, it can query the Java system properties (for example, via java.lang.System.getProperty). Here are some of the values that will be returned on all SCO platforms:

while here are values that are specific to SCO OpenServer Release 5.0.5:


and UnixWare 7 Release 7.1.1:


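Querying these properties from a program looks like the following; the exact values printed depend on the platform it runs on.

```java
// Print the system properties commonly used for platform detection.
public class PlatformProps {
    public static void main(String[] args) {
        String[] keys = { "os.name", "os.arch", "os.version", "java.version" };
        for (int i = 0; i < keys.length; i++) {
            System.out.println(keys[i] + " = " + System.getProperty(keys[i]));
        }
    }
}
```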
Abstract Windowing Toolkit

This implementation uses the X Window System, version X11R6.1, to implement the Java Abstract Windowing Toolkit.

Java heap size defaults

While this hasn't been documented, all previous JDK 1.1.x for SCO releases used default values for the java command's -ms option (8m on SCO OpenServer, 1m elsewhere) and -mx option (8m), which govern the initial and maximum Java heap sizes, that differ from the defaults in Sun's JDK 1.1.x baseline releases.

As of JDK 1.1.8 for SCO, the same default values are used as Sun uses: -ms1m and -mx16m.
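You can observe the effect of these options from within a program, for example by running something like java -ms1m -mx16m HeapInfo (using the 1.1.x option spelling):

```java
public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // totalMemory reflects the current heap size; it starts near the
        // -ms value and can grow up to the -mx limit.
        System.out.println("total heap: " + rt.totalMemory() + " bytes");
        System.out.println("free heap:  " + rt.freeMemory() + " bytes");
    }
}
```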

java -debug

This implementation changes any use of the java -debug command into java_g -debug. This is a historical consequence of supporting the Java WorkShop product.


This implementation uses an assembly-coded main interpreter loop for faster bytecode execution (however, the debug version java_g uses a C language interpreter), and a just-in-time compiler to further improve performance.


When a Java program requests fonts that don't exactly match those available on a given SCO platform, the fonts displayed may look poor, like a smaller font scaled up. Why does this happen?

As an example, if a 28-point sansserif font is requested, the JDK 1.1.8 /usr/java/lib/ entry for sansserif.plain.0 will specify a linotype font. On SCO OpenServer Release 5 there is no linotype font, and even if adobe is substituted for linotype, no 28-point adobe font is available either.

The file will specify a couple of alternative fonts to look for:


Java will try sansserif.1 and sansserif.2 if sansserif(.plain).0 cannot be found.

If none of the above can be found, JDK 1.1.8 will try the closest available fonts:

  2. change POINT_SIZE to PIXEL_SIZE
  3. change FAMILY_NAME to *
  5. change PIXEL_SIZE +1/-1/+2/-2...+4/-4
  6. default font pattern, which is " -*-helvetica-*-*-*-*-*-*-12-*-*-*-iso8859-1".

Some SCO users have found that the file Sun (and SCO) shipped in the JDK 1.1.3 series seems to work better than the JDK 1.1.7 and 1.1.8 versions. You can swap in the older file and see if it works better for you. You can also change your application to request a supported point size, for example 24 rather than 28. Also, the SCO X Window System libraries do support some scalable fonts, so it is possible to add a font entry that will display the point size you want.

Note that SCO has made one modification to the Sun file in this JDK 1.1.8 release: to change all occurrences of "lucida sans" to "lucida". This fixes a pervasive problem with default text fonts having an ugly appearance.
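For illustration, this is the kind of request that triggers the substitution logic described above; the logical font name is mapped to platform fonts through the font properties file at run time.

```java
import java.awt.Font;

// Request a 28-point logical font. If no exact platform match exists,
// the AWT substitutes per the font properties file, which may scale a
// smaller font up and look poor.
public class FontRequest {
    public static void main(String[] args) {
        Font f = new Font("SansSerif", Font.PLAIN, 28);
        System.out.println(f.getName() + ", " + f.getSize() + " points");
    }
}
```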

You may find useful information on these and other font issues from Sun at


This release of JDK 1.1.8 for SCO has passed the Sun Java Compatibility Kit (JCK) 1.1.8 test suite, which is the most recent version of JCK that is applicable to the Sun JDK 1.1.8 baseline.

SCO is committed to maintaining Java application compatibility across all platforms. SCO does not superset or subset the Java APIs as defined by Sun.

Known Problems

This section describes known problems and limitations in the port of JDK 1.1.8 to SCO platforms. For known problems with the Sun JDK 1.1.x releases themselves, see the list at the Sun website.

  1. On some SCO platforms, the X11R6 implementation is currently built to only use TCP/IP as a connection mechanism. This means that even when working locally, you may need to issue an xhost +your_machine_name command.

  2. Large file support (for files > 2GB in size) is not yet present in the package, nor anywhere else in the JDK.

  3. In certain AWT applications, rapid use of clipboard operations or modal dialog boxes can cause a hang. This can be prevented by defining the AWT_SINGLE_MODALWAIT environment variable before running the JVM, which forces modal dialog boxes to operate one at a time. However, doing this may cause other valid AWT applications, ones that expect to use multiple modal dialog boxes at once, to hang. Therefore this environment variable should not be defined unless you are sure that it will solve a problem you are having.

  4. In a Japanese locale, AWT Motif window titles show broken Japanese text. Also, the font height of labeled objects is not large enough to show Japanese characters.

  5. Some multiple-level executable/library structures that use JNI Invocation will not work correctly. In particular, an a.out that does a dlopen of a shared library that in turn invokes the Java virtual machine will not work. An a.out that is linked with -luser but not -ljava, and that calls a shared library that in turn invokes the Java virtual machine, will also not work. Such an a.out must always be linked against -ljava itself. (See /usr/java/demo/jni_invoc_demo for the full set of system libraries to link against.)

  6. When the JIT is running and old-style JDK 1.0.2 native methods (NMI) are being used, the first argument to a static native method (which represents the this reference) will not be NULL as it should be. The impact of this is negligible, though, since it is very unlikely that NMI native methods would dynamically test against this argument to know whether they are static or not.

  7. When the JIT is running, Java frames are allocated on the "native" thread stack (the size of which is governed by the java -ss option), while when the JIT is not running, Java frames are allocated on the "Java" thread stack (the size of which is governed by the java -oss option). Since these stacks have different default sizes, it is possible for an application to experience a StackOverflowError exception when the JIT is running but not otherwise. If this happens, adjust the native thread stack size accordingly.

  8. Support for multicast routing is not available on SCO OpenServer Release 5. This is not a problem with JDK 1.1.8 per se; SCO OpenServer Release 5 simply doesn't support it.

  9. On SCO OpenServer Release 5.0.5, after a comprehensive multicast Java application is run, the MulticastSocket.leaveGroup method may not relinquish the socket's membership in the group, due to some unknown operating system or hardware related problem. Rebooting the machine clears this condition.

  10. On SCO OpenServer Release 5.0.5, Swedish language/keyboard settings do not by themselves allow Swedish-specific characters to be input into a Java AWT application. Three additional things need to be done:
      - Modify /udk/usr/X/lib/X11/locale/locale.alias to have the proper OpenServer entry for Swedish in it: "swedish_sweden.8859 sv_SE.ISO8859-1".
      - Make a symbolic link /udk/usr/lib/locale/swedish_sweden.8859 to /udk/usr/lib/locale/sv.
      - Set and export the environment variable LANG to swedish_sweden.8859.
      Similar considerations may apply for other languages. This is not a problem with JDK 1.1.8 per se, but rather with the OSRcompat package and locale support.

  11. Normally C and C++ applications on SCO operating systems, and the system libraries on those operating systems, can survive the dereference of a null or low (in the first 4 KB page of memory) pointer variable. (On UnixWare 7 this behavior can be changed by the nullptr disable command.) This is also true of the JVM when the JIT is not running. However, in order to efficiently implement Java language semantics, when the JIT starts it protects the first page of memory against both read and write access. Thus, if there is a null or low pointer dereference in either application native method code or in an SCO operating system library, the JVM will get a SIGSEGV core dump when the JIT is on, but will appear to work correctly when the JIT is off. This is not a bug in the JVM or the JIT -- the bug is in the application native method or the SCO system library that is doing the bad pointer dereference, and the UDK debugger stack trace on the core file will show that.

  12. It has been reported that starting the JVM with the -debug option may sometimes lead to a significant memory leak within the Java heap, and that this might be related to classes being unloaded and garbage collected, but this has not been verified.

  13. Two (probably different) SIGSEGV core dumps during garbage collection have been observed during extensive load testing by beta users of this release. These core dumps remain unexplained.

  14. There is a known but apparently very infrequent problem in libthread where a signal intended for one multiplexed thread is instead delivered to another. This can cause the JVM to fail in various ways, most likely by hanging during garbage collection. This problem has been observed once during extensive load testing by beta users of this release.
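The stack-size sensitivity described in item 7 can be seen with a small sketch like the following; the depth reached before the StackOverflowError depends on which thread stack is in use and how large it is.

```java
// Recurse until the thread stack overflows. With the JIT on, the limit
// is the native thread stack (-ss); with it off, the Java thread
// stack (-oss).
public class DeepRecursion {
    static int depth = 0;

    static void recurse() {
        depth++;
        recurse();
    }

    public static void main(String[] args) {
        try {
            recurse();
        } catch (StackOverflowError e) {
            System.out.println("overflowed at depth " + depth);
        }
    }
}
```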

See also the restrictions and limitations on native methods.


Last Updated: 11/15/2000

Copyright 2000 The Santa Cruz Operation, Inc. All Rights Reserved.

SCO, SCO OpenServer, and UnixWare are registered trademarks of The Santa Cruz Operation, Inc. in the U.S.A. and other countries. Sun, Sun Microsystems, Solaris, Java, and JDK are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States and other countries, and are used under license. NonStop is a registered trademark of Compaq Computer Corporation. X Window System is a trademark of the Massachusetts Institute of Technology.

The Santa Cruz Operation, Inc. reserves the right to change or modify any of the product or service specifications or features described herein without notice. This document is for information only. No express or implied representations or warranties are made in this document.