User’s Guide for VirtualGL 2.0.1 and TurboVNC 0.3.3

Intended audience: System Administrators, Graphics Programmers, Researchers, and others with knowledge of the Linux or Solaris operating systems, OpenGL and GLX, and X-Windows.


1 Legal Information

This document and all associated illustrations are licensed under the Creative Commons Attribution 2.5 License. Any works which contain material derived from this document must cite The VirtualGL Project as the source of the material and list the current URL for the VirtualGL web-site.

This product includes software developed by the OpenSSL Project for use in the OpenSSL Toolkit (http://www.openssl.org/). Further information is contained in LICENSE-OpenSSL.txt, which can be found in the same directory as this documentation.

VirtualGL is licensed under the wxWindows Library License, v3, a derivative of the GNU Lesser General Public License (LGPL).


2 Overview

VirtualGL is an open source package which gives any Unix or Linux remote display software the ability to run OpenGL applications with full 3D hardware acceleration. Some remote display software, such as VNC, lacks the ability to run OpenGL applications entirely. Other remote display software forces OpenGL applications to use a slow, software-only OpenGL renderer, to the detriment of both performance and compatibility. And running OpenGL applications using the traditional remote X-Windows approach causes all of the OpenGL commands and 3D data to be sent over the network and rendered on the client machine, which is not a tenable proposition unless the data is relatively small and static, the network is fast, and the OpenGL application is specifically tuned for a remote X-Windows environment.

With VirtualGL, the OpenGL commands and 3D data are instead redirected to a 3D graphics accelerator on the server machine, and only the rendered 3D images are sent to the client machine. VirtualGL thus “virtualizes” 3D graphics hardware, allowing it to be co-located in the “cold room” with compute and storage resources. VirtualGL also allows 3D graphics hardware to be shared among multiple users, and it provides real-time performance on even the most modest of networks. This makes it possible for large, noisy, hot 3D workstations to be replaced with laptops or even thinner clients, but more importantly, it eliminates the workstation and the network as barriers to data size. Users can now visualize gigabytes and gigabytes of data in real time without needing to cache any of the data locally or sit in front of the machine that is rendering the data.

VirtualGL has two basic modes of operation: “Direct” Mode and “Raw” Mode. In both modes, a separate X-Windows server (or X-Windows proxy) is used to display the application’s GUI and to provide keyboard/mouse interaction.

1. “Direct” Mode
Direct Mode is most often used whenever the X server is located across the network from the graphics server, for instance if the X server is running on the user’s desktop machine. In this case, the 3D application sends X11 commands across the network to the X server in order to display the application GUI, and the application receives X11 events back from the X server in order to respond to keyboard and mouse interaction from the user. Normally, a 3D application would also send GLX commands across the network to the X server in order to establish an OpenGL rendering context on the client machine. Such an “indirect” OpenGL context could then be used to tunnel OpenGL commands and 3D data inside the X-Windows protocol stream. But VirtualGL instead intercepts the GLX commands from the application so that it can force the OpenGL rendering context to be established in an invisible pixel buffer (“Pbuffer”) on a 3D graphics card in the server machine.

VirtualGL also monitors buffer swaps and other commands from the application in order to determine when the application has finished rendering a frame. When such an end-of-frame trigger is detected, VirtualGL reads back the rendered frame from the 3D graphics card, compresses it using a high-speed image codec, and sends the compressed image directly to the client machine using a dedicated TCP socket. A separate VirtualGL Client application runs on the client machine; it decompresses the compressed frame and composites it into the appropriate X window.
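The interception described above is typically achieved by preloading a shim library into the application process. The sketch below only composes such a launch command; the faker library name (librrfaker.so) is an assumption for illustration, and in practice the vglrun script sets all of this up for you.

```shell
# Hedged sketch of an LD_PRELOAD-style interposer launch. The faker
# library name is an assumption for illustration; in practice, vglrun
# sets up the environment and execs the application for you.
FAKER_LIB="librrfaker.so"   # hypothetical GLX-interposing library
VGL_DISPLAY=":0"            # X display that owns the server's 3D card

# A launcher would export LD_PRELOAD and exec the 3D application, so
# that its GLX calls are intercepted and redirected into a Pbuffer.
# Here the command line is only composed, not executed:
launch_cmd="LD_PRELOAD=$FAKER_LIB VGL_DISPLAY=$VGL_DISPLAY glxgears"
echo "$launch_cmd"
```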

Direct Mode is the fastest solution for running VirtualGL on a local area network, and it provides the same usability as running the 3D application “locally” on a workstation. Direct Mode does not work particularly well on a high-latency or low-bandwidth network, due to its reliance on the remote X-Windows protocol for displaying the application’s GUI, but it performs well on even a modest laptop with 802.11g wireless or 100 Megabit Ethernet.

Figure 2.1: VirtualGL in Direct Mode

2. “Raw” Mode
In this mode, VirtualGL does not compress the rendered 3D images itself but rather sends them in uncompressed form to an X server. This is most useful in conjunction with an “X proxy”, which can be any of a number of Unix remote display applications, such as VNC. These X proxies are essentially “virtual” X servers. They appear to the application to be a normal X server, but they perform X11 rendering to a virtual framebuffer in the server machine’s memory rather than to a real hardware framebuffer. This allows the X proxy to send only images to the client machine rather than chatty X-Windows commands.

As with Direct Mode, VirtualGL intercepts the GLX calls from the application and thus forces the application to render into an OpenGL Pbuffer located on the server machine’s graphics card. VirtualGL also reads back the rendered images, as with Direct Mode. But in Raw Mode, VirtualGL draws the rendered 3D images into the X server as uncompressed 2D bitmaps. If the X server is really an X proxy, then VirtualGL relies on this X proxy to compress the images and send them to the client(s). Since the use of an X proxy eliminates the need to send X-Windows commands over the network, this is the best means of using VirtualGL when the network has high latency or low bandwidth.

The VirtualGL Project provides an accelerated version of VNC, called “TurboVNC”, which is meant to be used with VirtualGL in Raw Mode on such networks. TurboVNC also provides rudimentary collaboration capabilities, allowing multiple users to simultaneously interact with the same 3D application. But whereas Direct Mode produces a similar user experience to running the 3D application “locally”, VNC and most other X proxies require the user to interact with a “desktop in a window”, which is not a completely seamless experience.

Raw Mode, in conjunction with an X proxy, is typically used to run data-intensive 3D applications in a “cold room” and remotely interact with these applications from a PC or laptop located in another city. However, Raw Mode can also be used to transmit images directly to the client machine, if the network is sufficiently fast (Gigabit Ethernet, for instance.)
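As a concrete (hedged) illustration, a Raw Mode session typically amounts to pointing the application at the X proxy’s display rather than at a remote X server. The display number and the VGL_COMPRESS value below are assumptions for illustration; the configuration chapters later in this guide document the real settings.

```shell
# Hedged sketch: directing a 3D application into a TurboVNC session so
# that VirtualGL runs in Raw Mode. The display number and VGL_COMPRESS
# value are assumptions for illustration.
DISPLAY=":1"        # the TurboVNC X proxy session, not a remote X server
VGL_COMPRESS="0"    # assumed setting for uncompressed (raw) transport
export DISPLAY VGL_COMPRESS
echo "vglrun glxgears  # frames would be drawn raw into display $DISPLAY"
```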

Figure 2.2: VirtualGL in Raw Mode with an X proxy on the graphics server

Figure 2.3: VirtualGL in Raw Mode with an X proxy on a different server

Figure 2.4: VirtualGL in Raw Mode with no X proxy


3 System Requirements

3.1 Linux/x86

Server (x86):
  • Recommended CPU: Pentium 4, 1.7 GHz or faster (or equivalent)
    • For optimal performance, the processor should support SSE2 extensions.
    • Dual processors recommended
Server (x86-64):
  • Recommended CPU: Pentium 4/Xeon with EM64T, or AMD Opteron or Athlon64, 1.8 GHz or faster
    • For optimal performance with 64-bit VirtualGL, the processor should support SSE3 extensions. Only newer AMD 64-bit processors (mid-2005 and later) support SSE3.
    • Dual processors recommended
Server (x86 and x86-64):
  • Graphics: Any decent 3D graphics card that supports Pbuffers
    • Tested with various nVidia cards
    • Install the vendor drivers for the server’s 3D graphics card. Do not use the drivers that ship with Linux, as these do not provide 3D acceleration or Pbuffer support.
Client:
  • Recommended CPU: Pentium III or Pentium 4, 1.0 GHz or faster (or equivalent)
  • Graphics: Any graphics card with decent 2D performance
    • If using a 3D graphics card, install the vendor drivers for that 3D graphics card.
All systems:
  • Recommended O/S: Any distribution in the RedHat or SuSE families (including Fedora, CentOS, and White Box)
    • Specifically tested with RedHat Enterprise Linux 2.1, RedHat/CentOS Enterprise Linux 3, 4, & 5 (32-bit and 64-bit), and SuSE Linux Enterprise 9 & 10 (32-bit and 64-bit)
  • Other Software: X server configured to export True Color (24-bit or 32-bit) visuals

3.2 Linux/Itanium

VirtualGL should build and run on Itanium Linux, but it has not been thoroughly tested. Contact us if you encounter any difficulties.

3.3 Solaris/x86

Server:
  • Recommended CPU: Pentium 4/Xeon with EM64T, or AMD Opteron or Athlon64, 1.8 GHz or faster
    • Dual processors recommended
  • Graphics: nVidia 3D graphics card
  • O/S: Solaris 10 or higher
  • Other Software:
    • Sun mediaLib (v2.4 or higher recommended *)
    • Solaris Patch 118345-04 (or later)
    • X server configured to export True Color (24-bit or 32-bit) visuals
Client:
  • Recommended CPU: Pentium III or Pentium 4, 1.0 GHz or faster (or equivalent)
  • Graphics: Any graphics card with decent 2D performance
  • O/S: Solaris 10 or higher
  • Other Software:
    • Sun mediaLib (v2.4 or higher recommended *)
    • X server configured to export True Color (24-bit or 32-bit) visuals

* Solaris 10/x86 comes with mediaLib pre-installed, but it is strongly recommended that you upgrade this version of mediaLib to at least 2.4. This will greatly increase the performance of Solaris/x86 VirtualGL clients as well as the performance of 32-bit applications on Solaris/x86 VirtualGL servers.

3.4 Solaris/Sparc

Server:
  • Recommended CPU: UltraSPARC III, 900 MHz or faster
    • Dual processors recommended
  • Graphics: Any decent 3D graphics card that supports Pbuffers
  • O/S: Solaris 8 or higher
  • Other Software:
    • Sun mediaLib
    • Sun OpenGL 1.3 or later (1.5 or later required for GLP)
    • If your system does not ship with SSH pre-installed (older Solaris 8 and 9 systems don’t), then download and install an OpenSSH package from Blastwave or http://www.sunfreeware.com/.
    • X server configured to export True Color (24-bit or 32-bit) visuals (if not using GLP)
  • Recommended Patches:
    • OpenGL 1.5: 120812-15 (or later)
    • XVR-2500 driver: 120928-15 (or later)
    • OpenGL 1.3, 32-bit: 113886-41 (or later)
    • OpenGL 1.3, 64-bit: 113887-41 (or later)
Client:
  • Recommended CPU: UltraSPARC III, 900 MHz or faster
  • Graphics: Any graphics card with decent 2D performance
  • O/S: Solaris 8 or higher
  • Other Software:
    • Sun mediaLib
    • Sun OpenGL 1.3 or later recommended if the client has a 3D graphics card installed. If available, the VirtualGL Direct Mode client will use OpenGL to draw images, which improves the client’s performance on Sun 3D graphics cards.
    • If your system does not ship with SSH pre-installed (older Solaris 8 and 9 systems don’t), then download and install an OpenSSH package from Blastwave or http://www.sunfreeware.com/.
    • X server configured to export True Color (24-bit or 32-bit) visuals

3.5 Windows

Client:
  • Recommended CPU: Pentium III or Pentium 4, 1.0 GHz or faster (or equivalent)
  • Graphics: Any graphics card with decent 2D performance
  • O/S: Windows 2000 or later
  • Other Software:
    • Direct Mode only: Hummingbird Exceed 8 or newer required
    • Secure Shell (SSH) client
    • Client display must have a 24-bit or 32-bit color depth (True Color.)

3.6 Additional Requirements for Stereographic rendering

These requirements apply to both the server and the client, except as noted:
  • Linux: 3D graphics card that supports stereo (example: nVidia Quadro) and is configured to export stereo visuals
  • Solaris/x86 and Solaris/Sparc:
    • 3D graphics card that supports stereo (examples: XVR-1200, XVR-2500) and is configured to export stereo visuals
    • Sun OpenGL 1.3 or later
  • Windows (client only):
    • 3D graphics card that supports stereo (examples: nVidia Quadro, 3D Labs Wildcat Realizm) and is configured to export stereo pixel formats
    • Hummingbird Exceed 3D v8 or newer

3.7 Additional Requirements for Transparent Overlays

These requirements apply to the client:
  • Linux: 3D graphics card that supports transparent overlays (example: nVidia Quadro) and is configured to export overlay visuals
  • Solaris/x86 and Solaris/Sparc:
    • 3D graphics card that supports transparent overlays (examples: XVR-1200, XVR-2500) and is configured to export overlay visuals
    • Sun OpenGL 1.3 or later
  • Windows:
    • 3D graphics card that supports transparent overlays (examples: nVidia Quadro, 3D Labs Wildcat Realizm) and is configured to export overlay pixel formats
    • Hummingbird Exceed 3D v8 or newer

4 Obtaining and Installing VirtualGL

VirtualGL must be installed on any machine that will act as a VirtualGL server or as a VirtualGL Direct Mode client. It is not necessary to install VirtualGL on the client machine if Raw Mode is to be used.

4.1 Installing VirtualGL on Linux

Installing TurboJPEG

  1. Download the TurboJPEG RPM package (turbojpeg-{version}.i386.rpm for 32-bit systems and turbojpeg-{version}.x86_64.rpm for 64-bit systems) from the files area of the VirtualGL SourceForge web-site.

    The 64-bit RPM provides both 32-bit and 64-bit TurboJPEG libraries.

    .tgz packages are provided for users of non-RPM-based Linux distributions. You can use alien to convert these into .deb packages if you prefer.

  2. Log in as root, cd to the directory where you downloaded the RPM package, and issue the following command:
    rpm -U turbojpeg*.rpm
    

Installing VirtualGL

  1. Download the VirtualGL RPM package (VirtualGL-{version}.i386.rpm for 32-bit systems and VirtualGL-{version}.x86_64.rpm for 64-bit systems) from the files area of the VirtualGL SourceForge web-site.

    The 64-bit RPM provides both 32-bit and 64-bit VirtualGL components.

  2. Log in as root, cd to the directory where you downloaded the RPM package, and issue the following commands:
    rpm -e VirtualGL
    rpm -i VirtualGL*.rpm
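
If you script the download-and-install step, the correct package file name can be derived from the machine architecture. The pattern below comes from step 1; the version string is only an example.

```shell
# Derive the RPM file name for this machine's architecture, following
# the naming pattern in step 1 (the version string is an example).
version="2.0.1"
case "$(uname -m)" in
  x86_64) pkg="VirtualGL-${version}.x86_64.rpm" ;;
  *)      pkg="VirtualGL-${version}.i386.rpm" ;;
esac
echo "$pkg"
```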
    

4.2 Installing VirtualGL on Solaris

  1. Download the VirtualGL Solaris package (SUNWvgl-{version}.pkg.bz2 for Sparc and SUNWvgl-{version}-x86.pkg.bz2 for x86) from the files area of the VirtualGL SourceForge web-site.

    Both packages provide both 32-bit and 64-bit VirtualGL components.

  2. Log in as root, cd to the directory where you downloaded the package, and issue the following commands:
    pkgrm SUNWvgl
    
    (answer “Y” when prompted.)
    bzip2 -d SUNWvgl-{version}.pkg.bz2
    pkgadd -d SUNWvgl-{version}.pkg
    
    Select the SUNWvgl package (usually option 1) from the menu.

VirtualGL for Solaris installs into /opt/SUNWvgl.

4.3 Installing VirtualGL on Windows (Client Only)

  1. Download the VirtualGL Windows installer package (VirtualGL-{version}.exe) from the files area of the VirtualGL SourceForge web-site.
  2. Run the VirtualGL installer. The installation of VirtualGL should be self-explanatory. The only configuration option is the directory into which you want the files to be installed.

4.4 Installing VirtualGL from Source

If you are using a non-RPM-based distribution of Linux or another platform for which there is not a pre-built VirtualGL binary package available, then log in as root, download the VirtualGL source tarball (VirtualGL-{version}.tar.gz) from the files area of the VirtualGL SourceForge web-site, uncompress it, cd vgl, and read the contents of BUILDING.txt for further instructions on how to build and install VirtualGL from source.
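The unpack sequence can be rehearsed anywhere by synthesizing a stand-in tarball with the layout the text describes; the version string is an example, and a real build would of course use the tarball from SourceForge and then follow BUILDING.txt.

```shell
# Rehearse the unpack steps with a synthesized stand-in tarball
# (the version string is an example; use the real SourceForge tarball
# and follow BUILDING.txt for an actual build).
workdir=$(mktemp -d) && cd "$workdir"
mkdir vgl && echo "build instructions" > vgl/BUILDING.txt
tar czf VirtualGL-2.0.1.tar.gz vgl && rm -rf vgl

# The steps from the text: uncompress, cd vgl, read BUILDING.txt
tar xzf VirtualGL-2.0.1.tar.gz
cd vgl
contents=$(cat BUILDING.txt)
echo "$contents"
```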

4.5 Uninstalling VirtualGL

Linux

As root, issue the following command:

rpm -e VirtualGL

Solaris

As root, issue the following command:

pkgrm SUNWvgl

Answer “yes” when prompted.

Windows

Use the Add or Remove Programs applet in the Control Panel.


5 Obtaining and Installing TurboVNC

TurboVNC must be installed on any machine that will act as a TurboVNC server or client. It is not necessary to install TurboVNC to use VirtualGL in Direct Mode. Also, TurboVNC need not necessarily be installed on the same server as VirtualGL.

5.1 Installing TurboVNC on Linux

Installing TurboJPEG

  1. Download the TurboJPEG RPM package (turbojpeg-{version}.i386.rpm for 32-bit systems and turbojpeg-{version}.x86_64.rpm for 64-bit systems) from the files area of the VirtualGL SourceForge web-site.

    The 64-bit RPM provides both 32-bit and 64-bit TurboJPEG libraries.

    .tgz packages are provided for users of non-RPM-based Linux distributions. You can use alien to convert these into .deb packages if you prefer.

  2. Log in as root, cd to the directory where you downloaded the RPM package, and issue the following command:
    rpm -U turbojpeg*.rpm
    

Installing TurboVNC

  1. Download the TurboVNC RPM package (turbovnc-{version}.i386.rpm) from the files area of the VirtualGL SourceForge web-site.
  2. Log in as root, cd to the directory where you downloaded the RPM package, and issue the following command:
    rpm -U turbovnc*.rpm
    

5.2 Installing TurboVNC on Solaris

  1. Download the TurboVNC Solaris package (SUNWtvnc-{version}.pkg.bz2 for Sparc and SUNWtvnc-{version}-x86.pkg.bz2 for x86) from the files area of the VirtualGL SourceForge web-site.
  2. Log in as root, cd to the directory where you downloaded the package, and issue the following commands:
    pkgrm SUNWtvnc
    
    (answer “Y” when prompted.)
    bzip2 -d SUNWtvnc-{version}.pkg.bz2
    pkgadd -d SUNWtvnc-{version}.pkg
    
    Select the SUNWtvnc package (usually option 1) from the menu.

TurboVNC for Solaris installs into /opt/SUNWtvnc.

5.3 Installing TurboVNC on Windows (Client Only)

  1. Download the TurboVNC Windows installer package (TurboVNC-{version}.exe) from the files area of the VirtualGL SourceForge web-site.
  2. Run the TurboVNC installer. The installation of TurboVNC should be self-explanatory. The only configuration option is the directory into which you want the files to be installed.

5.4 Installing TurboVNC from Source

If you are using a non-RPM-based distribution of Linux or another platform for which there is not a pre-built TurboVNC binary package available, then log in as root, download the TurboVNC source tarball (turbovnc-{version}.tar.gz) from the files area of the VirtualGL SourceForge web-site, uncompress it, cd vnc/vnc_unixsrc, and read the contents of BUILDING.txt for further instructions on how to build and install TurboVNC from source.

5.5 Uninstalling TurboVNC

Linux

As root, issue the following command:

rpm -e turbovnc

Solaris

As root, issue the following command:

pkgrm SUNWtvnc

Answer “yes” when prompted.

Windows

Use the Add or Remove Programs applet in the Control Panel.


6 Configuring a Linux Machine as a VirtualGL Server

6.1 Granting Access to the Server’s X Display

VirtualGL requires access to the server’s 3D graphics card so that it can create off-screen pixel buffers (Pbuffers) and redirect the 3D rendering from applications into these Pbuffers. Unfortunately, accessing a 3D graphics card on Linux currently requires going through an X server. So the only way to share the server’s 3D graphics card among multiple users is to grant those users access to the X server that is running on the 3D graphics card.

It is important to understand the security risks associated with this. Once X display access is granted to a user, there is nothing that would prevent that user from logging keystrokes or reading back images from the X display. Using xauth, one can obtain “untrusted” X authentication keys which prevent such exploits, but unfortunately, those untrusted keys also disallow access to the 3D hardware. So it is necessary to grant full trusted X access to any users that will need to run VirtualGL. Unless you fully trust the users to whom you are granting this access, you should avoid logging in locally to the server’s X display as root unless absolutely necessary.

This section will explain how to configure a VirtualGL server such that select users can run VirtualGL, even if the server is sitting at the login prompt. The basic idea is to call a script (vglgenkey) from the display manager’s startup script. vglgenkey invokes xauth to generate an authorization key for the server’s X display, and it stores this key under /etc/opt/VirtualGL. The VirtualGL launcher script (vglrun) then attempts to read this key and merge it into the user’s .Xauthority file, thus granting the user access to the server’s X display. Therefore, you can control who has access to the server’s X display simply by controlling who has read access to the /etc/opt/VirtualGL directory.

If you prefer, you can also grant access to every authenticated user on the server by replacing the references to vglgenkey below with xhost +localhost.

  1. If the server machine is configured to boot into runlevel 5 (graphical login), then temporarily shut down its X server by issuing
    init 3
    
    as root.
  2. Log in as root from the text console.
  3. Create a new group called vglusers and add any users that need to run VirtualGL to this group.
  4. Create a new directory /etc/opt/VirtualGL and make it readable by the vglusers group. For example:
    mkdir -p /etc/opt/VirtualGL
    chgrp vglusers /etc/opt/VirtualGL
    chmod 750 /etc/opt/VirtualGL
    
  5. If the server machine is configured to boot into runlevel 3 (text login), then configure it to boot into a graphical login by changing the first line of /etc/inittab from

    id:3:initdefault:

    to

    id:5:initdefault:
  6. Add
    vglgenkey
    
    at the top of the display manager’s startup script. The location of this script varies depending on the particular Linux distribution and display manager being used. The following table lists some common locations for this file:

    RedHat 7/8/9 and Enterprise Linux 2.1/3:
      • xdm or kdm: /etc/X11/xdm/Xsetup_0 (replace “0” with the display number of the X server you are configuring)
      • gdm (default display manager on most Linux systems): /etc/X11/gdm/Init/Default (usually this is just symlinked to /etc/X11/xdm/Xsetup_0)
    Enterprise Linux 4 and Fedora 1-4:
      • xdm or kdm: /etc/X11/xdm/Xsetup_0 (replace “0” as above)
      • gdm: /etc/X11/gdm/Init/:0 (usually this is just symlinked to /etc/X11/xdm/Xsetup_0)
    Enterprise Linux 5 and Fedora 5 & 6:
      • xdm or kdm: /etc/X11/xdm/Xsetup_0 (replace “0” as above)
      • gdm: /etc/gdm/Init/Default
    SuSE/United Linux:
      • xdm or kdm: /etc/X11/xdm/Xsetup
      • gdm: /etc/opt/gnome/gdm/Init/Default
  7. If the server is running gdm (the factory default on most Linux systems), then you’ll need to set up gdm to allow TCP connections to the X server. To do this, edit the gdm.conf file and add the following line under the [security] section (or change it if it already exists):
    DisallowTCP=false
    
    See the table below for the location of gdm.conf on various systems.
  8. Unless you know that you absolutely need it, disable the XTEST extension. Disabling XTEST will not prevent a user from logging keystrokes or reading images from the X display, but it will prevent them from inserting key and mouse events and thus hijacking a local X session.

    Disabling XTEST is accomplished by passing an argument of -tst on the command line used to launch the X server. The location of this command line varies depending on the particular Linux distribution and display manager being used. The following table lists some common locations:

    RedHat 7/8/9, Enterprise Linux 2.1/3/4, and Fedora 1-4:
      • xdm or kdm: /etc/X11/xdm/Xservers
      • gdm (default on most Linux systems): /etc/X11/gdm/gdm.conf
    Enterprise Linux 5 and Fedora 5 & 6:
      • xdm or kdm: /etc/X11/xdm/Xservers
      • gdm: /etc/gdm/custom.conf
    SuSE/United Linux:
      • xdm: /etc/X11/xdm/Xservers
      • gdm: /etc/opt/gnome/gdm/gdm.conf
      • kdm: /etc/opt/kde3/share/config/kdm/Xservers

    For xdm-style configuration files, add -tst to the line corresponding to the display number you are configuring. For example:
    :0 local /usr/X11R6/bin/X :0 vt07 -tst
    
    For gdm-style configuration files, add -tst to all lines that appear to be X server command lines. For example:
    StandardXServer=/usr/X11R6/bin/X -tst
    
    [server-Standard]
    command=/usr/X11R6/bin/X -tst -audit 0
    
    [server-Terminal]
    command=/usr/X11R6/bin/X -tst -audit 0 -terminate
    
    [server-Chooser]
    command=/usr/X11R6/bin/X -tst -audit 0
    
  9. Restart the X server by issuing
    init 5
    
    as root.
  10. To check your work, log out of the server, log back in via SSH, and run
    xauth merge /etc/opt/VirtualGL/vgl_xauth_key
    xdpyinfo -display :0
    
    In particular, make sure that XTEST doesn’t show up in the list of extensions if you disabled it above.
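
Steps 4, 7, and 8 above lend themselves to scripting. The sketch below rehearses them against copies in a temporary directory so it can be run harmlessly; the gdm.conf fragment is a simplified stand-in, and GNU sed’s -i option is assumed. Drop the temporary prefix (and add the chgrp vglusers step) to apply the same edits for real.

```shell
# Rehearse steps 4, 7, and 8 on copies in a temp directory.
# Assumes GNU sed (-i); the gdm.conf fragment is a simplified stand-in.
PREFIX=$(mktemp -d)

# Step 4: key directory readable only by its owner and the vglusers group
mkdir -p "$PREFIX/etc/opt/VirtualGL"
chmod 750 "$PREFIX/etc/opt/VirtualGL"
perms=$(ls -ld "$PREFIX/etc/opt/VirtualGL" | cut -c1-10)

# Steps 7-8: allow TCP connections; add -tst to X server command lines
cat > "$PREFIX/gdm.conf" <<'EOF'
[security]
DisallowTCP=true

[server-Standard]
command=/usr/X11R6/bin/X -audit 0
EOF
sed -i -e 's/^DisallowTCP=.*/DisallowTCP=false/' \
       -e 's|^command=/usr/X11R6/bin/X|& -tst|' "$PREFIX/gdm.conf"

tcp_line=$(grep '^DisallowTCP=' "$PREFIX/gdm.conf")
cmd_line=$(grep '^command=' "$PREFIX/gdm.conf")
echo "$perms"
echo "$tcp_line"
echo "$cmd_line"
rm -rf "$PREFIX"
```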

6.2 Device permissions

If you are installing VirtualGL on a server which is running version 1.0-71xx or earlier of the nVidia accelerated GLX drivers, follow the instructions in /usr/share/doc/NVIDIA_GLX-1.0/README regarding setting the appropriate permissions for /dev/nvidia*. This is not necessary with more recent versions of the driver. Run cat /proc/driver/nvidia/version to determine which version of the nVidia driver is installed on your system.
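
To decide whether the permission step applies, you can parse the driver version out of that file. The /proc line format below is an assumption based on typical driver output; on a real server, read /proc/driver/nvidia/version instead of the sample string.

```shell
# Hedged sketch: extract the driver version from a sample
# /proc/driver/nvidia/version line. The line format is an assumption;
# on a real server, read the actual file instead of $sample.
sample="NVRM version: NVIDIA UNIX x86 Kernel Module  1.0-7184  Wed Jul 27 16:53:11 PDT 2005"
ver=$(echo "$sample" | sed 's/.*Module *\([0-9.-]*\).*/\1/')
echo "$ver"   # versions 1.0-71xx and earlier need the /dev/nvidia* step
```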


7 Configuring a Solaris Machine as a VirtualGL Server

7.1 GLP: Using VirtualGL Without an X Server

Sun’s OpenGL library for Sparc systems has a special extension called “GLP” which allows VirtualGL to directly access a 3D graphics card even if there is no X server running on the card. Apart from greatly simplifying the process of configuring the VirtualGL server, GLP also greatly improves the overall security of the VirtualGL server, since it eliminates the need to grant X server access to VirtualGL users. In addition, GLP makes it easy to assign VirtualGL jobs to any graphics card in a multi-card system.

If your system is running Sun OpenGL 1.5 for Sparc/Solaris, it is recommended that you configure it to use GLP:

  1. Log in as root.
  2. Create a new group called vglusers and add any users that need to run VirtualGL to this group.
  3. If the /etc/dt/config directory does not exist, create it.
    mkdir -p /etc/dt/config
    
    Make sure that /etc/dt/config has global read/execute permissions.
  4. Create a file called GraphicsDevices under /etc/dt/config and add any framebuffer device paths in your system (/dev/fbs/kfb0, /dev/fbs/jfb0, etc.) to this file, one device per line. For example:
    touch /etc/dt/config/GraphicsDevices
    for i in /dev/fbs/*[0-9]; do echo $i >>/etc/dt/config/GraphicsDevices; done
    
    You can choose to include only certain framebuffer devices in this file. Only the devices listed in GraphicsDevices will be available for use by VirtualGL.
  5. Grant read access to this file for the vglusers group. For example:
    chgrp vglusers /etc/dt/config/GraphicsDevices
    chmod 640 /etc/dt/config/GraphicsDevices
    

If you wish to make GLP the default for all users of the system, you can add

VGL_DISPLAY=glp
export VGL_DISPLAY

to /etc/profile. This will cause VirtualGL to use the first device specified in /etc/dt/config/GraphicsDevices as the default rendering device. Users can override this default by setting VGL_DISPLAY in one of their startup scripts (such as ~/.profile or ~/.login) or by passing an argument of -d <device> to vglrun when invoking VirtualGL. See Chapter 19 for more details.
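For example, a user could select a different framebuffer for a single session; the device path below is an example, and the devices actually available are whatever is listed in /etc/dt/config/GraphicsDevices.

```shell
# Example per-user override of the GLP rendering device. The device
# path is illustrative; valid paths are the ones listed in
# /etc/dt/config/GraphicsDevices on your system.
VGL_DISPLAY="/dev/fbs/kfb0"
export VGL_DISPLAY
echo "vglrun would now render on $VGL_DISPLAY"
```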

7.2 Granting Access to the Server’s X Display

If you plan to use VirtualGL only with GLP, then you can skip this section.

VirtualGL requires access to the server’s 3D graphics card so that it can create off-screen pixel buffers (Pbuffers) and redirect the 3D rendering from applications into these Pbuffers. Unfortunately, accessing a 3D graphics card on Solaris/x86 systems or on Solaris/Sparc systems without GLP requires going through an X server. On such systems, the only way to share the server’s 3D graphics card among multiple users is to grant those users access to the X server that is running on the 3D graphics card.

It is important to understand the security risks associated with this. Once X display access is granted to a user, there is nothing that would prevent that user from logging keystrokes or reading back images from the X display. Using xauth, one can obtain “untrusted” X authentication keys which prevent such exploits, but unfortunately, those untrusted keys also disallow access to the 3D hardware. So it is necessary to grant full trusted X access to any users that will need to run VirtualGL. Unless you fully trust the users to whom you are granting this access, you should avoid logging in locally to the server’s X display as root unless absolutely necessary.

This section will explain how to configure a VirtualGL server such that select users can run VirtualGL, even if the server is sitting at the login prompt. The basic idea is to call a script (vglgenkey) from the display manager’s startup script. vglgenkey invokes xauth to generate an authorization key for the server’s X display, and it stores this key under /etc/opt/VirtualGL. The VirtualGL launcher script (vglrun) then attempts to read this key and merge it into the user’s .Xauthority file, thus granting the user access to the server’s X display. Therefore, you can control who has access to the server’s X display simply by controlling who has read access to the /etc/opt/VirtualGL directory.

If you prefer, you can also grant access to every authenticated user on the server by replacing the references to vglgenkey below with xhost +localhost.

If your system is using dtlogin as a display manager:

  1. Log in as root.
  2. Create a new group called vglusers and add any users that need to run VirtualGL to this group.
  3. Create a new directory /etc/opt/VirtualGL and make it readable by the vglusers group. For example:
    mkdir -p /etc/opt/VirtualGL
    chgrp vglusers /etc/opt/VirtualGL
    chmod 750 /etc/opt/VirtualGL
    
  4. If the /etc/dt/config directory does not exist, create it.
    mkdir -p /etc/dt/config
    
  5. If /etc/dt/config/Xsetup does not exist, then copy the default Xsetup file from /usr/dt/config to that location:
    cp /usr/dt/config/Xsetup /etc/dt/config/Xsetup
    
  6. Edit /etc/dt/config/Xsetup, and add the following lines to the bottom of the file:
    /opt/SUNWvgl/bin/vglgenkey
    
  7. If /etc/dt/config/Xconfig does not exist, then copy the default Xconfig file from /usr/dt/config to that location:
    cp /usr/dt/config/Xconfig /etc/dt/config/Xconfig
    
  8. Edit /etc/dt/config/Xconfig, and add (or uncomment) the following line:
    Dtlogin*grabServer: False
    
    Explanation
    The Dtlogin*grabServer option restricts X display access to only the dtlogin process. This is an added security measure, since it prevents a user from attaching any kind of sniffer program to the X display even if they have display access. But Dtlogin*grabServer also prevents VirtualGL from using the X display to access the 3D graphics hardware, so this option must be disabled for VirtualGL to work properly.

    If the system you are configuring as a VirtualGL server is also being used as a Sun Ray server, then make these same modifications to /etc/dt/config/Xconfig.SUNWut.prototype. Otherwise, the modifications you just made to /etc/dt/config/Xconfig will be overwritten the next time the system is restarted.

  9. Unless you know that you absolutely need it, disable the XTEST extension. Disabling XTEST will not prevent a user from logging keystrokes or reading images from the X display, but it will prevent them from inserting key and mouse events and thus hijacking a local X session.
    1. If /etc/dt/config/Xservers does not exist, then copy the default Xservers file from /usr/dt/config to that location:
      cp /usr/dt/config/Xservers /etc/dt/config/Xservers
      
    2. Edit /etc/dt/config/Xservers and add an argument of -tst to the line corresponding to the display number you are configuring. For example:
      :0  Local local_uid@console root /usr/openwin/bin/Xsun :0 -nobanner -tst
      

      If the system you are configuring as a VirtualGL server is also being used as a Sun Ray server, then make these same modifications to /etc/dt/config/Xservers.SUNWut.prototype. Otherwise, the modifications you just made to /etc/dt/config/Xservers will be overwritten the next time the system is restarted.

  10. Verify that /etc/dt/config and /etc/dt/config/Xsetup can be executed by all users, and verify that /etc/dt/config/Xconfig and /etc/dt/config/Xservers can be read by all users.
  11. Restart the X server by issuing
    /etc/init.d/dtlogin stop; /etc/init.d/dtlogin start
    
  12. To check your work, log out of the server, log back in via SSH, and run
    /usr/openwin/bin/xauth merge /etc/opt/VirtualGL/vgl_xauth_key
    /usr/openwin/bin/xdpyinfo -display :0
    
    In particular, make sure that XTEST doesn’t show up in the list of extensions if you disabled it above.
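The XTEST portion of that check can be scripted. The following sketch greps a captured extension list; EXT_LIST below is a canned sample, where in practice you would pipe the output of xdpyinfo directly into grep:

```shell
# Sample extension list; in practice use:
#   EXT_LIST=$(/usr/openwin/bin/xdpyinfo -display :0)
EXT_LIST="MIT-SHM
XInputExtension
GLX"
# XTEST appears on its own line in xdpyinfo's extension list
if echo "$EXT_LIST" | grep -q '^XTEST$'; then
    echo "WARNING: XTEST is still enabled"
else
    echo "OK: XTEST is disabled"
fi
```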

If your system is using gdm as a display manager (gdm is available only on Solaris 10 or later):

  1. Log in as root.
  2. Create a new group called vglusers and add any users that need to run VirtualGL to this group.
  3. Create a new directory /etc/opt/VirtualGL and make it readable by the vglusers group. For example:
    mkdir -p /etc/opt/VirtualGL
    chgrp vglusers /etc/opt/VirtualGL
    chmod 750 /etc/opt/VirtualGL
    
  4. Add
    /opt/SUNWvgl/bin/vglgenkey
    
    to the top of the /etc/X11/gdm/Init/Default file.
  5. Edit /etc/X11/gdm/gdm.conf and add the following line under the [security] section (or change it if it already exists):
    DisallowTCP=false
    
  6. Unless you know that you absolutely need it, disable the XTEST extension. Disabling XTEST will not prevent a user from logging keystrokes or reading images from the X display, but it will prevent them from inserting key and mouse events and thus hijacking a local X session.

    Edit /etc/X11/gdm/gdm.conf and add -tst to all lines that appear to be X server command lines. For example:
    StandardXServer=/usr/X11R6/bin/Xorg -tst
    
    [server-Standard]
    command=/usr/X11R6/bin/Xorg -tst -audit 0
    
    [server-Terminal]
    command=/usr/X11R6/bin/Xorg -tst -audit 0 -terminate
    
    [server-Chooser]
    command=/usr/X11R6/bin/Xorg -tst -audit 0
    
  7. Restart gdm by issuing
    svcadm disable gdm2-login; svcadm enable gdm2-login
    
  8. To check your work, log out of the server, log back in via SSH, and run
    /usr/openwin/bin/xauth merge /etc/opt/VirtualGL/vgl_xauth_key
    /usr/openwin/bin/xdpyinfo -display :0
    
    In particular, make sure that XTEST doesn’t show up in the list of extensions if you disabled it above.

7.3 Device Permissions

Whether the server’s 3D graphics card is being accessed through GLP or through an X server, you must perform the following procedure to enable VirtualGL users to access the framebuffer device(s):

  1. Edit /etc/logindevperm and comment out the “frame buffers” line. For example:
    # /dev/console    0600    /dev/fbs/*              # frame buffers
    
  2. Change the permissions and group for /dev/fbs/* to allow write access to anyone who will need to use VirtualGL. For example:
    chmod 660 /dev/fbs/*
    chown root /dev/fbs/*
    chgrp vglusers /dev/fbs/*
    

Explanation: Normally, when someone logs into a Solaris machine, the system will automatically assign ownership of the framebuffer devices to that user and set the permissions for the framebuffer devices to those specified in /etc/logindevperm. The default setting in /etc/logindevperm disallows anyone from using the framebuffer devices except the user that is logged in. But in order to run VirtualGL, a user needs write access to the framebuffer devices. So in order to make the framebuffer a shared resource, it is necessary to disable the login device permissions mechanism for the framebuffer devices and manually set the owner and group for these devices such that any VirtualGL users can write to them.

Note that the framebuffer device permissions control not only remote execution of OpenGL applications but also local execution of OpenGL applications. If it is necessary for users outside of the vglusers group to run OpenGL applications on the VirtualGL server, then set the permissions on /dev/fbs/* to 666 rather than 660.
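The 660-versus-666 decision above can be expressed as a small sketch (ALLOW_ALL_USERS is a hypothetical flag, set to 1 if users outside the vglusers group need local OpenGL access):

```shell
# Hypothetical flag: 1 = users outside vglusers need local OpenGL access
ALLOW_ALL_USERS=0
if [ "$ALLOW_ALL_USERS" -eq 1 ]; then MODE=666; else MODE=660; fi
# Print the command that would be run (as root) on the framebuffer devices
echo "chmod $MODE /dev/fbs/*"
```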

7.4 SSH Server Configuration

The server’s SSH daemon should have the X11Forwarding option enabled and the UseLogin option disabled. These options are configured in sshd_config, the location of which varies depending on your distribution of SSH. Solaris 10 generally keeps it in /etc/ssh, whereas Blastwave keeps it in /opt/csw/etc and SunFreeware keeps it in /usr/local/etc.
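These two settings correspond to the following sshd_config fragment (a sketch; consult your SSH distribution's defaults for any other options):

```
X11Forwarding yes
UseLogin no
```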


8 Configuring a Windows Machine as a VirtualGL Direct Mode Client

  1. Install Hummingbird Exceed if it isn’t already installed.
  2. Add the Exceed path (example: C:\Program Files\Hummingbird\Connectivity\9.00\Exceed) to the system PATH environment variable if it isn’t already there.
  3. Install a Secure Shell (SSH) client. If Cygwin is already installed, then you can use the SSH client included in Cygwin. Otherwise, download and install SSHWindows or PuTTY. ssh.exe or putty.exe should be somewhere in your PATH.

8.1 Optimizing Exceed

Disabling Pixel Format Conversion and Backing Store

  1. Load Exceed XConfig (right-click on the Exceed taskbar icon, then select Tools–>Configuration.)
  2. Open the “X Server Protocol” applet in XConfig.

    If you are using the “Classic View” mode of XConfig, open the “Protocol” applet instead.

  3. In the “X Server Protocol” applet, select the “Protocol” tab and make sure that “Use 32 bits per pixel for true color” is not checked.

    exceed1
  4. Click “Validate and Apply Changes.” If XConfig asks whether you want to perform a server reset, click “Yes.”
  5. Open the “Other Server Settings” applet in XConfig.

    If you are using the “Classic View” mode of XConfig, open the “Performance” applet instead.

  6. Select the “Performance” tab and make sure that “Default Backing Store” is set to “None.”

    exceed3
  7. Click “Validate and Apply Changes.” If XConfig asks whether you want to perform a server reset, click “Yes.”

Enabling MIT-SHM

VirtualGL has the ability to take advantage of the MIT-SHM extension in Hummingbird Exceed to accelerate image drawing on Windows. This can improve the overall performance of the VirtualGL pipeline by as much as 20% in some cases.

The bad news is that this extension has some issues in earlier versions of Exceed. If you are using Exceed 8 or 9, you will need to obtain the following patches from the Hummingbird support site:

Product: Hummingbird Exceed 8.0
  Patches required: hclshm.dll v9.0.0.1 (or higher), xlib.dll v9.0.0.3 (or higher), exceed.exe v8.0.0.28 (or higher)
  How to obtain: Download all patches from the Hummingbird support site. (Hummingbird WebSupport account required)

Product: Hummingbird Exceed 9.0
  Patches required: hclshm.dll v9.0.0.1 (or higher), xlib.dll v9.0.0.3 (or higher), exceed.exe v9.0.0.9 (or higher)
  How to obtain: exceed.exe can be patched by running Hummingbird Update. All other patches must be downloaded from the Hummingbird support site. (Hummingbird WebSupport account required)

No patches should be necessary for Exceed 10 and above.

Next, you need to enable the MIT-SHM extension in Exceed:

  1. Load Exceed XConfig (right-click on the Exceed taskbar icon, then select Tools–>Configuration.)
  2. Open the “X Server Protocol” applet in XConfig.

    If you are using the “Classic View” mode of XConfig, open the “Protocol” applet instead.

  3. Select the “Extensions” tab and make sure that “MIT-SHM” is checked.

    exceed2
  4. Click “Validate and Apply Changes.” If XConfig asks whether you want to perform a server reset, click “Yes.”

8.2 Installing the VirtualGL Client as a Windows Service

The VirtualGL Windows Client can be installed as a Windows service (and subsequently removed) using the links provided in the “VirtualGL Client” start menu group. Once installed, the service can be started from the Services applet in the Control Panel (located under “Administrative Tools”) or by invoking

net start vglclient

from a command prompt. The service can be subsequently stopped by invoking

net stop vglclient

If you wish to install the client as a service and have it listen on a port other than the default (4242 for unencrypted connections or 4243 for SSL connections), then you will need to install the service manually from the command line.

vglclient -?

gives a list of the relevant command-line options.


9 Using VirtualGL in Direct Mode

9.1 Direct Mode with X11 Forwarding

Performance

Optimal

Security Notes

X11 traffic is encrypted, but the VirtualGL image stream is left unencrypted to maximize performance.

Procedure

  1. Start the X server/Exceed if it isn’t started already.
  2. Start the VirtualGL Client program:
    Linux clients
    Open a terminal window and type
    vglclient
    
    Solaris clients
    Open a terminal window and type
    /opt/SUNWvgl/bin/vglclient
    
    Windows clients
    If the VirtualGL client has not been started as a service, then start it manually by selecting Start VirtualGL Client in the VirtualGL Client Start Menu group.
  3. Open a new Command Prompt/terminal window.
  4. Linux and Solaris clients
    In the new terminal window, type
    echo $DISPLAY
    
    and make a note of the value.
    Windows clients
    In the new Command Prompt window, type
    set DISPLAY=localhost:{n}.0
    
    Replace {n} with the display number that Exceed is occupying. To obtain this, hover over the Exceed icon in the taskbar and make a note of the value it displays (usually :0.0, unless you have multiple Exceed sessions running.)
  5. In the same Command Prompt/terminal window, open a Secure Shell (SSH) session into the VirtualGL server by typing:
    ssh -X {user}@{server}
    
    Replace {user} with your user account name on the VirtualGL server and {server} with the hostname or IP address of that server.

    If using PuTTY, replace ssh with putty in the above example.

  6. If the X server on your client machine is using a display number of 0 (usually the case), then you can skip this step. Otherwise, set the VGL_CLIENT environment variable on the VirtualGL server to point to the client’s X display:
    export VGL_CLIENT={client}:{n}.0
    
    or
    setenv VGL_CLIENT {client}:{n}.0
    
    Replace {client} with the hostname or IP address of your client machine (echo $SSH_CLIENT if you don’t know this) and {n} with the display number of the client machine’s X display (obtained in Step 4.)
  7. In the SSH session, start the 3D application using VirtualGL:
    Linux server
    vglrun [vglrun options] {application_executable_or_script} {arguments}
    
    Solaris server
    /opt/SUNWvgl/bin/vglrun [vglrun options] {application_executable_or_script} {arguments}
    
    Consult Chapter 19 for more information on vglrun command line options.
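As a sketch of Step 6, the VGL_CLIENT value can be built from the SSH_CLIENT variable that the SSH daemon sets in the server-side session. The SSH_CLIENT value below is a stand-in example, and the client's display number is assumed to be 0:

```shell
# Stand-in value; in a real SSH session this variable is already set
# to "client_ip client_port server_port".
SSH_CLIENT="192.168.0.10 50000 22"
CLIENT_IP=${SSH_CLIENT%% *}           # first field: the client's IP address
export VGL_CLIENT="${CLIENT_IP}:0.0"  # assumes client display number 0
echo "$VGL_CLIENT"
```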

9.2 Direct Mode with a Direct X11 Connection

Performance

Optimal

Security Notes

Procedure

  1. Start the X server/Exceed if it isn’t started already.
  2. Start the VirtualGL Client program:
    Linux clients
    Open a terminal window and type
    vglclient
    
    Solaris clients
    Open a terminal window and type
    /opt/SUNWvgl/bin/vglclient
    
    Windows clients
    If the VirtualGL client has not been started as a service, then start it manually by selecting Start VirtualGL Client in the VirtualGL Client Start Menu group.
  3. Open a new Command Prompt/terminal window.
  4. Linux and Solaris clients
    In the new terminal window, type
    echo $DISPLAY
    
    and make a note of the value.
    Windows clients
    Hover over the Exceed icon in the taskbar, and make a note of the display number that Exceed is occupying (usually :0.0, unless you have multiple Exceed sessions running.)
  5. Linux and Solaris clients
    In the same terminal window, type
    xhost +{server}
    
    Replace {server} with the hostname or IP address of the VirtualGL server.
    Windows clients
    Configure Exceed to grant display access to any VirtualGL servers that you plan to use.
    1. Load Exceed XConfig (right-click on the Exceed taskbar icon, then select Tools–>Configuration.)
    2. In XConfig, open the “Security Access Control and System Administration” applet (if using Category View) or the “Security” applet (if using Classic View.)
    3. Select “File” under “Host Access Control List”, then click the “Edit” button. This opens xhost.txt in Notepad. exceed5
    4. Add the hostnames or IP addresses of any VirtualGL servers you plan to use to this file (one per line), save the file, and exit Notepad.
    5. Back in XConfig, click “Validate and Apply Changes.” If prompted to reset the X server, click “Yes.”
  6. In the Command Prompt/terminal window, open a Secure Shell (SSH) session into the VirtualGL server by typing:
    ssh {user}@{server}
    
    Replace {user} with your user account name on the VirtualGL server and {server} with the hostname or IP address of that server.

    If using PuTTY, replace ssh with putty in the above example.

  7. If the X server on your client machine is using a display number of 0 (usually the case), then you can skip this step. Otherwise, set the DISPLAY environment variable on the VirtualGL server to point to the client’s X display:
    export DISPLAY={client}:{n}.0
    
    or
    setenv DISPLAY {client}:{n}.0
    
    Replace {client} with the hostname or IP address of your client machine (echo $SSH_CLIENT if you don’t know this) and {n} with the display number of the client machine’s X display (obtained in Step 4.)
  8. In the SSH session, start the 3D application using VirtualGL:
    Linux server
    vglrun [vglrun options] {application_executable_or_script} {arguments}
    
    Solaris server
    /opt/SUNWvgl/bin/vglrun [vglrun options] {application_executable_or_script} {arguments}
    
    Consult Chapter 19 for more information on vglrun command line options.

9.3 Direct Mode with SSL Encryption

Performance

On high-speed networks such as Ethernet, VirtualGL’s performance is reduced by as much as 20% by enabling SSL encryption.

Security Notes

Procedure

Pass an argument of +s to vglrun when launching VirtualGL, or set the environment variable VGL_SSL to 1 on the VirtualGL server (see Chapter 19 for more details.)

9.4 Direct Mode with X11 Forwarding and SSH Tunneling

Performance

Security Notes

Procedure

The procedure is the same as for the X11 Forwarding case, except that the following additional steps need to be taken:

  1. Once connected to the VirtualGL server via SSH, execute the following command:
    Linux server
    /opt/VirtualGL/bin/nettest -findport
    
    Solaris server
    /opt/SUNWvgl/bin/nettest -findport
    
    This program will allocate a free TCP port number and print the number to the console. Make a note of it.
  2. Close the SSH session and re-open it using the following command line:
    ssh -X -R {port}:localhost:4242 {user}@{server}
    
    Replace {port} with the port number you obtained in Step 1.

    If you are using an OpenSSH client, you can also type the following key sequence: <ENTER> ~ C (that’s the Enter key, followed by a tilde, followed by a capital C), which will bring up an ssh> prompt at which you can enter -R {port}:localhost:4242. This allows you to set up the tunnel without closing and re-opening the SSH session.

  3. Once connected to the server for the second time, set the VGL_PORT environment variable to match the port number you obtained above.
  4. Set the VGL_CLIENT environment variable on the VirtualGL server to localhost:{n}.0, where {n} is the display number of the X server running on the client machine.

    Explanation: When you established the SSH connection using the -R argument, it created a listener on the VirtualGL server. That listener will accept a connection from VirtualGL and forward it over the SSH tunnel to port 4242 on the client machine. Thus, you need to set VGL_PORT and VGL_CLIENT on the VirtualGL server to tell VirtualGL to make a connection to the SSH listener rather than to the “real” VirtualGL client program.
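The steps above can be sketched as a single server-side snippet. PORT, my_user, and my_server are example values; in practice PORT comes from running nettest -findport in Step 1, and the ssh command is issued from the client machine:

```shell
# Example port; in practice: PORT=$(/opt/VirtualGL/bin/nettest -findport)
PORT=4444
# The reconnection command to run on the client machine (Step 2):
echo "ssh -X -R ${PORT}:localhost:4242 my_user@my_server"
# Steps 3 and 4, run in the new server-side session:
export VGL_PORT=$PORT
export VGL_CLIENT=localhost:0.0   # assumes client display number 0
echo "VGL_PORT=$VGL_PORT VGL_CLIENT=$VGL_CLIENT"
```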


10 Using VirtualGL in Raw Mode with TurboVNC

Referring to Chapter 2, Raw Mode is a mode in which VirtualGL bypasses its internal image compressor and instead sends the rendered 3D images to an X server as uncompressed bitmaps. Raw Mode is designed to be used with an “X Proxy”, which is a virtual X server that intercepts X-Windows commands from an application, renders them into images, compresses the images, and sends them over the network to a client.

Thus, in Raw Mode, VirtualGL relies on the X proxy to compress the rendered 3D images, and since VirtualGL is sending those images to the X proxy at a very fast rate, the proxy must be able to compress the images very quickly in order to keep up. But, unfortunately, most X proxies can’t. They simply aren’t designed for the types of full-screen video workloads that VirtualGL generates. Therefore, the VirtualGL Project provides an optimized X proxy known as TurboVNC, which is based on the Virtual Network Computing (VNC) standard (more specifically, on the TightVNC variant thereof.)

On the surface, TurboVNC behaves very similarly to its parent project, but TurboVNC has been tuned to provide interactive performance for the types of full-screen video workloads that VirtualGL produces. On these types of image workloads, TurboVNC performs as much as an order of magnitude faster than TightVNC, uses more than an order of magnitude less CPU time to compress each frame, and it produces comparable compression ratios. Part of this speedup comes from the use of TurboJPEG, the same high-speed vector-optimized JPEG codec used by VirtualGL. Another large part of the speedup comes from bypassing the color compression features of TightVNC. TightVNC performs very CPU-intensive analysis on each image tile to determine whether the tile will compress better using color compression or JPEG. But for the types of images that a 3D application generates, it is almost never the case that color compression compresses better than JPEG, so TurboVNC bypasses this analysis to improve performance. TurboVNC also has the ability to hide network latency by decompressing and drawing a frame on the client while the next frame is being fetched from the server, thus improving performance dramatically on high-latency connections. TurboVNC additionally provides client-side double buffering, full support for Solaris, and other tweaks.

There are several reasons why one might prefer to use Raw Mode + TurboVNC over Direct Mode (and several reasons why one might not.)

Advantages of Raw Mode + TurboVNC

Advantages of Direct Mode

10.1 Using Raw Mode When TurboVNC and VirtualGL Are Running on the Same Machine

rawmodetoxproxy
  1. Open a new Command Prompt/terminal window on your client machine.
  2. In the new Command Prompt/terminal window, open a Secure Shell (SSH) session into the VirtualGL/TurboVNC server machine by typing:
    ssh {user}@{server}
    
    Replace {user} with your user account name on the VirtualGL server and {server} with the hostname or IP address of that server.

    If using PuTTY, replace ssh with putty in the above example.

  3. In the SSH session, start a TurboVNC server session:
    Linux server
    /opt/TurboVNC/bin/vncserver
    
    Solaris server
    /opt/SUNWtvnc/bin/vncserver
    
  4. Make a note of the X display number that the TurboVNC server process prints out, for instance:

    New 'X' desktop is my_server:1

    If this is the first time that a TurboVNC server session has ever been run under this user account, TurboVNC will prompt for a VNC session password.
  5. The SSH session can now be exited, if desired.
  6. On the client machine, start the TurboVNC Viewer.
    Linux clients
    Open a terminal and type
    /opt/TurboVNC/bin/vncviewer
    
    Solaris clients
    Open a terminal and type
    /opt/SUNWtvnc/bin/vncviewer
    
    Windows clients
    Select TurboVNC Viewer in the TurboVNC Start Menu group.
  7. A small dialog box will appear.

    Windows TurboVNC viewer Linux/Solaris TurboVNC viewer
    turbovnc1 turbovnc2

    Enter the X display name (hostname/IP address and display number) of the TurboVNC server in the “VNC Server” field, then click “Connect” (Windows) or press Enter (Linux/Solaris.)
  8. Another dialog box appears, prompting for the VNC session password.

    Windows TurboVNC viewer Linux/Solaris TurboVNC viewer
    turbovnc3 turbovnc4

    Enter the TurboVNC session password and click “OK” (Windows) or press Enter (Linux/Solaris.)

    A TurboVNC desktop window should appear on your client machine. This window contains a virtual X server with which you can interact to launch X-Windows applications on the TurboVNC server machine.
  9. Open a new terminal inside the TurboVNC desktop.
  10. In the terminal, start the 3D application using VirtualGL:
    Linux server
    vglrun [vglrun options] {application_executable_or_script} {arguments}
    
    Solaris server
    /opt/SUNWvgl/bin/vglrun [vglrun options] {application_executable_or_script} {arguments}
    
    Consult Chapter 19 for more information on vglrun command line options.

10.2 Using Raw Mode When TurboVNC and VirtualGL Are Running on Different Machines

rawmodeoverservernetwork

If TurboVNC and VirtualGL are running on different servers, then it is desirable to use Raw Mode to send images from the VirtualGL server to the TurboVNC server. Otherwise, the images would have to be compressed by the VirtualGL server, decompressed by the VirtualGL client, then recompressed by the TurboVNC server, which is a waste of CPU resources. However, sending images uncompressed over a network requires a fast network (generally, Gigabit Ethernet or faster.) So there needs to be a fast link between the VirtualGL server and the TurboVNC server for this procedure to perform well.

The procedure for using Raw Mode to transmit images from a VirtualGL server to a TurboVNC server is essentially the same as the procedure for using Direct Mode with a Direct X11 Connection – with the following notable differences:

  1. The “client” in this case is really the TurboVNC server machine.
  2. The “X server” is really the TurboVNC server session.
  3. It is not necessary to start the VirtualGL client.
  4. Once connected to the VirtualGL server via SSH, it is necessary to either set the environment variable VGL_COMPRESS to 0 or pass an argument of -c 0 to vglrun when launching VirtualGL. Otherwise, VirtualGL will detect that the connection to the X server is remote, and it will automatically try to enable Direct Mode. Setting VGL_COMPRESS to 0 forces the use of Raw Mode, regardless of whether the X server is local or remote.
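As a minimal sketch, the environment one would set in the SSH session on the VirtualGL server might look like this (turbovnc_host:1.0 is an example TurboVNC session display):

```shell
# Point X output at the TurboVNC server session (example display)
export DISPLAY=turbovnc_host:1.0
# Force Raw Mode even though the X server is remote
export VGL_COMPRESS=0
echo "DISPLAY=$DISPLAY VGL_COMPRESS=$VGL_COMPRESS"
# then launch with: vglrun {application}   (or: vglrun -c 0 {application})
```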

10.3 Disconnecting and Killing the TurboVNC Session

Closing the TurboVNC viewer disconnects from the TurboVNC server session, but the TurboVNC server session (and any applications that you may have started in it) is still running on the server machine, and you can reconnect to it at any time.

To kill a TurboVNC server session:

  1. Log in to the TurboVNC server using SSH.
  2. Type the following command:
    Linux server
    /opt/TurboVNC/bin/vncserver -kill :{n}
    
    Solaris server
    /opt/SUNWtvnc/bin/vncserver -kill :{n}
    
    Replace {n} with the X display number of the TurboVNC server session you wish to kill.

To list the X display numbers and process ID’s of all TurboVNC server sessions that are currently running under your user account on this machine, run

Linux server
/opt/TurboVNC/bin/vncserver -list
Solaris server
/opt/SUNWtvnc/bin/vncserver -list

10.4 Using TurboVNC in a Web Browser

When a TurboVNC server session is created, it automatically launches a miniature web server that serves up a Java TurboVNC viewer applet. This Java TurboVNC viewer can be used to connect to the TurboVNC server from a machine that does not have a native TurboVNC viewer installed (or a machine for which no native TurboVNC viewer is available.) The Java viewer is significantly slower than the native viewer on high-speed networks, but on low-speed networks the Java viewer and native viewers have comparable performance. The Java viewer does not currently support double buffering.

To use the Java TurboVNC viewer, point your web browser to:

http://{turbovnc_server}:{5800+n}

where {turbovnc_server} is the hostname or IP address of the TurboVNC server machine, and {n} is the X display number of the TurboVNC server session to which you want to connect.

Example: If the TurboVNC server is running on X display my_server:1, then point your web browser to:

http://my_server:5801
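The 5800+n port arithmetic can be sketched as follows (my_server and the display number are the example values from the text):

```shell
SERVER=my_server
N=1                                  # TurboVNC session display number
URL="http://${SERVER}:$((5800 + N))" # Java viewer port = 5800 + display
echo "$URL"
```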

10.5 Connection Profiles: Optimizing TurboVNC’s Performance for Different Network Types

To get the peak performance out of TurboVNC, you must give it a hint about the type of network that separates your client machine from the TurboVNC server. To do this, select a Connection Profile when launching the TurboVNC viewer.

In the Windows TurboVNC viewer, there are three buttons in the TurboVNC Connection dialog box that allow you to easily select the connection profile. In the Java viewer, the same thing is accomplished by clicking the “Options” button at the top of the browser window. With the Linux/Solaris TurboVNC viewer, you can either use command line options to set the connection profile prior to connecting, or you can press the F8 key after connecting to pop up a menu from which you can select the connection profile.

High-bandwidth, low-latency network
  Linux/Solaris TurboVNC viewer: No action necessary.
  Windows & Java TurboVNC viewers: Select the “High-Speed Network” connection profile.

Low-bandwidth, high-latency network (favor performance over image quality)
  Linux/Solaris TurboVNC viewer: Pass an argument of -broadband to vncviewer, or select “Preset: Broadband (favor performance)” from the F8 popup menu.
  Windows & Java TurboVNC viewers: Select the “Broadband (favor performance)” connection profile.

Low-bandwidth, high-latency network (favor image quality over performance)
  Linux/Solaris TurboVNC viewer: Pass an argument of -wan to vncviewer, or select “Preset: Broadband (favor image quality)” from the F8 popup menu.
  Windows & Java TurboVNC viewers: Select the “Broadband (favor image quality)” connection profile.

The “High-Speed Network” and “Broadband (favor image quality)” connection profiles set the JPEG compression quality to a high enough level that the compression loss is not perceivable by the human eye. The “Broadband (favor performance)” connection profile sets the image quality to a very low (but still usable) level which will achieve interactive performance on typical broadband connections.

10.6 Securing a TurboVNC Connection

Normally, the connection between the TurboVNC server and the TurboVNC viewer is completely unencrypted, but securing that connection can be easily accomplished by using the port forwarding feature of Secure Shell (SSH). After you have started a TurboVNC server session on the server machine, open a new SSH connection into the server machine using the following command line:

ssh -L {5900+n}:localhost:{5900+n} {user}@{server}

If using PuTTY, replace ssh with putty in the above example.

Replace {user} with your user account name on the TurboVNC server and {server} with the hostname or IP address of that server. Replace n with the X display number of the TurboVNC server session to which you want to connect.

For instance, if you wish to connect to display :1 on server my_server using user account my_user, you would type

ssh -L 5901:localhost:5901 my_user@my_server

After the SSh connection has been established, you can then launch the TurboVNC viewer and point it to localhost:{n} (localhost:1 in the above example.)

Performance Notes

For LAN connections and other high-speed networks, tunneling the TurboVNC connection over SSH will reduce performance by as much as 20% (50% if using PuTTY.) But for wide-area networks, broadband, etc., there is no performance penalty for using SSH tunneling with TurboVNC.

10.7 Further Reading

For more detailed instructions on the usage of TurboVNC:

Linux
Refer to the TurboVNC man pages:
man -M /opt/TurboVNC/man {vncserver | Xvnc | vncviewer | vncconnect | vncpasswd}
Solaris
Refer to the TurboVNC man pages:
man -M /opt/SUNWtvnc/man {vncserver | Xvnc | vncviewer | vncconnect | vncpasswd}
Windows
Use the embedded help feature (the question mark button in the upper right of the TurboVNC Viewer window.)

The TightVNC documentation:

http://www.tightvnc.com/docs.html

might also be helpful, since TurboVNC is based on TightVNC and shares many of its features.


11 Using VirtualGL in Raw Mode with Other X Servers and Proxies

Other X Proxies

The previous chapter described how to use VirtualGL in Raw Mode with TurboVNC, but much of this information is also applicable to other X proxies, such as RealVNC, NX, etc. Generally, none of these other solutions will provide anywhere near the performance of TurboVNC, but some of them have capabilities that TurboVNC lacks (NX, for instance, can do seamless windows.)

VirtualGL reads the value of the DISPLAY environment variable to determine whether to enable Raw Mode by default. If DISPLAY begins with a colon (“:”) or with “unix:”, then VirtualGL will enable Raw Mode as the default. This should effectively make Raw Mode the default for most X proxies, but if for some reason it doesn’t, then you can force the use of Raw Mode by setting VGL_COMPRESS to 0 or passing an argument of -c 0 to vglrun.
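The default-mode heuristic described above can be sketched as a simple case statement (the DISPLAY value below is a stand-in example):

```shell
# Example value; VirtualGL inspects the real DISPLAY at startup
DISPLAY=":1.0"
case "$DISPLAY" in
    :*|unix:*) MODE="raw"    ;;  # local display -> Raw Mode default
    *)         MODE="direct" ;;  # remote display -> Direct Mode default
esac
echo "$MODE"
```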

Raw Mode Over a Network

rawmodeovernetwork

The previous chapter described how to use Raw Mode over a server network to send uncompressed pixels from a VirtualGL server to a TurboVNC server. But Raw Mode can also be used to send uncompressed pixels to a client machine. There are two main reasons why you might want to do this:

The procedure for using Raw Mode over a network is the same as the procedure for using Direct Mode with a Direct X11 Connection – with the following notable differences:

  1. It is not necessary to install or run the VirtualGL client.
  2. Once connected to the VirtualGL server via SSH, it is necessary to either set the environment variable VGL_COMPRESS to 0 or pass an argument of -c 0 to vglrun when launching VirtualGL. Otherwise, VirtualGL will detect that the connection to the X server is remote, and it will automatically try to enable Direct Mode. Setting VGL_COMPRESS to 0 forces the use of Raw Mode, regardless of whether the X server is local or remote.

WORD OF CAUTION

Do not use SSh X11 tunneling with Raw Mode, as this will reduce the performance by 80% or more. It is necessary to use a direct X11 connection to sustain an interactive frame rate with Raw Mode on Gigabit networks.


12 vglrun and Solaris Shell Scripts

vglrun can be used to launch either binary executables or shell scripts, but there are a few things to keep in mind when using vglrun to launch a shell script on Solaris. When you vglrun a shell script, the VirtualGL faker library will be preloaded into every executable that the script launches. Normally this is innocuous, but if the script calls any executables that are setuid root, then Solaris will refuse to load those executables, because you are attempting to preload a library (VirtualGL) that is not in a “secure path.” Solaris keeps a tight lid on what goes into /usr/lib and /lib, and by default, it will only allow libraries in those paths to be preloaded into an executable that is setuid root. Generally, third-party packages are verboten from installing anything into /usr/lib or /lib. But you can use the crle utility to add other directories to the operating system’s list of secure paths. In the case of VirtualGL, you would execute the following commands (as root):

crle -u -s /opt/SUNWvgl/lib
crle -64 -u -s /opt/SUNWvgl/lib/64

But please be aware of the security ramifications of this before you do it. You are essentially telling Solaris that you trust the security and stability of the VirtualGL code as much as you trust the security and stability of the operating system. And while we’re flattered, we’re not sure that we’re necessarily deserving of that accolade, so if you are in a security critical environment, apply the appropriate level of paranoia here.

An easier, and perhaps more secure, approach is to simply edit the application script so that it saves and clears the LD_PRELOAD environment variables, then restores them right before the actual application executable is run. For instance, take the following application script (please):

Contents of application.sh:

#!/bin/sh
some_setuid_binary
some_application_binary

You would modify the script as follows:

Contents of application.sh:

#!/bin/sh
LD_PRELOAD_32_SAVE=$LD_PRELOAD_32
LD_PRELOAD_64_SAVE=$LD_PRELOAD_64
LD_PRELOAD_32=
LD_PRELOAD_64=
export LD_PRELOAD_32 LD_PRELOAD_64

some_setuid_binary

LD_PRELOAD_32=$LD_PRELOAD_32_SAVE
LD_PRELOAD_64=$LD_PRELOAD_64_SAVE
export LD_PRELOAD_32 LD_PRELOAD_64

some_application_binary

vglrun on Solaris has two options that are relevant to launching scripts:

vglrun -32 {script}

will preload VirtualGL only into 32-bit executables called by a script, whereas

vglrun -64 {script}

will preload VirtualGL only into 64-bit executables. So if, for instance, the setuid binary that the script is calling is a 32-bit executable and the application is a 64-bit executable, then you could use vglrun -64 to launch the application script.
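If you are not sure whether a particular executable is 32-bit or 64-bit, the standard file utility will tell you (the path below is an example):

```shell
# "ELF 32-bit" or "ELF 64-bit" will appear in the output
file /usr/bin/some_setuid_binary
```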


13 Using VirtualGL with Applications That Manually Load OpenGL

The lion’s share of OpenGL applications are dynamically linked against libGL.so, and thus libGL.so is automatically loaded whenever the application loads. Whenever vglrun is used to launch such applications, VirtualGL is loaded ahead of libGL.so, meaning that OpenGL and GLX symbols are resolved from VirtualGL first and the “real” OpenGL library second.

However, some applications (particularly games) are not dynamically linked against libGL.so. These applications typically call dlopen() and dlsym() later on in the program’s execution to manually load OpenGL and GLX symbols from libGL.so. Such applications also generally provide a mechanism (usually either an environment variable or a command line argument) which allows the user to specify a library that can be loaded instead of libGL.so.

So let’s assume that you just downloaded the latest version of the Linux game Foo Wars from the Internet, and (for whatever reason) you want to run the game in a VNC session. The game provides a command line switch -g which can be used to specify an OpenGL library to load other than libGL.so. You would launch the game using a command line such as this:

vglrun foowars -g /usr/lib/librrfaker.so

You still need to use vglrun to launch the game, because VirtualGL must also intercept a handful of X11 calls. Using vglrun allows VGL to intercept these calls, whereas using the game’s built-in mechanism for loading a substitute OpenGL library allows VirtualGL to intercept the GLX and OpenGL calls.

In some cases, the application doesn’t provide an override mechanism such as the above. In these cases, you should pass an argument of -dl to vglrun when starting the application. For example:

vglrun -dl foowars

Passing -dl to vglrun forces another library to be loaded ahead of VirtualGL and libGL.so. This new library intercepts any calls to dlopen() and forces the application to open VirtualGL instead of libGL.so.

Chapter 15 contains specific recipes for getting a variety of games and other applications to work with VirtualGL.


14 Using VirtualGL with Chromium and ModViz VGP

Chromium is a powerful framework for performing various types of parallel OpenGL rendering. It is usually used on clusters of commodity Linux PCs to divide up the task of rendering scenes with large geometries or large pixel counts (such as when driving a display wall.) Chromium is most often used in one of three configurations:

  1. Sort-First Rendering (Image-Space Decomposition)
  2. Sort-First Rendering (Image-Space Decomposition) with Readback
  3. Sort-Last Rendering (Object-Space Decomposition)

14.1 Configuration 1: Sort-First Rendering (Image-Space Decomposition)

chromium-displaywall

Sort-First Rendering (Image-Space Decomposition) is used to overcome the fill-rate limitations of individual graphics cards. When configured to use sort-first rendering, Chromium divides up the scene based on which polygons will be visible in a particular section of the final image. It then instructs each node of the cluster to render only the polygons that are necessary to generate the image section (“tile”) for that node. This is primarily used to drive high-resolution displays that would be impractical to drive from a single graphics card due to limitations in the card’s framebuffer memory, processing power, or both. Configuration 1 could be used, for instance, to drive a CAVE, video wall, or even an extremely high-resolution monitor. In this configuration, each Chromium node generally uses all of its screen real estate to render a section of the multi-screen image.

VirtualGL is generally not very useful with Configuration 1. You could theoretically install a separate copy of VirtualGL on each display node and use it to redirect the output of each crserver instance to a multi-screen X server running elsewhere on the network. But there would be no way to synchronize the screens on the remote end. Chromium uses DMX to synchronize the screens in a multi-screen configuration, and VirtualGL would have to be made DMX-aware for it to perform the same job. Maybe at some point in the future … If you have a need for such a configuration, let us know.

14.2 Configuration 2: Sort-First Rendering (Image-Space Decomposition) with Readback

chromium-sortfirst

Configuration 2 uses the same sort-first principle as Configuration 1, except that each tile is only a fraction of a single screen, and the tiles are recombined into a single window on Node 0. This configuration is perhaps the least often used of the three, but it is useful in cases where the scene contains a large amount of texture data (such as in volume rendering) and thus rendering the whole scene on a single node would be prohibitively slow due to fill-rate limitations.

In this configuration, the application is allowed to choose a visual, create an X window, and manage the window as it would normally do. But all other OpenGL and GLX activity is intercepted by the Chromium App Faker (CrAppFaker) so that the rendering task can be split up among the rendering nodes. Once each node has rendered its section of the final image, the tiles get passed back to a Chromium Server (CrServer) process running on Node 0. This CrServer process attaches to the previously-created application window and draws the pixels into it using glDrawPixels().

The general strategy for making this work with VirtualGL is to first make it work without VirtualGL and then insert VirtualGL only into the processes that run on Node 0. VirtualGL must be inserted into the CrAppFaker process to prevent CrAppFaker from sending glXChooseVisual() calls to the X server (which would fail if the X server is a VNC server or otherwise does not provide GLX.) VirtualGL must be inserted into the CrServer process on Node 0 to prevent it from sending glDrawPixels() calls to the X server (which would effectively send uncompressed images over the network.) Instead, VirtualGL forces CrServer to draw into a Pbuffer, and VGL takes charge of transmitting those pixels to the destination X server in the most efficient way possible.

Since Chromium uses dlopen() to load the system’s OpenGL library, preloading VirtualGL into the CrAppFaker and CrServer processes using vglrun is not sufficient. Fortunately, Chromium provides an environment variable, CR_SYSTEM_GL_PATH, which allows one to specify an alternate path in which it will search for the system’s libGL.so. The VirtualGL packages for Linux and Solaris include a symbolic link named libGL.so which really points to the VirtualGL faker library (librrfaker.so) instead. This symbolic link is located in its own isolated directory, so that directory can be passed to Chromium in the CR_SYSTEM_GL_PATH environment variable, thus causing Chromium to load VirtualGL rather than the “real” OpenGL library. Refer to the following table:

          32-bit Applications     64-bit Applications
Linux     /opt/VirtualGL/lib      /opt/VirtualGL/lib64
Solaris   /opt/SUNWvgl/fakelib    /opt/SUNWvgl/fakelib/64

CR_SYSTEM_GL_PATH setting required to use VirtualGL with Chromium

Running the CrServer in VirtualGL is simply a matter of setting this environment variable and then invoking crserver with vglrun. For example:

export CR_SYSTEM_GL_PATH=/opt/VirtualGL/lib
vglrun crserver

In the case of CrAppFaker, it is also necessary to set VGL_GLLIB to the location of the “real” OpenGL library (example: /usr/lib/libGL.so.1.) CrAppFaker creates its own fake version of libGL.so which is really just a copy of Chromium’s libcrfaker.so. So VirtualGL, if left to its own devices, will unwittingly try to load libcrfaker.so instead of the “real” OpenGL library. Chromium’s libcrfaker.so will in turn try to load VirtualGL again, and an endless loop will occur.

So what we want to do is something like this:

export CR_SYSTEM_GL_PATH=/opt/VirtualGL/lib
export VGL_GLLIB=/usr/lib/libGL.so.1
crappfaker

CrAppFaker will copy the application to a temp directory and then copy libcrfaker.so to that same directory, renaming it as libGL.so. So when the application is started, it loads libcrfaker.so instead of libGL.so. libcrfaker.so will then load VirtualGL instead of the “real” libGL, because we’ve overridden CR_SYSTEM_GL_PATH to make Chromium find VirtualGL’s fake libGL.so first. VirtualGL will then use the library specified in VGL_GLLIB to make any “real” OpenGL calls that it needs to make.

Note that crappfaker should not be invoked with vglrun.

So, putting this all together, here is an example of how you might start a sort-first rendering job using Chromium and VirtualGL:

  1. Start the mothership on Node 0 with an appropriate configuration for performing sort-first rendering with readback
  2. Start crserver on each of the rendering nodes
  3. On Node 0, set CR_SYSTEM_GL_PATH to the appropriate value for the operating system and application type (see table above)
  4. On Node 0, vglrun crserver &
  5. On Node 0, set VGL_GLLIB to the location of the “real” libGL (example: /usr/lib/libGL.so.1 or /usr/lib64/libGL.so.1.)
  6. On Node 0, launch crappfaker (do not use vglrun here)
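Assuming a 32-bit Linux application, steps 3 through 6 on Node 0 boil down to the following (adjust the library paths per the table above):

```shell
export CR_SYSTEM_GL_PATH=/opt/VirtualGL/lib   # step 3
vglrun crserver &                             # step 4
export VGL_GLLIB=/usr/lib/libGL.so.1          # step 5
crappfaker                                    # step 6 (no vglrun here)
```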

Again, it’s always a good idea to make sure this works without VirtualGL before adding VirtualGL into the mix.

When using VirtualGL with this mode, resizing the application window may not work properly. This is because the resize event is sent to the application process, and therefore the CrServer process that’s actually drawing the pixels has no way of knowing that a window resize has occurred. A possible fix is to modify Chromium such that it propagates the resize event down the render chain so that all of the CrServer processes are aware that a resize event occurred.

14.3 Configuration 3: Sort-Last Rendering (Object-Space Decomposition)

chromium-sortlast

Sort-Last Rendering is used when the scene contains a huge number of polygons and the rendering bottleneck is processing all of that geometry on a single graphics card. In this case, each node runs a separate copy of the application, and for best results, the application needs to be at least partly aware that it’s running in a parallel environment so that it can give Chromium hints as to how to distribute the various objects to be rendered. Each node generates an image of a particular portion of the object space, and these images must be composited in such a way that the front-to-back ordering of pixels is maintained. This is generally done by collecting Z buffer data from each node to determine whether a particular pixel on a particular node is visible in the final image. The rendered images from each node are often composited using a “binary swap”, whereby the nodes combine their images in a cascading tree so that the overall compositing time is proportional to log2(N) rather than N.

To make this configuration work with VirtualGL:

  1. Start the mothership on Node 0 with an appropriate configuration for performing sort-last rendering
  2. Start crappfaker on each of the rendering nodes
  3. On Node 0, set CR_SYSTEM_GL_PATH to the appropriate value for the operating system and application type (see table in Section 14.2.)
  4. On Node 0, vglrun crserver

CRUT

The Chromium Utility Toolkit provides a convenient way for graphics applications to specifically take advantage of Chromium’s sort-last rendering capabilities. Such applications can use CRUT to explicitly specify how their object space should be decomposed. CRUT applications require an additional piece of software, crutserver, to be running on Node 0. So to make such applications work with VirtualGL:

  1. Start the mothership on Node 0 with an appropriate configuration for performing sort-last rendering
  2. Start crappfaker on each of the rendering nodes
  3. On Node 0, set CR_SYSTEM_GL_PATH to the appropriate value for the operating system and application type (see table in Section 14.2.)
  4. On Node 0, vglrun crutserver &
  5. On Node 0, vglrun crserver

14.4 A Note About Performance

Chromium’s use of X11 is generally suboptimal. It assumes a very fast connection between the X server and the Chromium Server. In certain modes, Chromium polls the X server on every frame to determine whether windows have been resized, etc. Thus, we have observed that, even on a fast network, Chromium tends to perform much better with VirtualGL running in a TurboVNC session as opposed to VirtualGL running in Direct Mode.

14.5 ModViz VGP and VirtualGL

ModViz Virtual Graphics Platform™ is a polished commercial clustered rendering framework for Linux which supports all three of the rendering modes described above and provides a much more straightforward interface to configure and run these types of parallel rendering jobs.

All VGP jobs, regardless of configuration, are spawned through vglauncher, a front-end program which automatically takes care of starting the appropriate processes on the rendering nodes, intercepting OpenGL calls from the application instance(s), sending rendered images back to Node 0, and compositing the images as appropriate. In a similar manner to VirtualGL’s vglrun, VGP’s vglauncher preloads a library (libVGP.so) in place of libGL.so, and this library intercepts the OpenGL calls from the application.

So our strategy here is similar to our strategy for loading the Chromium App Faker. We want to insert VirtualGL between VGP and the real system OpenGL library, so that VGP will call VirtualGL and VirtualGL will call libGL.so. Achieving this with VGP is relatively simple:

export VGP_BACKING_GL_LIB=librrfaker.so
vglrun vglauncher --preload=librrfaker.so:/usr/lib/libGL.so {application}

Replace /usr/lib/libGL.so with the full path of your system’s OpenGL library (/usr/lib64/libGL.so if you are launching a 64-bit application.)


15 Other Application Recipes

Application Platform Recipe Notes
ANSA v12.1.0 Linux/x86 Add

LD_PRELOAD_SAVE=$LD_PRELOAD
export LD_PRELOAD=

to the top of the ansa.sh script, then add

export LD_PRELOAD=$LD_PRELOAD_SAVE

just prior to the ${ANSA_EXEC_DIR}bin/ansa_linux${ext2} line.
The ANSA startup script directly invokes /lib/libc.so.6 to query the glibc version. Since the VirtualGL faker depends on libc, preloading VirtualGL when directly invoking libc.so.6 creates an infinite loop. So it is necessary to disable the preloading of VirtualGL in the application script and then re-enable it prior to launching the actual application.
Army Ops Linux/x86 vglrun -dl armyops See Chapter 13 for more details
Descent 3 Linux/x86 vglrun descent3 -g /usr/lib/librrfaker.so

or

vglrun -dl descent3
See Chapter 13 for more details
Doom 3 Linux/x86 vglrun doom3 +set r_glDriver /usr/lib/librrfaker.so

or

vglrun -dl doom3
See Chapter 13 for more details
Enemy Territory (Return to Castle Wolfenstein) Linux/x86 vglrun et +set r_glDriver /usr/lib/librrfaker.so

or

vglrun -dl et
See Chapter 13 for more details
Heretic II Linux/x86 vglrun heretic2 +set gl_driver /usr/lib/librrfaker.so +set vid_ref glx

or

vglrun -dl heretic2 +set vid_ref glx
See Chapter 13 for more details
Heavy Gear II Linux/x86 vglrun hg2 -o /usr/lib/librrfaker.so

or

vglrun -dl hg2
See Chapter 13 for more details
I-deas Master Series 9, 10, & 11 Solaris/Sparc When running I-deas with VirtualGL on a Solaris/Sparc server, remotely displaying to a non-Sparc client machine or to an X proxy such as VNC, it may be necessary to set the SDRC_SUN_IGNORE_GAMMA environment variable to 1. I-deas normally aborts if it detects that the X visual assigned to it is not gamma-corrected. But gamma-corrected X visuals only exist on Solaris/Sparc X servers, so if you are displaying the application to another type of X server or X proxy which doesn’t provide gamma-corrected X visuals, then it is necessary to override the gamma detection mechanism in I-deas.
Java2D applications that use OpenGL Linux, Solaris Java2D will use OpenGL to perform its rendering if sun.java2d.opengl is set to True. For example:

java -Dsun.java2d.opengl=True MyAppClass

In order for this to work in VirtualGL, it is necessary to invoke vglrun with the -dl switch. For example:

vglrun -dl java -Dsun.java2d.opengl=True MyAppClass

If you are using Java v6 b92 or later, you can also set the environment variable J2D_ALT_LIBGL_PATH to the path of librrfaker.so. For example:

setenv J2D_ALT_LIBGL_PATH /opt/SUNWvgl/lib/librrfaker.so
vglrun java -Dsun.java2d.opengl=True MyAppClass

See Chapter 13 for more details
Java2D applications that use OpenGL Solaris/Sparc When VirtualGL is used in conjunction with Java v5.0 (also known as Java 1.5.0) to remotely display Java2D applications using the OpenGL pipeline (see above), certain Java2D applications will cause the OpenGL subsystem to crash with the following error:

thread tries to access GL context current to another thread

If you encounter this error, try setting the SUN_OGL_IS_MT environment variable to 1 and re-running the application.
Java 5.0 should call glXInitThreadsSUN() since it is using multiple OpenGL threads, but it doesn’t. Purely by chance, this doesn’t cause any problems when the application is displayed locally. But VirtualGL changes things up enough that the luck runs out. This issue does not exist in Java 6.
Pro/ENGINEER Wildfire v2.0 Solaris/Sparc Add

graphics opengl

to ~/config.pro. You may also need to set the VGL_XVENDOR environment variable to "Sun Microsystems, Inc." if you are running Pro/ENGINEER 2.0 over a remote X connection to a Linux or Windows VirtualGL client.
Pro/E 2.0 for Solaris will disable OpenGL if it detects a remote connection to a non-Sun X server.
Pro/ENGINEER Wildfire v3.0 Solaris/Sparc When using Direct Mode, set the environment variable VGL_INTERFRAME to 0 on the VirtualGL server prior to launching Pro/E v3. Pro/E v3 frequently renders to the front buffer and, for unknown reasons, sends long sequences of glFlush() calls (particularly in wireframe mode) even if nothing new has been rendered. This causes VGL to send long sequences of duplicate images into the Direct Mode image pipeline. If interframe comparison is enabled, the overhead of comparing these duplicate images can lead to slow application performance when zooming in or out in Pro/E. It’s faster to disable interframe comparison in this case and simply let VGL’s frame spoiling system discard any frames that it can’t send in real time. This results in only a few of the duplicate frames being sent to the client with no CPU time wasted on comparing the hundreds of other duplicate frames that won’t be sent.
QGL (OpenGL Qt Widget) Linux vglrun -dl {application} Qt can be built such that it either resolves symbols from libGL automatically or uses dlopen() to manually resolve those symbols from libGL. As of Qt v3.3, the latter behavior is the default, so OpenGL programs built with later versions of libQt will not work with VirtualGL unless the -dl switch is used with vglrun.

See Chapter 13 for more details
Quake 3 Linux/x86 vglrun quake3 +set r_glDriver /usr/lib/librrfaker.so

or

vglrun -dl quake3
See Chapter 13 for more details
Soldier of Fortune Linux/x86 vglrun sof +set gl_driver /usr/lib/librrfaker.so

or

vglrun -dl sof
See Chapter 13 for more details
Unreal Tournament 2004 Linux/x86 vglrun -dl ut2004 See Chapter 13 for more details
VisConcept Solaris/Sparc Set the environment variable VGL_GUI_XTTHREADINIT to 0. Popping up the VirtualGL configuration dialog may cause the application to hang unless you set this environment variable. See Section 19.1 for more details.

16 Advanced OpenGL Features

16.1 Stereographic Rendering

The general idea behind VirtualGL is to offload the 3D rendering work to the server so that the client only has to draw 2D images. Normally, the VirtualGL and TurboVNC clients use 2D image drawing commands to display the rendered 3D images from the VirtualGL server, thus eliminating the need for a 3D graphics card on the client machine. But drawing stereo images requires a 3D graphics card, so such a card must be present in any client machine that will use VirtualGL with stereographic rendering. Since the 3D graphics card is only being used to draw images, it need not necessarily be a high-end card. Generally, the least expensive 3D graphics card that has stereo capabilities will work fine in a VirtualGL client.

The server must also have a 3D graphics card that supports stereo, since this is the only way that VirtualGL can obtain a stereo Pbuffer. When an application requests a stereo visual, VirtualGL will return a stereo visual to the application only if:

It is usually necessary to explicitly enable stereo visuals in the graphics card configuration for both the client and server machines. The Troubleshooting section below lists a way to verify that both client and server have stereo visuals available.

If, for any given frame, VirtualGL detects that the application has drawn anything to the right eye buffer, VGL will read back both eye buffers and send the contents as a pair of compressed images (one for each eye) to the VirtualGL client. The VGL client then decompresses the stereo image pair and draws it as a single stereo frame to the client’s display using glDrawPixels(). It should thus be no surprise that stereo performs, at best, only half as fast as mono, since VirtualGL must compress twice as much data on the server and use twice as much network bandwidth to send the stereo images to the client.

Stereo requires Direct Mode. If VirtualGL is running in Raw Mode and the application renders something in stereo, only the contents of the left eye buffer will be sent to the X display.

16.2 Transparent Overlays

Transparent overlays have similar requirements and restrictions as stereo. In this case, VirtualGL completely bypasses its own GLX faker and uses indirect OpenGL rendering to render the transparent overlay on the client machine’s 3D graphics card. The underlay is still rendered on the server, as always. Using indirect rendering to render the overlay is unfortunately necessary, because there is no reliable way to draw to an overlay using 2D (X11) functions, there are severe performance issues (on some cards) with using glDrawPixels() to draw to the overlay, and there is no reasonable way to composite the overlay and underlay on the VirtualGL server.

The use of overlays is becoming more and more infrequent, and when they are used, it is generally only for drawing small, simple, static shapes and text. We have found that it is often faster to send the overlay geometry over to the client rather than rendering it as an image and sending the image. So even if it were possible to implement overlays without using indirect rendering, it’s likely that indirect rendering of overlays would still be the fastest approach for most applications.

As with stereo, overlays must sometimes be explicitly enabled in the graphics card’s configuration. In the case of overlays, however, they need only be supported and enabled on the client machine.

Indexed color (8-bit) overlays have been tested and are known to work with VirtualGL. True color (24-bit) overlays work in theory but have not been tested. Use glxinfo (see Troubleshooting below) to verify whether your client’s X display supports overlays and whether they are enabled. In Exceed 3D, make sure that the “Overlay Support” option is checked in the “Exceed 3D and GLX” applet:

exceed6

Overlays do not work with X proxies (including TurboVNC.) VirtualGL must be displaying to a real X server on the client machine (either using Direct Mode or Raw Mode.)

16.3 Indexed (PseudoColor) Rendering

In a PseudoColor visual, each pixel is represented by an index which refers to a location in a color table. The color table stores the actual color values (256 of them in the case of 8-bit PseudoColor) which correspond to each index. An application merely tells the X server which color index to use when drawing, and the X server takes care of mapping that index to an actual color from the color table. OpenGL allows for rendering to PseudoColor visuals, and it does so by being intentionally ignorant of the relationship between indices and actual colors. As far as OpenGL is concerned, each color index value is just a meaningless number, and it is only when the final image is drawn by the X server that these numbers take on meaning. As a result, many pieces of OpenGL’s core functionality, such as lighting and shading, either have undefined behavior or do not work at all with PseudoColor rendering. PseudoColor rendering used to be a common technique to visualize scientific data, because such data often only contained 8 bits per sample to begin with. Applications could manipulate the color table to allow the user to dynamically control the relationship between sample values and colors. As more and more graphics cards drop support for PseudoColor rendering, however, the applications which use it are becoming a vanishing breed.

VirtualGL supports PseudoColor rendering if a PseudoColor visual is available on the client’s display. A PseudoColor visual need not be present on the server. On the server, VirtualGL uses the red channel of a standard RGB Pbuffer to store the color index. Upon receiving an end of frame trigger, VirtualGL reads back the red channel of the Pbuffer and uses XPutImage() to draw the color indices into the appropriate X window. To put this another way, PseudoColor rendering in VirtualGL always uses Raw Mode. However, since there is only 1 byte per pixel in a PseudoColor “image”, the images can still be sent to the client reasonably quickly even though they are uncompressed.

PseudoColor rendering should work in VNC, provided that the VNC server is configured with an 8-bit color depth. TurboVNC does not support PseudoColor, but RealVNC and other VNC flavors do. Note, however, that VNC cannot provide both PseudoColor and TrueColor visuals at the same time.
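For example, with RealVNC the color depth can be specified when the session is created (the display number here is arbitrary):

```shell
# Create an 8-bit VNC session so that PseudoColor visuals are available
vncserver :1 -depth 8
```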

16.4 Troubleshooting

VirtualGL includes a modified version of glxinfo that can be used to determine whether the client and server have stereo, overlay, or PseudoColor visuals enabled.

Run one of the following command sequences on the VirtualGL server to determine whether the server has a suitable visual for stereographic rendering:

Solaris servers (using GLP)
/opt/SUNWvgl/bin/glxinfo -d {glp_device} -v
Solaris servers (not using GLP)
xauth merge /etc/opt/VirtualGL/vgl_xauth_key
/opt/SUNWvgl/bin/glxinfo -display :0 -c -v
Linux servers
xauth merge /etc/opt/VirtualGL/vgl_xauth_key
/opt/VirtualGL/bin/glxinfo -display :0 -c -v

One or more of the visuals should say “stereo=1” and should list “Pbuffer” as one of the “Drawable Types.”

Run one of the following command sequences on the VirtualGL server to determine whether the X display on the client has a suitable visual for stereographic rendering, transparent overlays, or PseudoColor rendering.

Solaris servers
xauth merge /etc/opt/VirtualGL/vgl_xauth_key
/opt/SUNWvgl/bin/glxinfo -v
Linux servers
xauth merge /etc/opt/VirtualGL/vgl_xauth_key
/opt/VirtualGL/bin/glxinfo -v

In order to use stereo, one or more of the visuals should say “stereo=1”. In order to use transparent overlays, one or more of the visuals should say “level=1”, should list a “Transparent Index” (non-transparent visuals will say “Opaque” instead), and should have a class of “PseudoColor.” In order to use PseudoColor (indexed) rendering, one of the visuals should have a class of “PseudoColor.”
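Since the visual list can be quite long, it is convenient to filter the output of glxinfo. The following examples use the Linux path; substitute /opt/SUNWvgl/bin on Solaris:

```shell
# List only the visuals that advertise stereo
/opt/VirtualGL/bin/glxinfo -v | grep "stereo=1"

# List only the PseudoColor visuals
/opt/VirtualGL/bin/glxinfo -v | grep -i "pseudocolor"
```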


17 Performance Measurement

17.1 VirtualGL’s Built-In Profiling System

The easiest way to uncover bottlenecks in the VirtualGL pipeline is to set the VGL_PROFILE environment variable to 1 on both server and client (passing an argument of +pr to vglrun on the server has the same effect.) This will cause VirtualGL to measure and report the throughput of the various stages in its pipeline. For example, here are some measurements from a dual Pentium 4 server communicating with a Pentium III client on a 100 Megabit LAN:

Server
Readback   - 43.27 Mpixels/sec - 34.60 fps
Compress 0 - 33.56 Mpixels/sec - 26.84 fps
Total      -  8.02 Mpixels/sec -  6.41 fps - 10.19 Mbits/sec (18.9:1)
Client
Decompress - 10.35 Mpixels/sec -  8.28 fps
Blit       - 35.75 Mpixels/sec - 28.59 fps
Total      -  8.00 Mpixels/sec -  6.40 fps - 10.18 Mbits/sec (18.9:1)

The total throughput of the pipeline is 8.0 Megapixels/sec, or 6.4 frames/sec, indicating that our frame is 8.0 / 6.4 = 1.25 Megapixels in size (a little less than 1280 x 1024 pixels.) The readback and compress stages, which occur in parallel on the server, are obviously not slowing things down. And we’re only using 1/10 of our available network bandwidth. So we look to the client and discover that its slow decompression speed (10.35 Megapixels/second) is the primary bottleneck. Decompression and blitting on the client do not occur in parallel, so the aggregate performance is the harmonic mean of the decompression and blitting rates: [1/ (1/10.35 + 1/35.75)] = 8.0 Mpixels/sec.
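You can check this arithmetic with a quick awk one-liner:

```shell
# Harmonic combination of the decompress (10.35) and blit (35.75) rates
awk 'BEGIN { printf "%.1f\n", 1 / (1/10.35 + 1/35.75) }'
# prints 8.0
```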

17.2 Frame Spoiling

By default, VirtualGL will only send a frame to the client if the client is ready to receive it. If a rendered frame arrives at the server’s queue and a previous frame is still being processed, the new frame is dropped (“spoiled.”) This prevents a backlog of frames on the server, which would cause a perceptible delay in the responsiveness of interactive applications. But when running non-interactive applications, particularly benchmarks, it is desirable to disable frame spoiling. With frame spoiling disabled, the server will render frames only as quickly as VirtualGL can send those frames to the client, which will conserve server resources as well as allow OpenGL benchmarks to accurately measure the frame rate of the VirtualGL system. With frame spoiling enabled, these benchmarks will report meaningless data, since they are measuring the rate at which the server can render frames, and that frame rate is decoupled from the rate at which VirtualGL can send those frames to the client.

In a VNC environment, there is another layer of frame spoiling, since the server only sends updates to the client when the client requests them. So even if frame spoiling is disabled in VirtualGL, OpenGL benchmarks will still report meaningless data if they are run in a VNC session.

There are only two ways to accurately benchmark an application in VirtualGL:

  1. Disable frame spoiling and use Direct Mode or Raw Mode with a “real” X server.
  2. Use TCBench (see below.)

To disable frame spoiling, set the VGL_SPOIL environment variable to 0 on the server or pass an argument of -sp to vglrun. See Section 19.1 for more details.
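For example, to benchmark a hypothetical application with frame spoiling disabled:

```shell
# Method 1: environment variable
export VGL_SPOIL=0
vglrun myapp

# Method 2: equivalent vglrun switch
vglrun -sp myapp
```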

17.3 VirtualGL Diagnostic Tools

VirtualGL includes several tools which can be useful in diagnosing performance problems with the system.

NetTest

NetTest is a network benchmark that uses the same network I/O classes as VirtualGL. It can be used to test the latency and throughput of any TCP/IP connection, with or without SSL encryption. The VirtualGL Linux package installs NetTest in /opt/VirtualGL/bin. The VirtualGL Solaris package installs it in /opt/SUNWvgl/bin. The Windows installer installs it in c:\program files\VirtualGL-{version}-{build} by default.

To use NetTest, first start up the nettest server on one end of the connection:

nettest -server [-ssl]

(use -ssl if you want to test the performance of SSL encryption over this particular connection.)

Next, start the client on the other end of the connection:

nettest -client {server} [-ssl]

Replace {server} with the hostname or IP address of the machine where the NetTest server is running. (Use -ssl if the NetTest server is running in SSL mode.)

The nettest client will produce output similar to the following:

TCP transfer performance between localhost and {server}:

Transfer size  1/2 Round-Trip      Throughput
(bytes)                (msec)        (MB/sec)
1                    0.176896        0.005391
2                    0.179391        0.010632
4                    0.181600        0.021006
8                    0.181292        0.042083
16                   0.181694        0.083981
32                   0.181690        0.167965
64                   0.182010        0.335339
128                  0.182197        0.669991
256                  0.183593        1.329795
512                  0.183800        2.656586
1024                 0.186189        5.245015
2048                 0.379702        5.143834
4096                 0.546805        7.143778
8192                 0.908712        8.597335
16384                1.643810        9.505359
32768                2.961701       10.551368
65536                5.769007       10.833754
131072              11.313003       11.049232
262144              22.412990       11.154246
524288              44.760510       11.170561
1048576             89.294810       11.198859
2097152            178.426602       11.209091
4194304            356.547194       11.218711

We can see that the throughput peaks at about 11.2 MB/sec. 1 MB = 1048576 bytes, so 11.2 MB/sec = 94 million bits per second, which is pretty good for a 100 Megabit connection. We can also see that, for small transfer sizes, the round-trip time is dominated by latency. The “latency” is the same thing as the 1/2 round-trip time for a zero-byte packet, which is about 0.18 milliseconds in this case.
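The unit conversion above can be checked with plain arithmetic, using the figures from the sample run:

```python
MB = 1048576  # NetTest reports throughput in units of 2^20 bytes/sec

throughput_mb = 11.2                  # peak MB/sec from the sample run above
bits_per_sec = throughput_mb * MB * 8
print(round(bits_per_sec / 1e6))      # 94 (million bits/sec, on a 100 Megabit link)

latency_ms = 0.176896                 # 1/2 round-trip time for the smallest transfer,
                                      # i.e. roughly 0.18 ms of connection latency
```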

CPUstat

CPUstat is available only in the VirtualGL Linux packages and is located in the same place as NetTest (/opt/VirtualGL/bin.) It measures the average, minimum, and peak CPU usage for all processors combined and for each processor individually. On Windows, this same functionality is provided in the Windows Performance Monitor, which is part of the operating system. On Solaris, the same data can be obtained through vmstat.

CPUstat measures the CPU usage over a given sample period (a few seconds) and continuously reports how much the CPU was utilized since the last sample period. Output for a particular sample looks something like this:

ALL :  51.0 (Usr= 47.5 Nice=  0.0 Sys=  3.5) / Min= 47.4 Max= 52.8 Avg= 50.8
cpu0:  20.5 (Usr= 19.5 Nice=  0.0 Sys=  1.0) / Min= 19.4 Max= 88.6 Avg= 45.7
cpu1:  81.5 (Usr= 75.5 Nice=  0.0 Sys=  6.0) / Min= 16.6 Max= 83.5 Avg= 56.3

The first column indicates what percentage of time the CPU was active since the last sample period (this is then broken down into what percentage of time the CPU spent running user, nice, and system/kernel code.) “ALL” indicates the average utilization across all CPUs since the last sample period. “Min”, “Max”, and “Avg” indicate a running minimum, maximum, and average of all samples since CPUstat was started.

Generally, if an application’s CPU usage is fairly steady, you can run CPUstat for a while and wait for the Max and Avg values in the “ALL” category to stabilize; these will then tell you the application’s peak and average CPU utilization.

TCBench

TCBench was born out of the need to compare VirtualGL’s performance to other thin client packages, some of which had frame spoiling features that couldn’t be disabled. TCBench measures the frame rate of a thin client system as seen from the client’s point of view. It does this by attaching to one of the client windows and continuously reading back a small area at the center of the window. While this may seem to be a somewhat non-rigorous test, experiments have shown that if care is taken to make sure that the application is updating the center of the window on every frame (such as in a spin animation), TCBench can produce quite accurate results. It has been sanity checked with VirtualGL’s internal profiling mechanism and with a variety of system-specific techniques, such as monitoring redraw events on the client’s windowing system.

The VirtualGL Linux package installs TCBench in /opt/VirtualGL/bin. The VirtualGL Solaris package installs TCBench in /opt/SUNWvgl/bin. The Windows installer installs it in c:\program files\VirtualGL-{version}-{build} by default. Run tcbench from the command line, and it will prompt you to click in the window you want to measure. That window should already have an automated animation of some sort running before you launch TCBench.

TCBench can also be used to measure the frame rate of applications that are running on the local console, although for extremely fast applications (those that exceed 40 fps on the local console), you may need to increase the sampling rate of TCBench to get accurate results. The default sampling rate of 50 samples/sec should be fine for measuring the throughput of VirtualGL and other thin client systems.

tcbench -?

gives the relevant command line switches that can be used to adjust the benchmark time, the sampling rate, and the x and y offset of the sampling area within the window.


18 The VirtualGL Configuration Dialog

Several of VirtualGL’s configuration parameters can be changed on the fly once an application has started. This is accomplished by using the VirtualGL configuration dialog, which can be activated by holding down the CTRL and SHIFT keys and pressing the F9 key while any one of the application’s windows is active. This displays a dialog box similar to the following:

[Figure: the VirtualGL configuration dialog]

You can use this dialog to enable or disable frame spoiling or to adjust the JPEG quality and subsampling. Changes are reflected immediately in the application.

Frame Spoiling
Clicking on this button will toggle frame spoiling on and off. If the button is highlighted (black), then frame spoiling is enabled.
Qual Preset: Broadband/T1
Clicking on this button will set the JPEG quality to 30 and the JPEG subsampling to 4:1:1, settings which will produce good performance on broadband connections (but at the expense of image quality.)
Qual Preset: LAN
Clicking on this button will set the JPEG quality to 95 and the JPEG subsampling to 4:4:4, settings which will produce perceptually lossless image quality (100 Mbit/sec switched LAN recommended.)
JPEG Quality
Click and drag the slider to change the JPEG quality to an arbitrary value between 1 and 100.
JPEG Subsampling
Click on any of the three buttons to change the JPEG subsampling. The highlighted (black) button indicates the current value.
Close Dialog
Close the dialog (you can also use the close gadget on the dialog window)

The JPEG quality and subsampling gadgets will only be shown if VirtualGL is running in Direct Mode. In Raw Mode, the only setting that can be changed with this dialog is frame spoiling.

The VGL_GUI environment variable can be used to change the key sequence used to pop up the dialog box. If the default of CTRL-SHIFT-F9 is not suitable, then set VGL_GUI to any combination of ctrl, shift, alt, and one of {f1, f2,..., f12} (these are not case sensitive.) For example:

export VGL_GUI=CTRL-F9

will cause the dialog box to pop up whenever CTRL-F9 is pressed.

To disable the VirtualGL dialog altogether, set VGL_GUI to none.

VirtualGL monitors the application’s X event loop to determine when a particular key sequence has been pressed. If an application is not monitoring key press events in its X event loop, then the VirtualGL configuration dialog might not pop up at all. There is unfortunately no workaround for this, but it should be a rare occurrence.


19 Advanced Configuration

19.1 Server Settings

You can control the operation of the VirtualGL faker in four different ways. Each method of configuration takes precedence over the previous method:

  1. Setting a configuration environment variable globally (for instance, in /etc/profile)
  2. Setting a configuration environment variable on a per-user basis (for instance, in ~/.bashrc)
  3. Setting a configuration environment variable only for the current shell session (for instance, export VGL_XXX={whatever})
  4. Passing a configuration option as an argument to vglrun. This effectively overrides any previous environment variable setting corresponding to that configuration option.
Each setting is listed below with its environment variable name, its vglrun command-line override (if any), a description, and its default value.
VGL_CLIENT
vglrun override: -cl <client display>
The X display where VirtualGL should send its image stream

When running in Direct Mode, VirtualGL uses a dedicated TCP/IP connection to transmit compressed images of an application’s OpenGL rendering area from the VirtualGL server to the VirtualGL client. Thus, the VirtualGL server needs to know on which machine the VirtualGL client software is running, and it needs to know which X display on that machine will be used to draw the application’s GUI. VirtualGL can normally surmise this by reading the DISPLAY environment variable (which lists the hostname and X display where all X11 traffic will be sent.) But in cases where X11 traffic is tunneled through SSh or another type of indirect X11 connection, the DISPLAY environment variable on the VirtualGL server may not point to the client machine. In these cases, set VGL_CLIENT to the display where the application’s GUI will end up. For example:

export VGL_CLIENT=my_client:0.0

If you are connecting to the VirtualGL server using SSh with X11 forwarding enabled, VirtualGL will try to guess an appropriate value for VGL_CLIENT based on the IP address of the SSh client, so you would only need to set VGL_CLIENT in this case if your configuration is unusual (such as if your client machine’s X server is occupying a display number other than 0 or if you are trying to forward VirtualGL’s image stream over SSh. See Chapter 9 for more details.)

** This option has no effect in “Raw” Mode. **
Default: if SSh X11 forwarding is being used, VirtualGL automatically sets VGL_CLIENT to {ssh_client}:0.0, where {ssh_client} is the IP address of the machine from which the SSh connection was initiated. Otherwise, VGL_CLIENT is unset, which tells VirtualGL to read the client hostname and X display from the DISPLAY environment variable instead.
VGL_COMPRESS=0 or VGL_COMPRESS=1
vglrun override: -c <0, 1>
0 = Raw Mode (send rendered images uncompressed via X11); 1 = Direct Mode (compress rendered images as JPEG and send them on a separate socket)

When this option is set to 0, VirtualGL will bypass its internal image compression pipeline and instead use XPutImage() to composite the rendered 3D images into the appropriate application window. This mode (“Raw Mode”) is primarily useful in conjunction with VNC, NX, or other remote display software that performs X11 rendering on the server and uses its own mechanism for compressing and transporting images to the client. Enabling Raw Mode on a remote X11 connection will result in uncompressed images being sent over the network, so it is unadvisable except on very fast networks (see Section 11.0.2.)

If this option is not specified, then VirtualGL’s default behavior is to use Direct Mode when the application is being displayed to a remote X server and to use Raw Mode otherwise. VirtualGL assumes that if the DISPLAY environment variable begins with a colon or with “unix:” (example: “:0.0”, “unix:1000.0”, etc.), then the X11 connection is local and thus doesn’t require image compression. Otherwise, it assumes that the X11 connection is remote and that compression is required. If the display string begins with “localhost” or with the server’s hostname, VGL assumes that the display is being tunneled through SSh, and its default behavior is to use Direct Mode in this case.

It is normally not necessary to set this configuration parameter unless you want to do something unusual (such as use Raw Mode over a remote X11 connection.) See Chapter 10 for more details.

NOTE: Stereo does not work with Raw Mode.

Default: compression enabled (“Direct Mode”) if the application is displaying to a remote X server; disabled (“Raw Mode”) otherwise
VGL_DISPLAY
vglrun override: -d <display or GLP device>
The display or GLP device to use for 3D rendering

If your server has multiple 3D graphics cards and you want the OpenGL rendering to be redirected to a display other than :0, set VGL_DISPLAY=:1.0 (or whichever display is appropriate.) This could be used, for instance, to support many application instances on a beefy multi-pipe graphics server.

GLP mode (Solaris/Sparc only):

Setting this option to glp will enable GLP mode and use the first framebuffer device listed in /etc/dt/config/GraphicsDevices to perform 3D rendering. You can also set this option to the pathname of a specific GLP device (example: /dev/fbs/jfb0.) See Section 7.1 for more details.
Default: :0
VGL_FPS
vglrun override: -fps <floating-point number greater than 0>
Limit the client/server frame rate to the specified number of frames per second

Setting VGL_FPS or passing -fps as an argument to vglrun will enable VirtualGL’s frame rate governor. When enabled, the frame rate governor will attempt to limit the overall throughput of the VirtualGL pipeline to the specified number of frames/second. If frame spoiling is disabled, this effectively limits the server’s 3D rendering frame rate as well. This option works regardless of whether VirtualGL is being run in Direct Mode (with compression enabled) or in Raw Mode (with compression disabled.)
Default: frame rate governor disabled
VGL_GAMMA=0, VGL_GAMMA=1, or VGL_GAMMA=<gamma correction factor>
vglrun override: -g, +g, or -gamma <gamma correction factor>
“Gamma” refers to the relationship between the intensity of light which your computer’s monitor is instructed to display and the intensity which it actually displays. The curve is an exponential curve of the form Y = X^G, where X is between 0 and 1. G is called the “gamma” of the monitor. PC monitors and TVs usually have a gamma of around 2.2.

Some of the math involved in 3D rendering assumes a linear gamma (G = 1.0), so technically speaking, 3D applications will not display with mathematical correctness unless the pixels are “gamma corrected” to counterbalance the non-linear response curve of the monitor. But some systems do not have any form of built-in gamma correction, and thus the applications developed for such systems have usually been designed to display properly without gamma correction. Gamma correction involves passing pixels through a function of the form X = W^(1/G), where G is the “gamma correction factor” and should be equal to the gamma of the monitor. So the final output is Y = X^G = (W^(1/G))^G = W, which describes a linear relationship between the intensity of the pixels drawn by the application and the intensity of the pixels displayed by the monitor.

VGL_GAMMA=1 or vglrun +g : Enable gamma correction with default settings

This option tells VirtualGL to enable gamma correction using the best available method. If VirtualGL is remotely displaying to a Solaris/Sparc X server which has gamma-corrected X visuals, then VGL will attempt to assign one of these visuals to the application. This causes the 3D output of the application to be gamma corrected by the factor specified in fbconfig on the client machine (default: 2.22.) Otherwise, if the X server does not have gamma-corrected X visuals or if the gamma-corrected visuals it has do not match the application’s needs, then VirtualGL performs gamma correction internally and uses a default gamma correction factor of 2.22. This option emulates the default behavior of OpenGL applications running locally on Sparc machines.

VGL_GAMMA=0 or vglrun -g : Disable gamma correction

This option tells VGL not to use gamma-corrected visuals, even if they are available on the X server, and disables VGL’s internal gamma correction system as well. This emulates the default behavior of OpenGL applications running locally on Linux or Solaris/x86 machines.

VGL_GAMMA={gamma correction factor} or vglrun -gamma {gamma correction factor} : Enable VGL’s internal gamma correction system with the specified gamma correction factor

If VGL_GAMMA is set to an arbitrary floating point value, then VirtualGL performs gamma correction internally using the specified value as the gamma correction factor. You can also specify a negative value to apply a “de-gamma” function. Specifying a gamma correction factor of G (where G < 0) is equivalent to specifying a gamma correction factor of -1/G.
Default: VGL_GAMMA=1 on Solaris/Sparc VGL servers, VGL_GAMMA=0 otherwise
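The gamma math above, including the negative “de-gamma” convention, can be verified numerically. This is a sketch of the documented per-pixel formulas, not VirtualGL source code:

```python
def gamma_correct(w, g):
    """Apply gamma correction X = W^(1/G) to a normalized pixel value w.

    A negative correction factor applies the "de-gamma" function, which
    the documentation defines as equivalent to a factor of -1/G.
    """
    if g < 0:
        g = -1.0 / g              # e.g. -2.22 behaves like a factor of 1/2.22
    return w ** (1.0 / g)

monitor_gamma = 2.22
w = 0.5                           # intensity the application drew
x = gamma_correct(w, monitor_gamma)   # corrected value sent to the monitor
y = x ** monitor_gamma                # intensity the monitor actually displays
print(abs(y - w) < 1e-12)         # True: displayed intensity matches the drawn one
```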
VGL_GLLIB
The location of an alternate OpenGL library

Normally, VirtualGL loads the first OpenGL dynamic library that it finds in the dynamic linker path (usually /usr/lib/libGL.so.1, /usr/lib64/libGL.so.1, or /usr/lib/64/libGL.so.1.) You can use this setting to explicitly specify another OpenGL dynamic library to load.

Normally, you shouldn’t need to muck with this unless something doesn’t work. However, this setting is necessary when using VirtualGL with Chromium.
VGL_GUI
Key sequence used to invoke the configuration dialog

VirtualGL will normally monitor an application’s X event queue and pop up the VirtualGL configuration dialog whenever CTRL-SHIFT-F9 is pressed. In the event that this interferes with a key sequence that the application is already using, you can redefine the key sequence used to pop up VGL’s configuration dialog by setting VGL_GUI to some combination of shift, ctrl, alt, and one of {f1, f2, ..., f12}. You can also set VGL_GUI to none to disable the configuration dialog altogether. See Chapter 18 for more details.
Default: shift-ctrl-f9
VGL_GUI_XTTHREADINIT
0 to prevent VGL from calling XtToolkitThreadInitialize()

Xt & Motif applications are supposed to call XtToolkitThreadInitialize() if they plan to access Xt functions from two or more threads simultaneously. But rarely, a multi-threaded Xt/Motif application may avoid calling XtToolkitThreadInitialize() and rely on the fact that avoiding this call disables application and process locks. This behavior is generally considered errant on the part of the application, but the application developers have probably figured out other ways around the potential instability that this situation creates.

The problem arises whenever VirtualGL pops up its configuration dialog (which is written using Xt.) In order to create this dialog, VirtualGL creates a new Xt thread and calls XtToolkitThreadInitialize() as it is supposed to do to guarantee thread safety. But if the application into which VGL is loaded exhibits the errant behavior described above, suddenly enabling application and process locks may cause the application to deadlock. Setting VGL_GUI_XTTHREADINIT to 0 will remove VGL’s call to XtToolkitThreadInitialize() and should thus eliminate the deadlock.

In short, if you try to pop up the VirtualGL config dialog and notice that it hangs the application, try setting VGL_GUI_XTTHREADINIT to 0.
Default: 1
VGL_INTERFRAME=0 or VGL_INTERFRAME=1
Enable/disable interframe image comparison

In Direct Mode, VGL will normally compare each image tile in the frame with the corresponding image tile in the previous frame and send only the tiles that have changed. Setting VGL_INTERFRAME to 0 disables this behavior.

Normally, you shouldn’t need to disable interframe comparison except in rare situations. This setting was introduced in order to work around a specific interaction issue between VirtualGL and Pro/ENGINEER v3. See Section 15 for more information.

** This option has no effect in “Raw” Mode. **
Default: inter-frame comparison enabled
VGL_LOG
Redirect the console output from the VirtualGL faker to a log file

Setting this environment variable to the pathname of a log file on the VirtualGL server will cause the VirtualGL faker to redirect all of its messages (including profiling and trace output) to the specified log file rather than to stderr.
Default: print all messages to stderr
VGL_NPROCS
vglrun override: -np <# of CPUs>, or -np 0 to automatically determine the optimal number of CPUs to use
Specify the number of CPUs to use for multi-threaded compression

VirtualGL can divide the task of compressing each frame among multiple server CPUs. This might speed up the overall throughput if the compression stage of the pipeline is the primary bottleneck. The default behavior (equivalent to setting VGL_NPROCS=0) is to use all but one of the available CPUs, up to a maximum of 3 total. On a large multiprocessor system, the speedup is almost linear up to 3 processors, but the algorithm scales very little past that point. VirtualGL will not allow more than 4 processors total to be used for compression, nor will it allow you to assign more processors than are available in the system.

** This option has no effect in “Raw” Mode. **
Default: 1 on 1P and 2P systems; 2 on 3P systems; 3 on 4P and larger systems
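The default policy described above (“all but one of the available CPUs, up to a maximum of 3”) can be expressed in one line. This is a sketch of the documented behavior, not VirtualGL source:

```python
def default_compress_procs(ncpus):
    """Documented default for VGL_NPROCS: all CPUs but one, capped at 3,
    and always at least 1."""
    return max(1, min(ncpus - 1, 3))

for n in (1, 2, 3, 4, 8):
    print(n, "->", default_compress_procs(n))
# 1 -> 1, 2 -> 1, 3 -> 2, 4 -> 3, 8 -> 3
```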
VGL_PORT
vglrun override: -p <port>
The TCP port to use when connecting to the client

** This option has no effect in “Raw” Mode. **
Default: 4242 for unencrypted connections, 4243 for SSL connections
VGL_PROFILE=0 or VGL_PROFILE=1
vglrun override: -pr or +pr
Enable/disable profiling output

If enabled, this will cause the VirtualGL faker to continuously benchmark itself and periodically print out the throughput of reading back, compressing, and sending pixels to the client.

See Chapter 17 for more details.
Default: profiling disabled
VGL_QUAL
vglrun override: -q <1-100>
An integer between 1 and 100 (inclusive)

This setting allows you to specify the quality of the JPEG compression. Lower is faster but also grainier. The default setting should produce perceptually lossless image quality.

** This option has no effect in “Raw” Mode. **
Default: 95
VGL_READBACK=0 or VGL_READBACK=1
Enable/disable readback

On rare occasions, it might be desirable to have VirtualGL redirect OpenGL rendering from an application into a Pbuffer but not automatically read back and send the rendered pixels. Some applications have their own mechanisms for reading back the buffer, so disabling VirtualGL’s readback mechanism prevents duplication of effort.

This feature was developed initially to support running ParaView in parallel using MPI. ParaView MPI normally uses MPI processes 1 through N as rendering servers, each drawing a portion of the geometry into a separate window on a separate X display. ParaView reads back these server windows and composites the pixels into the main application window, which is controlled by MPI process 0. By creating a script which passes a different value of VGL_DISPLAY and VGL_READBACK to each MPI process, it is possible to make all of the ParaView server processes render to off-screen buffers on different graphics cards while preventing VirtualGL from displaying any pixels except those generated by process 0.
Default: readback enabled
VGL_SPOIL=0 or VGL_SPOIL=1
vglrun override: -sp or +sp
Enable/disable frame spoiling

By default, VirtualGL will drop frames so as not to slow down the rendering rate of the server’s graphics engine. This should produce the best results with interactive applications, but it may be desirable to turn off frame spoiling when running benchmarks or other non-interactive applications. Turning off frame spoiling will force one frame to be read back and sent on each end-of-frame event, so that the frame rate reported by OpenGL benchmarks will accurately reflect the frame rate seen by the user. Disabling frame spoiling also prevents non-interactive applications from wasting graphics resources by rendering frames that will never be seen. With frame spoiling turned off, the 3D rendering pipeline behaves as if it is fill-rate limited to about 30 or 40 Megapixels/second, the maximum throughput of the VirtualGL system on current CPUs.
Default: spoiling enabled
VGL_SSL=0 or VGL_SSL=1
vglrun override: -s or +s
Tunnel the VirtualGL compressed image stream inside a secure socket layer

** This option has no effect in “Raw” Mode. **
Default: SSL disabled
VGL_SUBSAMP
vglrun override: -samp <411|422|444>
411, 422, or 444

This allows you to manually specify the level of chrominance subsampling in the JPEG compressor.

By default, VirtualGL uses no chrominance subsampling (AKA “4:4:4 subsampling”) when it compresses images for delivery to the client. Subsampling is premised on the fact that the human eye is more sensitive to changes in brightness than to changes in color. Since the JPEG image format uses a colorspace in which brightness (luminance) and color (chrominance) are separated into different channels, one can sample the brightness for every pixel and the color for every other pixel and produce an image which has 16 million colors but uses an average of only 16 bits per pixel instead of 24. This is called “4:2:2 subsampling”, since for every 4 pixels of luminance, there are only 2 pixels of each chrominance component. Likewise, one can sample every fourth chrominance component to produce a 16-million color image with only 12 bits per pixel. The latter is called “4:1:1 subsampling.” Subsampling decreases the amount of image data and thus increases the performance and decreases the network bandwidth usage, but subsampling can produce some visible artifacts. Subsampling artifacts are rarely observed with volume data, since it usually only contains 256 colors to begin with. But narrow, aliased lines and other sharp features on a black background will tend to produce artifacts when subsampling is enabled.

[Figure: the axis indicator from a popular visualization application, displayed with 4:4:4, 4:2:2, and 4:1:1 subsampling (respectively)]

NOTE: If you select 4:1:1 subsampling, VirtualGL will in fact try to use 4:2:0 instead. 4:2:0 samples every other pixel both horizontally and vertically rather than sampling every fourth pixel horizontally. But not all JPEG codecs support 4:2:0, so 4:1:1 is used when 4:2:0 is not available.

** This option has no effect in “Raw” Mode. **
Default: 444
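The bits-per-pixel figures quoted above (24, 16, and 12) follow directly from the sampling ratios. This is simple arithmetic on the uncompressed representation; JPEG entropy coding reduces the actual sizes much further:

```python
def average_bpp(luma_samples, chroma_samples_per_component, bits=8):
    """Average uncompressed bits per pixel for a given sampling ratio.

    For every `luma_samples` pixels there are `chroma_samples_per_component`
    samples of each of the two chrominance channels.
    """
    total_bits = bits * (luma_samples + 2 * chroma_samples_per_component)
    return total_bits / luma_samples

print(average_bpp(4, 4))  # 24.0 bits/pixel for 4:4:4 (no subsampling)
print(average_bpp(4, 2))  # 16.0 bits/pixel for 4:2:2
print(average_bpp(4, 1))  # 12.0 bits/pixel for 4:1:1 (or 4:2:0)
```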
VGL_SYNC=0 or VGL_SYNC=1
vglrun override: -sync or +sync
Enable/disable strict 2D/3D synchronization (necessary to pass GLX conformance tests)

Normally, VirtualGL’s operation is asynchronous from the point of view of the application. The application swaps the buffers or calls glFinish() or glFlush() or glXWaitGL(), and VirtualGL reads back the framebuffer and sends the pixels to the client’s display … eventually. This will work fine for the vast majority of applications, but it is not strictly conformant. Technically speaking, when an application calls glXWaitGL() or glFinish(), it is well within its rights to expect the OpenGL-rendered pixels to be immediately available in the X window. Fortunately, very few applications actually do expect this, but on rare occasions, an application may try to use XGetImage() or other X11 functions to obtain a bitmap of the pixels that were rendered by OpenGL. Enabling VGL_SYNC is a somewhat extreme measure that may be needed to get such applications to work properly. It was developed primarily as a way to pass the GLX conformance suite (conformx, specifically.) When VGL_SYNC is enabled, every call to glFinish() or glXWaitGL() will cause the contents of the server’s framebuffer to be read back and synchronously drawn into the client’s window without compression or frame spoiling. The call to glFinish() or glXWaitGL() will not return until VirtualGL has verified that the pixels have been delivered into the client’s window. As such, enabling this mode can have potentially dire effects on performance.
Default: synchronization disabled
VGL_TILESIZE
A number between 8 and 1024 (inclusive)

Normally, in Direct Mode, VirtualGL will divide an OpenGL window into tiles of 256x256 pixels, compare each tile vs. the previous frame, and only compress & send the tiles which have changed. It will also divide up the task of compressing these tiles among the available CPUs in a round robin fashion, if multi-threaded compression is enabled. There are several tradeoffs that must be considered when choosing a tile size:

Smaller tile sizes:
  • Better parallel scalability
  • Worse compression efficiency
  • Better inter-frame optimization
  • Worse network efficiency
Larger tile sizes:
  • Worse parallel scalability
  • Better compression efficiency
  • Worse inter-frame optimization
  • Better network efficiency

Smaller tiles can more easily be divided up among multiple CPUs, but they compress less efficiently (and less quickly) on an individual basis. Using larger tiles can reduce traffic to the client by allowing the server to send only one frame update instead of many. But on the flip side, using larger tiles decreases the chance that a tile will be unchanged from the previous frame. Thus, the server may only send one or two packets per frame, but the cumulative size of those packets may be much larger than if a smaller tile size was used.

256x256 was chosen as the default because, in experiments, it provided the best balance between scalability and efficiency on the platforms that VirtualGL supports.

** This option has no effect in “Raw” Mode. **
Default: 256
VGL_TRACE=0 or VGL_TRACE=1
vglrun override: -tr or +tr
Enable/disable tracing
Enable/disable tracing

When tracing is enabled, VirtualGL will log all calls to the GLX and X11 functions it is intercepting, as well as the arguments, return values, and execution times for those functions. This is useful when diagnosing interaction problems between VirtualGL and a particular OpenGL application.
Default: tracing disabled
VGL_VERBOSE=0 or VGL_VERBOSE=1
vglrun override: -v or +v
Enable/disable verbosity
Enable/disable verbosity

When in verbose mode, VirtualGL will reveal some of the decisions it makes behind the scenes, such as which code path it is using to compress JPEG images, which type of X11 drawing it is using, etc. This can be helpful when diagnosing performance problems.
Default: verbosity disabled
VGL_X11LIB
The location of an alternate X11 library

Normally, VirtualGL loads the first X11 dynamic library that it finds in the dynamic linker path (usually /usr/lib/libX11.so.?, /usr/lib/64/libX11.so.?, /usr/X11R6/lib/libX11.so.?, or /usr/X11R6/lib64/libX11.so.?.) You can use this setting to explicitly specify another X11 dynamic library to load.

Normally, you shouldn’t need to muck with this unless something doesn’t work.
VGL_XVENDOR
Return a fake X11 vendor string when the application calls XServerVendor()

Some applications expect XServerVendor() to return a particular value, which the application (sometimes erroneously) uses to figure out whether it’s running locally or remotely. This setting allows you to fool such applications into thinking they’re running on a “local” X server rather than a remote connection.

19.2 Client Settings

Environment Variables

Each setting is listed below with its description and default value.
VGL_PROFILE=0 or VGL_PROFILE=1
Enable/disable profiling output

If enabled, this will cause the VirtualGL client to continuously benchmark itself and periodically print out the throughput of decompressing and drawing pixels into the application window.

See Chapter 17 for more details.
Default: profiling disabled
VGL_VERBOSE=0 or VGL_VERBOSE=1
Enable/disable verbosity

When in verbose mode, VirtualGL will reveal some of the decisions it makes behind the scenes, such as which code path it is using to decompress JPEG images, which type of X11 drawing it is using, etc. This can be helpful when diagnosing performance problems.
Default: verbosity disabled

vglclient Command-Line Arguments

Each argument is listed below with its description and default behavior.
-port <port number>
Causes the client to listen for unencrypted connections on the specified TCP port
Default: 4242
-sslport <port number>
Causes the client to listen for SSL connections on the specified TCP port
Default: 4243
-sslonly
Causes the client to reject all unencrypted connections
Default: accept both SSL and unencrypted connections
-nossl
Causes the client to reject all SSL connections
Default: accept both SSL and unencrypted connections
-l <log file>
Redirect all output from the client to the specified file
Default: output goes to stderr
-x
Use X11 functions to draw pixels into the application window
Default: use OpenGL on Solaris/Sparc or if stereo is enabled; use X11 otherwise
-gl
Use OpenGL functions to draw pixels into the application window
Default: use OpenGL on Solaris/Sparc or if stereo is enabled; use X11 otherwise