Creating Syzygy Programs
Application Frameworks
Syzygy programs are based on a framework or application object that manages the many tasks of a networked VR application. The framework object is responsible for program launching and shutdown. It also handles communications, rendering synchronization, graphics, sound, and input-event handling. The framework supports communication across different computer architectures and operating systems, so you can mix and match computers at will in your cluster. For example, a cluster could be composed of both big-endian and little-endian machines.
The only currently-supported application framework is the master/slave. In this framework the programmer is in charge of sharing data between instances of the application so it requires some care to produce an application that displays consistently
on all cluster render nodes. In a program based on the arMasterSlaveFramework
class, a copy of the application will run on each render node. We refer to such applications as master/slave because one instance of the application,
the master, controls the operation of the others, the slaves. Rendering is done with OpenGL commands written by the application programmer, although the framework handles the computation of the projection matrix and utility classes and functions are
available for loading certain kinds of content (e.g. texture images and 3-D models in obj format). The master/slave framework thus offers an easy migration path for existing applications to cluster-based operation; two of the included sample applications,
"atlantis" and "coaster", were easily ported to Syzygy from the GLUT distribution using the master/slave framework.
The master/slave framework class, arMasterSlaveFramework, is a subclass of arSZGAppFramework.
Samples can be found under szg/src/demo. Please see the examples chapter of this documentation for more information.
Framework Features
The framework has routines for setting the unit conversion factors for both rendering and sound production (both default to 1, i.e. the default units are feet):
virtual void arSZGAppFramework::setUnitConversion( float program_units_per_foot ); virtual void arSZGAppFramework::setUnitSoundConversion( float program_units_per_foot );
Note that if the unit conversion factors are set, this should be done before the framework's init() method is called, because they may be needed when configuration database parameters are read during init().
Routines for setting the user's interocular separation and the near and far clipping planes:
void arSZGAppFramework::setEyeSpacing( float feet ); void arSZGAppFramework::setClipPlanes( float near, float far );
The clipping planes are always set by the application programmer, using the units specified by the unit conversion factors. The eye spacing, on the other hand, is read from the Syzygy database; if you need to set it in code, do it after the framework's init(), because otherwise it will be overridden by the value from the database.
Accessing data in Cluster Mode
Previously, program data had to be located in a subdirectory of a directory on the Syzygy data path (specified by the database variable SZG_DATA/path, see Syzygy Resource Path Configuration). This path is accessed using this method:
const string arSZGAppFramework::getDataPath()
and the path of a file in the my_app subdirectory of a directory on this path can be found using:
string ar_fileFind( <fileName>, 'my_app', framework.getDataPath() );
Starting with Syzygy 1.1, most data files can be placed in the same directory as the program (again, see Syzygy Resource Path Configuration) and read using normal file-access methods with application-relative paths. For example, a file 'foo.txt' in the same directory as the application can be read using FILE* f = fopen("foo.txt","r"), even in cluster mode.
Things get a bit trickier when it comes to files that have to be opened by a Syzygy service. In a master/slave program only sound files fall into this category (they must be read and played by SoundRender); in a distributed scene graph program, any data file such as a texture map or a .obj model must be read by szgrender.
These files can still be placed with the application, but the application must then tell Syzygy where to find them. It does so using the framework's setDataBundlePath() method, passing in the group name of a Syzygy path variable and a subdirectory name.
Two examples:
- A C++ app, located in subdirectory 'my_app' in directory 'apps', which in turn is part of the SZG_EXEC/path variable. A set of sound files are also in the 'my_app' directory. Calling
framework.setDataBundlePath( "SZG_EXEC", "my_app" );
will allow SoundRender to find the files.
- A Python program, located in a subdirectory 'my_app' of directory 'apps', which is listed in the Python search path SZG_PYTHON/path. The appropriate call would be:
framework.setDataBundlePath( "SZG_PYTHON", "my_app" )
Input Events
The framework provides methods for polling the current state of the various input events. Input events are distinguished by type (button, axis, or matrix) and index (an unsigned integer). By convention, the head position and orientation are assumed to be contained in matrix event 0 and the hand placement in matrix event 1; axis 0 is assumed to represent left/right joystick motion and axis 1 front/back joystick motion, for driving.
int arSZGAppFramework::getButton( unsigned int index );
Returns the current state (0 or 1) of the named button. Button numbering starts at 0. However, the following two routines are more commonly used:
int arSZGAppFramework::getOnButton( unsigned int index ); int arSZGAppFramework::getOffButton( unsigned int index );
These return 1 if the specified button has, respectively, been pressed or released since the previous frame, and 0 otherwise.
float arSZGAppFramework::getAxis( unsigned int index );
Returns the value of the named axis. Axis events representing joysticks are assumed to lie between -1.0 and 1.0.
arMatrix4 arSZGAppFramework::getMatrix( unsigned int index );
Returns the value of the named matrix. Information about the arMatrix4 object can be found by examining szg/src/math/arMath.h.
The framework is capable of handling unlimited event indices, and also has methods for determining the number of event indices available:
unsigned int arSZGAppFramework::getNumberButtons(); unsigned int arSZGAppFramework::getNumberAxes(); unsigned int arSZGAppFramework::getNumberMatrices();
The framework provides a pointer to the current input state, which is not commonly used but provides more complete access to input state:
const arInputState* getInputState();
The framework supports installation of a single-event callback or filter:
bool arSZGAppFramework::setEventFilter( arFrameworkEventFilter* filter ); void arSZGAppFramework::setEventCallback( bool (*eventCallback)( arSZGAppFramework&, arInputEvent&, arCallbackEventFilter& ) );
These permit direct access to every single input event immediately as it comes in. They're functionally equivalent; the first is just a bit more object-oriented. Note that they run in a separate thread from the rest of your application, so only use them if you're familiar with multi-threaded programming.
The framework supports the Syzygy navigation utilities.
Other Stuff
Speech (Windows only). If the Microsoft Speech API (SAPI) was present during compilation, the framework supports text-to-speech using the following method:
void arSZGAppFramework::speak( const std::string& message );
In Cluster Mode the utterance is performed by SoundRender, so that's the component that needs to be running on Windows for this to work. In Standalone Mode the sound component gets embedded into the application, so it has to be a Windows app. You can control the volume, pitch, etc. using embedded XML tags, see the SAPI documentation.
bool arSZGAppFramework::setInputSimulator( arInputSimulator* sim );
Installs your own version of the Input Simulator.
string arSZGAppFramework::getLabel();
Returns the name of your application.
bool arSZGAppFramework::getStandalone();
Is the program running in standalone (true) or cluster (false) mode?
arHead* arSZGAppFramework::getHead();
Get a pointer to the Head object, which contains information about eye spacing and so on. In a master/slave application it gets shared from master to slaves, so don't change parameters on slaves.
virtual void arSZGAppFramework::setFixedHeadMode( bool isOn );
Force fixed-head mode (but only for screens that are configured to allow it). See Graphics Configuration.
virtual arMatrix4 arSZGAppFramework::getMidEyeMatrix();
Return the placement (position+orientation) matrix for the midpoint of the two eyes.
virtual arVector3 arSZGAppFramework::getMidEyePosition();
Return the position of the midpoint of the two eyes.
arAppLauncher* arSZGAppFramework::getAppLauncher();
Get a pointer to the arAppLauncher, which contains information about the virtual computer the application is running on.
arGUIWindowManager* arSZGAppFramework::getWindowManager( void );
Get a pointer to the Syzygy window manager. Sorry, it isn't documented at all yet; you'll have to look at the header files to figure out what you can do with it (look in src/graphics at any file whose name begins with "arGUI").
arSZGClient* arSZGAppFramework::getSZGClient();
Get a pointer to the arSZGClient, which allows you to get and set parameters in the Syzygy database and to send messages to other Syzygy components. There are several sections devoted to the arSZGClient in the Distributed Operating System chapter.
Master/Slave Framework
Writing a master/slave program is conceptually similar to writing an OpenGL/GLUT program: Rendering is done by OpenGL calls that you write, and the application framework controls an event loop that calls callback functions that you define. The similarities are not accidental, as this framework was initially based on GLUT. It isn't any more, however.
NOTE: Starting with Syzygy 1.1, the GLUT headers are no longer automatically included in master/slave programs. You can still use a few of the GLUT rendering functions (the ones for drawing objects), but calling any of the other functions, such as those for window creation/manipulation, will crash your program. You will need to include the GLUT header if you want to use e.g. glutSolidCube() and similar functions. This is done differently on different platforms, so we have provided the new header file arGlut.h to handle this for you.
You can use an arMasterSlaveFramework in one of two ways. In the old way, during framework initialization you install callback functions to be called at specific points in the event loop. In the new, more object-oriented way, you create a sub-class of the arMasterSlaveFramework class and in it override the methods that call the callback functions. There is one exception, the single-event callback, which must be handled by installing a function.
The directory szg/skeleton represents a build template for master/slave applications. Copy that entire directory wherever you want (re-name it if you want). See the relevant section of Compiling C++ Programs for more information. szg/skeleton/src contains two files, skeleton.cpp and oopskel.cpp. These do exactly the same thing, but skeleton.cpp does it by installing callbacks and oopskel.cpp does it by sub-classing.
Now we'll list the callbacks roughly in the order in which they are called. The old-style callback function will be listed together with the new-style callback method. Note that in the former case, each callback function's signature includes a reference (fw) to the framework object; in the latter case this is of course not necessary, as the callbacks are methods of the framework.
void arMasterSlaveFramework::setStartCallback( bool (*startCallback)(arMasterSlaveFramework& fw, arSZGClient& client) ); virtual bool arMasterSlaveFramework::onStart( arSZGClient& SZGClient );
Called to do application-global initialization. Must not do OpenGL initialization, as it is called before a window is created. If it does not return true, the application will abort.
void arMasterSlaveFramework::setWindowStartGLCallback( void (*windowStartGL)( arMasterSlaveFramework&, arGUIWindowInfo* ) ); virtual void arMasterSlaveFramework::onWindowStartGL( arGUIWindowInfo* );
This is where you do OpenGL initialization. Called once/window (your app may have multiple windows, this is specified in the Graphics Configuration), immediately after window creation.
void arSZGAppFramework::setEventQueueCallback( bool (*eventQueue)( arSZGAppFramework& fw, arInputEventQueue& theQueue ) ); virtual void arSZGAppFramework::onProcessEventQueue( arInputEventQueue& theQueue );
Called once/frame, only on the master, to process buffered input events. The arInputEventQueue contains all events received since the previous frame. Note that this is only here for completeness; in practice it's usually easier to perform the same tasks in the pre-exchange callback below, and if you really need immediate access to particular input events, see the section on Advanced Input Event Handling below.
void arMasterSlaveFramework::setPreExchangeCallback( void (*preExchange)(arMasterSlaveFramework& fw) ); virtual void arMasterSlaveFramework::onPreExchange( void );
Called once/frame only on the master, after buffered input events have been processed and before data are transferred to the slaves. The current input state can be polled using the get<event_type>() methods described above. This is usually where user interaction is handled, after which data are packed into the framework for transfer to slaves.
void arMasterSlaveFramework::setPostExchangeCallback( void (*postExchange)(arMasterSlaveFramework& fw) ); virtual void arMasterSlaveFramework::onPostExchange( void );
Called once/frame on master and slaves after data transfer from the master. Some additional render-related processing can be done here.
void arMasterSlaveFramework::setWindowCallback( void (*windowCallback)( arMasterSlaveFramework& ) ); virtual void arMasterSlaveFramework::onWindowInit( void );
Prepare the window for rendering. The default behavior is to call the following function (which you can also call):
void ar_defaultWindowInitCallback() { glEnable(GL_DEPTH_TEST); glColorMask( GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE ); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); }
void arMasterSlaveFramework::setDrawCallback( void (*draw)(arMasterSlaveFramework& fw, arGraphicsWindow& win, arViewport& vp ) ); virtual void arMasterSlaveFramework::onDraw( arGraphicsWindow& win, arViewport& vp );
Called possibly multiple times/frame (actually, once/viewport) to draw a viewport. Note that a Syzygy viewport is a bit more than an OpenGL viewport; for example, it includes a specification of color buffers, so anaglyph (red/green) stereo rendering is done using two Syzygy viewports, as is OpenGL hardware (stereo-buffered) stereo. This of course means that you shouldn't do any computations in this callback, only rendering: nothing that will change the state of your application.
void arMasterSlaveFramework::setDisconnectDrawCallback( void (*disConnDraw)( arMasterSlaveFramework& ) ); virtual void arMasterSlaveFramework::onDisconnectDraw( void );
Called to draw the screen once/frame on slaves that are not connected to the master. This is for putting up a splash screen or whatever. Note that the window-init callback is not called, so you need to clear the window yourself.
void arMasterSlaveFramework::setPlayCallback( void (*play)(arMasterSlaveFramework& fw) );
The sound callback; called once/frame to play sounds (see the Sequence of Operations section below).
void arMasterSlaveFramework::setWindowEventCallback( void (*windowEvent)( arMasterSlaveFramework&, arGUIWindowInfo* ) ); virtual void arMasterSlaveFramework::onWindowEvent( arGUIWindowInfo* );
Called once for each GUI window event (e.g. resizing, dragging, maximizing, etc.). The passed structure is defined in szg/src/graphics/arGUIInfo.h.
void arMasterSlaveFramework::setExitCallback( void (*cleanup)( arMasterSlaveFramework& ) ); virtual void arMasterSlaveFramework::onCleanup( void );
Called before the application exits.
void setUserMessageCallback( void (*userMessageCallback)( arMasterSlaveFramework&, const std::string& messageBody ) ); virtual void arMasterSlaveFramework::onUserMessage( const string& messageBody );
Called whenever the application receives a user message (sent from e.g. the dmsg command-line program; see the Distributed Operating System chapter). The message_type should be the string 'user'.
void arMasterSlaveFramework::setOverlayCallback( void (*overlay)( arMasterSlaveFramework& ) ); virtual void arMasterSlaveFramework::onOverlay( void );
Not sure what this is for.
void arMasterSlaveFramework::setKeyboardCallback( void (*keyboard)( arMasterSlaveFramework&, arGUIKeyInfo* ) ); virtual void arMasterSlaveFramework::onKey( arGUIKeyInfo* );
Called for keypresses when running programs in Standalone Mode. The arGUIKeyInfo structure is defined in szg/src/graphics/arGUIInfo.h.
void arMasterSlaveFramework::setMouseCallback( void (*mouse)( arMasterSlaveFramework&, arGUIMouseInfo* ) ); virtual void arMasterSlaveFramework::onMouse( arGUIMouseInfo* );
Called for mouse movement when running programs in Standalone Mode. The arGUIMouseInfo structure is defined in szg/src/graphics/arGUIInfo.h.
Note that, strictly speaking, only the start callback is necessary. In other words, you could write a program that did not install any of the other callbacks, and it would compile and, provided your start callback returned true, it would run. It just wouldn't look very interesting, and it might print a warning on each frame to the effect that you hadn't set a draw callback. It is up to you to decide which callbacks your application needs.
Sequence of Operations
An arMasterSlaveFramework application begins by calling the init method, passing in the command line parameters:
if (!framework.init(argc,argv)) return 1;
The application should quit if init fails (returns false). Next, the various callbacks are set, including the important start callback where shared memory is registered. Other application-specific initialization can also occur at this time, but as of Syzygy 0.8, OpenGL initialization should not be done in the start callback! OpenGL initialization must now be done in the windowStartGL callback. The start callback is now called before windows are created, whereas the windowStartGL callback comes after window creation. The old start callback was split like this because now Syzygy applications can open more than one window. The start callback is called once for the entire application and the windowStartGL callback is called once for each window.
Finally, the application should call the start() method to set the application in motion. It first executes the user-defined startCallback(...). If this callback returns false, the start() method returns false. Otherwise, it calls the user-defined windowStartGL() callback once for each graphics window (usually just one). Finally, it begins running an event loop defined by the other callbacks. As with init(...), if start() returns false then the application should terminate.
if (!framework.start()) return 1;
We now detail the event loop:
- Poll input devices: The master application instance is connected to input devices. Here, it copies the current values into memory so they can be exported via the getButton(...), getAxis(...), and getMatrix(...) methods. As it does this, it calls any
user-defined event filter or single-event callback installed using setEventCallback() on each event and caches the result. The use of these cached values ensures coherency in applications that depend on input device state in the event loop stages
occurring after shared-memory export.
- Call the user-defined eventQueue() callback: This is an alternative to the preExchange() callback. It provides the complete queue of events that have arrived since the last frame; otherwise it is functionally identical to preExchange(). Generally
speaking, if you need to ensure access to every single event, it is easier to use the single-event callback in conjunction with preExchange(), but this callback is provided for those who prefer working with event queues.
- Call the user-defined preExchange() callback (master only): Take an action before shared memory is exported from the master application instance to the slave instances. Called only on the master. This is where you would normally put code to handle
user interaction and to do state-changing computations.
- Shared memory export: Shared memory is exported from the master application instance to the slave application instances. This includes both user-defined blocks of shared memory and some system-level information. The system material includes the current
time (milliseconds elapsed since initialization on the master) and the time needed to execute the last event loop. It also includes a navigation matrix and the cached input device values.
- Call the user-defined postExchange() callback: Take whatever action the user specified. Note that it is safe to query input device values here; the input state is automatically transferred to the slaves during the exchange.
- Call the user-defined sound (play()) callback: Play sounds. See the Sound API documentation for examples of how to make sounds.
- Call the user-defined draw() callback: Set up the matrix stack using the current head position and information about the screen configuration attached to this pipe, then execute the user-defined draw callback.
- Synchronization: All connected application instances pause here until all are ready. A graphics buffer swap then occurs.
Data Transfer from Master to Slaves
Now, we examine the API in more depth, starting with the way the programmer registers shared memory. This should be done in the user-defined start callback. There are two kinds of shared memory: application-managed and framework-managed. Application-managed memory is fixed in size, whereas framework-managed memory is dynamic. The latter can be more convenient, but of course it means that you have to check the size in the slave instances before reading out the data.
Registering application-managed memory is done using the following method of the arMasterSlaveFramework object:
bool arMasterSlaveFramework::addTransferField(string fieldName, void* memoryPtr, arDataType theType, int numElements)
The parameter "fieldName" gives the memory a descriptive name. You pass in an already allocated pointer "memoryPtr" to a block of memory of type given by "theType" and of dimension "numElements". The data type needs to be one of AR_INT, AR_FLOAT, AR_DOUBLE, or AR_CHAR. Note that registering memory is done both in the master instance and the slave instances of the application. Once memory has been registered, the programmer uses the pointer normally, with awareness that the contents of the memory block are transfered from the master to the slaves in step 3 of the event loop.
As an example, the following statement registers a block of 16 floats:
framework.addTransferField("manipulation matrix", floatPtr, AR_FLOAT, 16);
Framework-managed shared memory is registered in a similar way:
bool arMasterSlaveFramework::addInternalTransferField( std::string fieldName, arDataType dataType, int numElements );
The pointer argument is omitted, and the numElements argument now denotes the initial size. The size can be changed by calling:
bool arMasterSlaveFramework::setInternalTransferFieldSize( std::string fieldName, arDataType dataType, int newSize );
One then gets a pointer to the memory, on either master or slave, using:
void* arMasterSlaveFramework::getTransferField( std::string fieldName, arDataType dataType, int& numElements );
...with the actual size being returned in the numElements parameter, which is a reference.
There is a third, more advanced way to transfer data. Specifically, if you have a variable-sized set of objects of a class that you have defined, you can create an STL-type container for them that is easily synchronized between master and slaves, with objects automatically created and deleted on slaves to mirror the set on the master. See the comments in szg/src/framework/arMSVectorSynchronizer.h for details.
Time
The arMasterSlaveFramework objects also maintain consistent time across nodes. This can be consistently accessed after the shared-memory exchange step of the event loop.
double arMasterSlaveFramework::getTime()
Returns the time in milliseconds that have elapsed on the master since completion of initialization.
double arMasterSlaveFramework::getLastFrameTime()
Returns the time in milliseconds for the last iteration of the event loop, measured from one "poll input devices" step to the next.
Sometimes it is necessary to determine if one is the master node or not. This is done by the following API call:
bool arMasterSlaveFramework::getMaster()
Returns whether or not this is the master application instance.
As mentioned above, this framework supports the navigation utilities. Any of the routines that modify this navigation may be used, but they should only be called on the master in the preExchange() callback. The framework automatically copies this matrix from the master to each of the slaves. As mentioned in the doc chapter on navigation, the frameworks have two navigation-related methods:
void arMasterSlaveFramework::navUpdate() void arMasterSlaveFramework::loadNavMatrix()
navUpdate()
, like other navigation-matrix modifying routines, should only be called on the master in preExchange().
loadNavMatrix()
, which loads the current navigation matrix onto the OpenGL matrix stack, should be called on all instances at the beginning of the drawCallback().
Finally, the arMasterSlaveFramework object includes an internal graphics database that uses the same API as that used in writing distributed scene graph applications. However, in this case, the scene graph database is not shared between master and slaves; each instance of the application has its own independent database. This functionality is included so that programmers can make use of arGraphicsDatabase features, like import filters for 3ds objects. Manipulation of the database can be done using the API outlined in the scene graph documentation chapter. Please note that the "dgSetGraphicsDatabase" command is not necessary in this context; it is called automatically by the framework object. We conclude with the one arMasterSlaveFramework method specifically tailored to this:
void arMasterSlaveFramework::draw()
Draw the internal graphics database.
Finally, it should be possible to integrate master/slave applications with other libraries that themselves seek to control the event loop or that are based on a graphics system other than OpenGL. To make this possible, the programmer needs to issue the following call instead of start():
bool arMasterSlaveFramework::startWithoutGLUT()
As before, the program should abort if this call returns false. The programmer now has responsibility for calling (or causing to be called) a preDraw() method before each frame is drawn and a postDraw() method after each frame is drawn (but before the buffer swap command has been issued). Methods for retrieving the framework's computed projection and modelview matrices are also provided. This enables the programmer to directly manipulate the viewing API with which he is working.
void arMasterSlaveFramework::preDraw()
Executes those parts of the event loop that occur before drawing.
void arMasterSlaveFramework::postDraw()
Executes those parts of the event loop that occur after drawing but before the buffer swap (really just synchronization).
arMatrix4 arMasterSlaveFramework::getProjectionMatrix()
Returns the projection matrix calculated by the framework based on screen characteristics, head position, and head orientation.
arMatrix4 arMasterSlaveFramework::getModelviewMatrix()
Returns the modelview matrix calculated by the framework based on screen characteristics, head position, and head orientation.
Quick start for porting a GLUT application
Many GLUT applications can be easily turned into Master/Slave applications. A first step is creating a build environment that is compatible with Syzygy. The build template contained in szg/skeleton is discussed in the final section of the Compiling C++ Programs chapter.
szg/skeleton/src contains two simple master/slave programs. For some other examples, see the following demos in szg/src/demo:
- atlantis
- coaster
- hspace
- schprel
Here is a general overview of the steps necessary to do a quick and dirty port:
In the source file containing main(),
#include "arMasterSlaveFramework.h"
If you use any of the glut rendering functions (e.g. glutSolidTeapot()), also:
#include "arGlut.h"
Please note that these are the only legal GLUT functions in a Syzygy program. Use any of the window/event-related GLUT functions and your program will go down in flames.
In main(),
arMasterSlaveFramework framework; if( !framework.init( argc, argv ) ) { return 1; } framework.setStartCallback( ... ); framework.setWindowStartGLCallback( ... ); framework.setPreExchangeCallback( ... ); framework.setDrawCallback( ... ); framework.setKeyboardCallback( ... ); ...
Generally speaking, the init() function of your GLUT program should be split between the first two callbacks. Application-global initialization should go into the start callback (which should return true). OpenGL state initialization should go into the WindowStartGL callback. The start callback is called once in the body of the framework's start() method (which does not return) while the WindowStartGL callback is called once upon each window creation.
Sometimes computations and data exchanges need to occur before the scene is drawn. Computations whose results need to be propagated from master to slaves should occur in the framework's preExchangeCallback(...), which occurs before the data-sharing exchange between master and slaves. These computations might, for instance, transform input events into navigational information. On the other hand, if each slave bases its actions on the exchanged input state, then that work might occur in the postExchangeCallback(...), which occurs after the data-sharing exchange between master and slaves.
Please note that only the master does the preExchange during an event loop, while the master and all slaves connected to a master do the postExchange. Unconnected slaves do not do the postExchange.
The user-defined display callback should go into the draw callback of the framework. This, and the WindowStartGL callback should be the only two places in the application where OpenGL calls are made.
The keyboard callback is only available when running in Standalone Mode; for running in a cluster, you'll want to convert your program to change state based on button events polled using e.g. the framework's getOnButton() method (see Programming) in the pre-exchange.
After the framework has been initialized and all necessary callbacks registered, the event loop needs to be set in motion:
if( !framework.start() ) { return 1; } // not reached, start() does not return unless an error occurred
If framework.start() is called from another thread, at the end of main do
while( true ) { ar_usleep( 1000000 ); }
so that the application doesn't immediately terminate.
If the application needs to control the event loop (and window creation) itself, the framework can be started as follows:
if( !framework.start( false, false ) ) { return 1; } // this is reached, this version of start() **does** return
Thereafter, you need to invoke the framework's preDraw() method before drawing and the postDraw() method after drawing but before buffer swapping.
General issues with the framework:
- Do not use GLUT commands to manipulate the OpenGL window(s); let Syzygy handle that itself. If the application needs to be informed of window events (such as a resize, move, or close), the framework's WindowEvent callback can be used to get access to such events.
- You can have access to keyboard events if you set a keyboard callback. Otherwise, you will not get these events, and, in general, you will not have access to mouse events. Instead you should use Syzygy's event processing (of VR-style events, matrices, joystick-type events, etc.)
Compiling C++ Programs
If you're planning on modifying the libraries, see also Compiler Idiosyncrasies
Notes for Windows Users
Windows users need to take some extra care in compiling their programs.
- If you are using Visual Studio 6 on Windows, you must compile with STLport (SZG_STLPORT=TRUE). Using STLport is described in Supporting Libraries. On the other hand, do not use STLport with Visual Studio .NET (its native standard template library is OK).
- Another note for Windows users: you must set up your environment variables so that the Visual Studio compiler (cl.exe) and the Visual Studio linker (link.exe) are on your path and can find their header files. There's an option to set these environment variables when you install Visual Studio. If it's too late for that, the proper way to set this is via the Windows control panel. Please note that, after setting the variables in the control panel, you will have to restart your cygwin or MinGW shell in order to access the new values there. The proper values for the environment variables are outlined in the Visual Studio Environment Variables section.
- If you are compiling using cygwin on Windows then there can be a problem. The Visual Studio linker is called "link" and some installs of cygwin may also install a "link" program. If, during your build, you see an error message like:
link error: too many arguments
You will know you are getting the cygwin "link" program by mistake. You can solve this problem by typing "which link", seeing where the cygwin "link" resides, and removing it, or you can make sure the Visual Studio linker's directory comes first in your PATH environment variable.
- If you are using MinGW g++ on Windows (environment variable SZG_COMPILER=MINGW) then you can only build static-link versions of the Syzygy libraries (environment variable SZG_LINKING=STATIC). The build will skip the arSpacepadDriver and arIntelGamepadDriver input device drivers. However, unlike the Microsoft compilers, MinGW g++ can compile the Syzygy Python bindings with static linking enabled.
Environment Variables
A number of environment variables control the build process.
- SZGHOME: The top level directory of your distribution. This MUST be defined.
- SZGBIN:
- If $SZGBIN is defined, then this is used as the directory for Syzygy executables and shared libraries that you compile.
- If $SZGBIN is not defined, there are two possibilities, depending upon whether or not the developer style is "EASY".
- If SZG_DEVELOPER_STYLE=EASY, then your executables and shared libraries will be placed in $(SZGHOME)/bin. (This style is no longer supported.)
- Otherwise, you are assumed to have a "developer" version of Syzygy and your executables will be placed in $(SZGHOME)/bin/$(MACHINE_DIR), where $(MACHINE_DIR) is one of darwin, linux, mips4, or win32.
- SZGEXTERNAL: The location of any external libraries used. Optional if GLUT is preinstalled. See this section.
- SZG_LINKING: Controls whether libraries are built as dynamic (shared) libraries or static. Must be either 'DYNAMIC' or 'STATIC' (defaults to 'DYNAMIC'). If DYNAMIC, then the Syzygy libraries will be built as a set of shared libraries (.dlls on Windows, .so files on Unix) that applications load at runtime. If STATIC, the libraries are linked into each executable at build time. The latter creates larger files but much less of a version-control headache. Note that scene-graph plugins and Python bindings can only be built with this set to DYNAMIC, and that STATIC is the only option when compiling with MinGW g++ on Windows.
- SZG_COMPILER: Currently effective only on Windows. Must be 'VC6' (Visual C++ 6, the default), 'VC7' (Visual C++ 7), or 'MINGW' (MinGW g++).
- SZGDEBUG: By default, executables and shared libraries are built without debugging information. If you want to debug, set this to TRUE.
- SZG_STLPORT: This is only used on Windows. If you are using Visual Studio 6, you must set this to TRUE (and have the STLPort headers in your SZGEXTERNAL directory). If you are using Visual Studio 7 (.NET), then you must set this to FALSE.
Using the Build Template
Syzygy has a build system designed for writing cross-platform applications, hiding the differences from the programmer. If your build directory is simultaneously mounted on different platforms, you can compile both versions of your code at once.
The directory szg/skeleton represents a build template for your applications. Copy and re-name that entire directory as you like. The source files go in skeleton/src. Modify skeleton/build/makefiles/Makefile.my_app to specify what gets built. After you've set things up, typing 'make' in the top-level directory will build your application (assuming you've compiled the Syzygy libraries first, of course). Typing 'make clean' in the same place will remove built executables and object files. The compiled executable will end up in skeleton/build/<platform> (e.g. skeleton/build/win32 on Windows) and in the directory pointed to by the SZGBIN environment variable. If SZGBIN is not set, $SZGHOME/bin/<platform> will be used.
The contents of szg/skeleton:
- szg/skeleton/Makefile: The overall Makefile for the template project. Scans the host to determine the platform and then executes the appropriate machine-specific Makefile.
- szg/skeleton/src: Your source code goes here. It contains two versions of a simple master/slave program (see the Programming chapter): skeleton.cpp uses the old method of installing event callback functions, oopskel.cpp achieves exactly the same behavior by sub-classing the arMasterSlaveFramework class and overriding callback methods.
- szg/skeleton/build/makefiles/Makefile.my_app: This is the file you will edit to create your project. It's heavily commented, so in most cases it shouldn't be too hard to figure out how you need to modify it.
- szg/skeleton/build/darwin: Contains the Mac OS X Makefile and OS X objects. The machine-specific Makefile just sets the machine type and then includes the cross-platform Makefile.my_app.
- szg/skeleton/build/linux: Linux objects.
- szg/skeleton/build/mips4: Irix objects.
- szg/skeleton/build/win32: Windows objects.
Include-file hints
- Because Windows uses precompiled headers for speed, the first non-comment line in every .cpp file must be:
#include "arPrecompiled.h"
On Unix this does nothing. On Windows (using Visual C++, anyway), omitting this causes compile errors.
- Within the Syzygy core, the last Syzygy include file in, say, drivers/*.cpp must be:
#include "arDriversCalling.h"
- Instead of platform-dependent OpenGL includes, do:
#include "arGraphicsHeader.h"
and if you want to use the GLUT rendering functions (e.g. glutSolidCube()):
#include "arGlut.h"
- Each syzygy core class must be declared SZG_CALL. The exception is classes entirely contained within a .h file: no SZG_CALL for them, because that confuses Windows' linker.
Compiler Idiosyncrasies
We support several compilers: GNU g++ on every supported platform (imperfectly on Windows), and Microsoft Visual Studio 6 and 7 (.NET) on Windows. Users have successfully compiled with Visual Studio 2005. Occasionally we run into something that a particular compiler doesn't like, although it's sometimes quite difficult to figure out exactly what it doesn't like. Anyway, here's a list of things to avoid, particularly if you're modifying the Syzygy libraries themselves:
- Avoid multiple loop-variable initializations, e.g:
for (int i=0; i<N; ++i) { ... }
for (int i=0; i<N; ++i) { ... }
Visual C++ 6 will barf. Instead, use:
int i;
for (i=0; i<N; ++i) { ... }
for (i=0; i<N; ++i) { ... }
- Avoid complicated stuff in constructors:
string s("What's up, doc?");
is fine, but Visual C++ 6 MAY have problems with
DataStructure* s( functionReturningDataStructure() );
Unless you're initializing something with a constant, it's probably better to use:
DataStructure* s = functionReturningDataStructure();
- Be careful with the STL, especially STL algorithms. In one particular file a call to
copy(_consumeStack.begin(), _consumeStack.end(), _storageStack.end());
caused the application to crash when built with MinGW g++ (Windows). (Copying to _storageStack.end() doesn't grow the container; a std::back_inserter would be needed there.) Replacing it with an explicit loop
list<pair<char*,int> >::iterator iter;
for (iter = _consumeStack.begin(); iter != _consumeStack.end(); ++iter) {
  _storageStack.push_back(*iter);
}
made it all better.
Syzygy Python Programming
Important Change
As of revision number 1794 of July '09, the environment variables for building the python bindings have changed. Please read the linked section.
Background
The Syzygy Python bindings are built with SIP. We used to use SWIG, but SIP generates C++ code that's much easier to read, debug, and maintain, and in many instances it also runs faster.
On the minus side, the new bindings can only be compiled at the moment with g++, which means MinGW on Win32. They've been built using SIP 4.7.3, 4.12.2, and 4.13.2 (4.10, on the other hand, has a bug that prevents the build from working); MinGW g++ 3.4.2 and 4.5.2 (and various other versions on Linux); and Python 2.4-2.7.
Building the Bindings
- Get a reasonably recent version of Python. If you are using the Cygwin command shell on windows, make sure that it does not include its own version of Python. You must remove it if it exists, since cygwin's Python and Syzygy use incompatible object-file
formats (cygwin gcc vs. Visual C++). Install the native Windows version of Python from python.org.
- Set the environment variables SZGHOME, SZGBIN, SZGEXTERNAL, PATH, and any dynamic linker variables as described in Getting the Software. The
directory containing the Syzygy executables and libraries must be on your PATH and on the dynamic linker search path.
- Set the SZG_PYINCLUDE environment variable to the directory containing the Python.h header.
- Windows: If you e.g. installed Python 2.4 in c:\Python24 (the default), this would be C:/Python24/include (yes, forward slashes).
- Linux: This would typically be something like /usr/include/python2.5.
- Set the SIP_INCLUDE environment variable to the location of the sip.h header.
- If you're on Linux or MacOS or using MinGW g++ on Windows, set the SZG_PYLIB environment variable. On Linux and MacOS, this is the word 'python' followed by the Python version number with a decimal point, e.g. python2.5; on Windows it should not include the decimal point, e.g. python24. I know, it's ugly. This is the value for the -l linker argument.
- On Windows, set the SZG_PYLIB_PATH environment variable to the location of the Python link library. For example, with Python 2.4 in C:\python24 this would be C:/Python24/libs/python24.lib (also forward slashes).
- Check that the directory containing the sip executable is on your command search path and add its directory if it isn't already there.
- cd to szg/python_sip and type make. make clean cleans (deletes) things up. A number of files are generated/moved:
- SIP creates the file python_sip/build/<platform>/sip_szgpart0.cpp containing the C++ source for the bindings.
- This gets compiled to _szg.pyd (Windows) or _szg.so (Unix). This file and python_sip/src/szg.py are copied to $SZGBIN.
Before Using the Bindings
...You'll also need to install the Python OpenGL bindings package, PyOpenGL. We've tried PyOpenGL 2.0.1.09 and 3.0.1. For old-style OpenGL programs PyOpenGL 2 is much faster (follow the "View older releases" link on the SourceForge download page).
Figuring Out How to Write Syzygy Python Programs
Um. Sorry, no Python API reference yet. Working on it.
There are a few demos in szg/python_sip/demos.
You can look at a webpage on Syzygy Python Programming. It's a bit out of date, e.g. referring to the old PySZG module instead of szg, but mostly still accurate; eventually it'll be integrated with the Syzygy documentation. Refer to the corresponding programs in python_sip/demos/ to see how the new module works.
The only way to figure out what objects and methods exist is to look at the .sip source files in szg/python_sip/src. They're very similar to C++ header files, with some additional annotations. One thing to note: if a method parameter has the /Out/ annotation,
it will actually be returned by the Python wrapper method. For example, the arSZGClient
object has a method called
getAttributeVector3()
that returns a Syzygy database parameter as a 3-element vector. The entry in the file szg/python_sip/src/szgclient.sip looks like this:
bool getAttributeVector3( const string& groupName, const string& parameterName, arVector3& value /Out/ );
This means that the Python method should be called e.g. like this:
(statusBool, value) = szgClient.getAttributeVector3( 'SZG_HEAD', 'mid_eye_offset' )
Running Syzygy Python Programs in Standalone Mode
Besides looking in a standard location for imported modules (e.g. Python24\lib\site-packages\
on Windows), Python will search for modules in directories listed in the PYTHONPATH
environment variable. You must
set this variable to the path to the directory containing szg.py and _szg.pyd (or add the path to the variable if it already exists). Of course, it will also look in the same directory as a program for modules imported by that particular program.
We suggest putting data, textures, and sounds used by an application in the same directory (or a subdirectory) and opening them using relative paths.
Running Syzygy Python Programs in Cluster Mode
To understand this section, you should read about the Syzygy Cluster Mode.
Telling szgd Where Python Is
The preferred way to do this is to set the Syzygy database variable SZG_PYTHON/executable to the full path to the python executable, e.g. on Windows it might be C:\Python24\python (omit the '.exe') and on Linux it could be the output of the which python command. It must begin with one of the base paths passed as command-line arguments to szgd (we suggest including the entire path to Python in the szgd base paths argument, minus the '.exe' on Windows).
Telling szgd Where Your Program Is
There is a Syzygy database variable SZG_PYTHON/path that should be set to a semicolon-delimited list of absolute paths to directories. Each of these paths must begin with one of the base paths passed as command-line arguments to szgd. szgd
will
search for your program in each directory in this list and in all directories each one contains. This search only goes one level down the directory tree, i.e. it doesn't include subdirectories of subdirectories of directories in the
list. This allows you to place each program in a separate directory together with its data, without having to modify SZG_PYTHON/path
for each new program.
The Python Module Import Path
As mentioned above, Python uses the PYTHONPATH environment variable as a search path for imported modules. The following Syzygy database variables are dynamically prepended to PYTHONPATH (i.e. their current values are prepended when a program is launched, then removed): SZG_PYTHON/path, SZG_PYTHON/lib_path, and SZG_EXEC/path. These paths will be searched in the order given. Note that only the directories listed are searched by Python, not subdirectories.
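If you're unsure what the resulting search path looks like, you can check it empirically: launch a child interpreter with a modified PYTHONPATH and inspect its sys.path. A small standalone check (the directory name my_szg_modules is just an arbitrary example):

```python
import os
import subprocess
import sys

# Launch a child interpreter with an extra PYTHONPATH entry and ask it
# to print its module search path, one directory per line.
extra = os.path.abspath('my_szg_modules')  # arbitrary example directory
env = dict(os.environ)
env['PYTHONPATH'] = extra + os.pathsep + env.get('PYTHONPATH', '')
out = subprocess.run(
    [sys.executable, '-c', 'import sys; print("\\n".join(sys.path))'],
    capture_output=True, text=True, env=env,
).stdout
# Directories from PYTHONPATH show up near the front of sys.path,
# whether or not they exist yet.
found = extra in out.splitlines()
```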
Where to Put Python Programs' Data, Sounds, etc.
This depends on the type of program. For master/slave programs, everything but sound files is read by the program itself in
Cluster Mode; sound files need to be read by the SoundRender program. When szgd
launches a program, it sets the current directory to the one containing the program. This means that if you put texture maps, .obj 3-D models,
and other kinds of data in the same directory as your program or a subdirectory thereof, as suggested above for Standalone Mode, you're good to go.
For sound files, you need to either (1) copy them into a directory that's listed in the Syzygy database variable SZG_SOUND/path, or (2) tell SoundRender where to find them by calling the framework method setDataBundlePath() as described in the Programming chapter.
Using ``dex`` With Python Programs
szgd
identifies Python programs by suffix and invokes the Python interpreter. All Python programs must have the suffix '.py'. To run a Python program named 'foo.py' with command-line arguments <args> on a
virtual computer named `vc`, type:
dex vc foo.py <args>
Gotchas
You should be very careful to put the line
from szg import *
before any imports of PyOpenGL modules in your Python code. This is because the PyOpenGL modules try to load the GLUT dynamic library themselves from their local installation and this may conflict with the version of GLUT linked to Syzygy.
The Python cPickle module (for persistent storage of Python objects, either in files or in strings) seems to be less cross-platform than advertised. Actually, we're not sure whether this is a platform or a version issue: a file generated with cPickle on a Windows machine running Python 2.2 can't be un-pickled on a Linux machine running Python 2.3. The Python master/slave framework uses cPickle in the setObject method to pack an arbitrary Python object into a string for transfer from master to slaves, so Python master/slave applications may run into trouble on mixed-platform or mixed-version clusters. We've run into a few other cases in which cPickle behaves strangely, throwing exceptions for no apparent reason; some of these can be avoided by using the slower pickle module instead (pickle is written in Python, cPickle in C). If security is a concern, you could use twisted.spread.banana and twisted.spread.jelly from the Twisted package instead. Pickle is a security risk, because an attacker could insert arbitrary commands in a string that would be executed on un-pickling; only use pickle if you control both ends of a transaction.
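If you do use pickling, one mitigation worth trying is to pin an explicit, text-based pickle protocol rather than relying on the default (this is a general Python suggestion, not something the Syzygy framework does for you). A minimal sketch using only the standard library:

```python
import pickle

def pack(obj):
    # Protocol 0 is ASCII-only and readable by every Python version,
    # which sidesteps some cross-version binary-format surprises.
    return pickle.dumps(obj, protocol=0)

def unpack(data):
    # Only unpickle data you produced yourself: unpickling can
    # execute arbitrary code, so never feed it untrusted input.
    return pickle.loads(data)

state = {"matrix": [1.0, 0.0, 0.0], "frame": 42}
assert unpack(pack(state)) == state
```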
Writing Python Input Device Drivers
Python input devices only work in Cluster Mode.
This feature is specific to the new (SIP) bindings. Tersely, you implement a sub-class of arPyDeviceServerFramework
, minimally overriding the
configureInputNode()
method. This method can do a few things:
- Install one or more driver objects in the input node (required). In a Python program, these should be either instances of arGenericDriver, if you're going to do all of the actual event generation in Python code, or arSharedLibInputDriver, if you want to load one of the C++ "ar..Driver" shared libraries. If it's an arGenericDriver, you'll want to save a reference to it.
- Install an event filter. Only really needed if you're loading C++ shared-library drivers.
- Add a network input source. This is what allows daisy-chaining of input devices to work.
In general you'll also want to have a loop that generates the actual events. Of course, these can be based on any information that Python can access: a real device via serial connection, a GLUT or wxPython GUI, or a web page, to name a few possibilities. There are a few simple examples in szg/python_sip/demo/inputdevices:
- randombuttons.py just sends random button events.
- bird_and_wand.py loads two C++ dynamic libraries, and is most interesting as an example of how to do e.g. axis scaling and button re-mapping in Python.
- glutjoystick.py is a GLUT-based joystick simulator, to demonstrate how to start writing alternatives to the standard
inputsimulator program. In extending it you might want to look at PyUI.
- event_console.py opens a mini command prompt in which you can type commands to send input events.
To use one in the context of a virtual computer, you generally have to do three things:
- Make sure you have defined the Syzygy database variable SZG_PYTHON/executable for the computer it's on.
- Place it in a directory on your SZG_PYTHON/path (or one level down, see Path Configuration).
- Add it to a virtual computer input map, e.g.:
virtual_computer SZG_INPUT0 map this_computer/glutjoystick.py
Then dex virtual_computer <app>
should cause the Python input device to start on this_computer.
Sound
Thanks to Camille Goudeseune for creating Syzygy sound support.
Syzygy sound support is based on the FMOD Sound System, copyright © Firelight Technologies Pty, Ltd., 1994-2006.
Sound in Syzygy is currently based on the no-longer-supported distributed scene graph. Eventually this will change.
If you are writing your own custom code instead of using the arMasterSlaveFramework objects (as all the demos included with this distribution do), first call dsSetSoundDatabase() before issuing any other API calls. The framework objects hide this detail.
arSoundDatabase soundDatabase;
dsSetSoundDatabase(&soundDatabase);
(The prefix "ds" used by all functions in this API stands for "distributed sound". The distributed graphics API uses "dg" by analogy.)
An arSoundDatabase object receives a stream of data records causing it to alter its internal contents, just like arGraphicsDatabase. It similarly manages a tree of objects derived from class arSoundNode. Like arGraphicsNode, arSoundNode's three methods receiveData(), render(), and dumpData() respectively receive, play via a low-level sound library, and send sound data.
Sound state is synchronized across multiple machines by replicating arSoundDatabase objects on each machine. The master machine has an arSoundServer, while slaves have arSoundClients. (Only clients make sound API calls, just like only graphics clients make OpenGL calls.) The mechanism of synchronization is analogous to that of graphics.
If your syzygy program renders graphics and sound, you must build a tree of arSoundNodes somewhat in parallel to the tree of arGraphicsNodes. This is arguably cumbersome if the trees are "identical", but more often the sound tree looks like a subset of the graphics tree (you can't hear everything you see), with a few extra leaves near the root for ambient sounds not attached to particular visual objects.
Such a tree would start much like the graphics tree:
int transformNodeID = dsTransform("world", "root", theMatrix);
The ID returned by dsTransform() will differ from that returned by dgTransform(), so use a different variable to store it! The demos like src/demo/cubes/cubes.cpp use arrays of size 2 to store IDs which are "duplicates". You are of course free to use your own naming convention instead.
To modify the node's matrix, dsTransform(int ID, arMatrix4 theMatrix) behaves just like dgTransform(). If you want an arSoundNode to stay "attached" to a corresponding arGraphicsNode, make sure to call dsTransform() and dgTransform() together.
The only other important API call is
dsLoop(int ID, string filename, int fLoop, float loudness, arVector3 xyz)
Create the loop node as a child of a dsTransform() node; the parent node's matrix defines the loop node's coordinate system, just like a scene graph for graphics. dsLoop() plays a sound file "filename" (of format .wav or .mp3). Loudness is scaled by the scalar "loudness", 0 = silent, 1 = unity gain. The sound's 3D coordinates are given by "xyz". (The listener's position is handled by the arMasterSlaveFramework object.)
Having created a loop node, this call modifies the sound "loop" playing at that point. "fLoop" can have one of three values.
- 1: Start looping the sound continuously.
- -1: Trigger the sound (play it exactly once).
- 0: If it was looping, stop looping immediately.
3-D Object Files
Objects in Master/slave Programs
The Syzygy object-importing code was originally written to work only using the no-longer supported scene graph. This means that while you can use most of these objects in master/slave applications via the scene graph owned by each master/slave framework, it can be a pain in the behind.
We are gradually factoring the code to allow it to be used more simply without using the scene graph. So far only the OBJ-format objects have been liberated.
arOBJRenderer
The arOBJRenderer allows you to easily read and render an OBJ file in a master/slave program. See the Wavefront OBJ section for information about the format. Simply:
- Instantiate it:
arOBJRenderer myObj;
- Read in the file:
if (!myObj.readOBJ( const string& fileName, const string& subdirectory, const string& dataPath )) oops;
The dataPath argument can be a semicolon-delimited list of directory paths. You should read the file in both master and slaves.
- Draw it:
myObj.draw();
New 07/08: arOBJRenderers render much more quickly now, because texture maps are mip-mapped by default. Also, if you want to load one into an OpenGL display list, you should call the new activateTextures() method first:
myObj.activateTextures();
int dl = glGenLists(1);
if (dl == 0) { oops; }
glNewList( dl, GL_COMPILE );
myObj.draw();
glEndList();
This causes any texture maps to be downloaded to the graphics card, which you want to do before compiling a display list; otherwise, the action of loading the texture to the card will be compiled into the display list, resulting in slower performance than if you hadn't used a display list at all.
Other optional methods:
string arOBJRenderer::getName();
int arOBJRenderer::getNumberGroups();
void arOBJRenderer::clear();
void arOBJRenderer::normalizeModelSize();
float arOBJRenderer::getIntersection( const arRay& theRay );
arBoundingSphere arOBJRenderer::getBoundingSphere();
arAxisAlignedBoundingBox arOBJRenderer::getAxisAlignedBoundingBox();
arOBJGroupRenderer* arOBJRenderer::getGroup( unsigned int i );
arOBJGroupRenderer* arOBJRenderer::getGroup( const string& name );
Of particular interest are the getGroup()
and getBoundingSphere()
methods. The former returns an arOBJGroupRenderer that can be
rendered separately from the parent object (i.e. if an obj file contains named groups of faces, you can render the groups individually). The arOBJGroupRenderer object also has a getBoundingSphere()
method. These allow you to
do
frustum culling:
// draw callback
float modView[16];
float proj[16];
glGetFloatv( GL_MODELVIEW_MATRIX, modView );
glGetFloatv( GL_PROJECTION_MATRIX, proj );
arMatrix4 frustumMatrix = arMatrix4( proj ) * arMatrix4( modView );
if (myObj.getBoundingSphere().intersectViewFrustum( frustumMatrix )) {
  myObj.draw();
}
Note that getBoundingSphere() is an expensive operation, so normally you would compute the arBoundingSphere once, cache it, and use its
transform()
method to apply any constant transformations.
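The caching idea is simple enough to sketch outside of Syzygy. The following standalone Python is illustrative only (BoundingSphere and compute_bounding_sphere here are toy stand-ins, not the Syzygy classes): compute the sphere once, then apply a rigid transformation to it each frame instead of recomputing it.

```python
import math

class BoundingSphere:
    def __init__(self, center, radius):
        self.center = center   # (x, y, z)
        self.radius = radius

    def transform(self, matrix):
        # Apply a 4x4 row-major rigid-body matrix (rotation + translation)
        # to the center; a rigid transform leaves the radius unchanged.
        x, y, z = self.center
        cx = matrix[0]*x + matrix[1]*y + matrix[2]*z  + matrix[3]
        cy = matrix[4]*x + matrix[5]*y + matrix[6]*z  + matrix[7]
        cz = matrix[8]*x + matrix[9]*y + matrix[10]*z + matrix[11]
        return BoundingSphere((cx, cy, cz), self.radius)

def compute_bounding_sphere(points):
    # Expensive part: do this once and cache the result.
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    cz = sum(p[2] for p in points) / len(points)
    r = max(math.dist((cx, cy, cz), p) for p in points)
    return BoundingSphere((cx, cy, cz), r)

# Cache once...
sphere = compute_bounding_sphere([(0, 0, 0), (2, 0, 0), (0, 2, 0), (0, 0, 2)])
# ...then each frame just move it along with the object.
translate = [1, 0, 0, 5,
             0, 1, 0, 0,
             0, 0, 1, 0,
             0, 0, 0, 1]  # move +5 in x
moved = sphere.transform(translate)
```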
Supported Object Formats
Wavefront OBJ
Syzygy supports most of the official OBJ spec, and tries to fix some of the inconsistencies that programs tend to stick into exported files. A simple OBJ file looks like this:
# myfile.obj
o object_name
v -1 -1 -1
v -1 -1 1
v -1 1 -1
v -1 1 1
f 1 2 3
f 2 3 4
...
Syzygy supports shading groups, normals, texture coordinates, material files, object names, and convex polygons. It does not support splines or raytracing options.
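For a concrete sense of the format, here is a minimal standalone parser for the v/f subset shown above. This is an illustration only, not Syzygy's loader, and it ignores everything beyond plain vertices and faces:

```python
def parse_obj(text):
    """Parse the v/f subset of Wavefront OBJ.

    Returns (vertices, faces); faces are 0-based index tuples.
    OBJ indices are 1-based, hence the -1 below.
    """
    vertices, faces = [], []
    for line in text.splitlines():
        parts = line.split()
        if not parts or parts[0].startswith('#'):
            continue
        if parts[0] == 'v':
            vertices.append(tuple(float(c) for c in parts[1:4]))
        elif parts[0] == 'f':
            # A face entry may be a v/vt/vn triple; keep only the vertex index.
            faces.append(tuple(int(p.split('/')[0]) - 1 for p in parts[1:]))
    return vertices, faces

sample = """\
# myfile.obj
o object_name
v -1 -1 -1
v -1 -1 1
v -1 1 -1
f 1 2 3
"""
verts, faces = parse_obj(sample)
```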
You can also specify an OBJ Material file, which usually ends in ".mtl". This file must be referenced from an .obj file via the "usemap" command. A material file will let you specify basic colors or textures for the .obj file. The format of an .mtl file is:
newmtl shaderName
Kd 0.5 0.5 0.5
Ka 0.1 0.1 0.1
Ks 1.0 1.0 1.0
map_Kd texturefilename.ppm
newmtl ...
Where shaderName is the name of the corresponding shading group in the .obj file, Kd is the diffuse coefficient (base color), Ka is the ambient coefficient (background or fill light), and Ks is the specular coefficient (color of highlight). Currently the Ns, or specular power term, is unused since we are using basic OpenGL for rendering. If you specify a map with map_Kd, the texture specified will be used instead of Kd. All .mtl file parameters are optional, and have consistent default values.
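To make the field meanings concrete, here is a minimal standalone sketch that reads the fields described above into dictionaries keyed by shader name. This is an illustration only, not Syzygy's .mtl loader:

```python
def parse_mtl(text):
    """Parse newmtl blocks with Kd/Ka/Ks color triples and map_Kd."""
    materials = {}
    current = None
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        key = parts[0]
        if key == 'newmtl':
            current = {}
            materials[parts[1]] = current
        elif current is None:
            continue  # ignore fields before the first newmtl
        elif key in ('Kd', 'Ka', 'Ks'):
            current[key] = tuple(float(c) for c in parts[1:4])
        elif key == 'map_Kd':
            current['map_Kd'] = parts[1]
    return materials

mats = parse_mtl("""\
newmtl shaderName
Kd 0.5 0.5 0.5
Ka 0.1 0.1 0.1
Ks 1.0 1.0 1.0
map_Kd texturefilename.ppm
""")
```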
Motion Analysis HTR
Used with our Motion Analysis motion capture setup and software, an HTR specifies a series of transformations frame by frame to define animations. By calling arHTR's attachMesh with an additional boolean true value, the HTR will use randomly colored line segments as its "bones":
((arHTR*)myObject)->attachMesh(my_name, my_parent, 1);
3D Studio format
We use lib3ds to read in 3ds files, so we can only use versions 3 and 4. If your .3ds file doesn't display or you see an error message when trying to read in the file, this is probably the reason. Simply convert it to version 3 or 4 (there are several free utilities out there), and the new .3ds should load into Syzygy. Basic materials and normals are supported.
Navigation
Background
There are two levels of support for navigation. First, math/arNavigationUtilities.{h,cpp} contains tools for setting and reading a global navigation matrix and for converting points, vector offsets, and matrices between input coordinates and navigation coordinates. You can use these routines even if you're not using one of the application frameworks. Second, the application frameworks also provide a navigation interface that handles automatic event-processing for navigation. This interface is currently based on the routines in arNavigationUtilities. New in 1.3: If you use the framework navigation interface, you can also control your application's navigation from a matrix event from the input event stream, e.g. from a Python input device driver or script.
The drawback is that you can't have any transformations between the navigation matrix and each object's placement matrix (placement = position + orientation), or transformations between input and navigation coordinates will fail. In other words, you can't have a transformation tree in which an object's placement is specified relative to a parent object; instead, all object placement matrices have to be computed with respect to a global coordinate system. We plan to eventually use the Syzygy graphics database (which each framework maintains a copy of) to control a transformation tree, with tools to calculate transformations between arbitrary nodes. Until then, so sorry, no transformation hierarchies if you want to use the Syzygy navigation and interaction tools.
arNavigationUtilities
These routines manipulate a global navigation matrix. The matrix itself is hidden inside a namespace (arNavigationSpace) and the accessor routines make use of a mutex to prevent multiple simultaneous accesses. Conceptually, you can treat the navigation matrix just like an object's placement matrix; for example, to translate the viewpoint to (0,0,-5) you would set the nav. matrix to ar_translationMatrix(0,0,-5). The inverse of the navigation matrix should be loaded into the OpenGL modelview matrix stack prior to rendering (the frameworks have routines for doing this).
void ar_setNavMatrix( const arMatrix4& matrix );
sets the navigation matrix.
arMatrix4 ar_getNavMatrix();
returns the navigation matrix.
arMatrix4 ar_getNavInvMatrix();
returns the inverse of the navigation matrix. This is computed once, then not recomputed again until the first request after one of the other routines changes the navigation matrix.
The following two routines modify the nav. matrix by appending a translation or a rotation.
void ar_navTranslate( const arVector3& vec );
void ar_navRotate( const arVector3& axis, float degrees );
These routines convert a placement matrix, a point, or a vector offset between input and navigation coordinates (a vector offset gets rotated but not translated).
arMatrix4 ar_matrixToNavCoords( const arMatrix4& matrix );
arVector3 ar_pointToNavCoords( const arVector3& vec );
arVector3 ar_vectorToNavCoords( const arVector3& vec );
arMatrix4 ar_matrixFromNavCoords( const arMatrix4& matrix );
arVector3 ar_pointFromNavCoords( const arVector3& vec );
arVector3 ar_vectorFromNavCoords( const arVector3& vec );
Framework-mediated Navigation
The application frameworks provide a simple interface for automatically converting input events into navigation commands. Navigation commands are handled by the arNavigationUtilities; input-event conversion is handled by the classes in the interaction directory. New in 1.3: You can set a couple of Syzygy database parameters that will make the framework copy a specific matrix event from the input stream into the navigation matrix, allowing you to externally script your navigation.
Levels of behaviors
Three levels of behaviors are available:
- The frameworks have built-in default behaviors.
- These behaviors can be modified by setting a few parameters in the Syzygy database.
- The first two sets of behaviors can be overridden by calling framework methods from within an application.
All of these are ignored if you set the database parameters that tell the framework to set the navigation matrix from the input event stream.
Default Behavior Modification
To determine behavior, you specify two things: the nature of the behavior and the condition required to trigger it. Two types of navigation behaviors are currently available. They are based on the current state of an input event, e.g. in the case of viewpoint translation you might hold a joystick at a fixed angle to travel in a fixed direction at a fixed speed.
Behavior Types
- Translation at constant speed along the x, y, and z axes. If the input device is tracked, the axes rotate along with the device, i.e. you point the device axis in the direction you want to go. If you're using an axis event to control the behavior (which can return negative values), then negative values will translate in the opposite direction.
- Rotation at a constant rate about the y axis. In this case y always refers to the vertical axis, no matter how the input device is oriented.
Triggering Conditions
A triggering condition consists of an event type, an event index, and a threshold value. For example, a condition of axis/0/0.2 would mean that the specified behavior would happen whenever the absolute value of axis event 0 exceeded 0.2. //Each condition is attached to a single behavior; attaching a new behavior to a condition removes the old one//. Currently the converse also holds: each behavior can only be attached to one condition at a time; that restriction will probably change in the near future. Multiple behaviors can be active simultaneously if their triggering conditions are all met.
Translation and rotation speeds are specified in feet/sec. and degrees/sec, respectively. Note that the translation and rotation behaviors use actual time measurements, so the speeds should be independent of frame rate. Note also that both speeds are scaled by the actual value of the triggering event, provided that value is between the threshold value and 1.0; in other words, if you're using a joystick that returns values that vary depending on how far you move it, the speed will vary accordingly, provided the driver scales the values to fall between 0 and 1.
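The per-frame displacement computation described above can be sketched as a small function. The name and signature are hypothetical; this only illustrates the threshold test, the event-value scaling, and the frame-rate independence via elapsed time:

```cpp
#include <cmath>

// Hypothetical sketch of a per-frame nav translation increment: the
// displacement is speed (ft/sec) * elapsed time, scaled by the event
// value when its magnitude lies between the trigger threshold and 1.0.
float navTranslationStep(float eventValue, float threshold,
                         float speedFeetPerSec, float dtSeconds) {
    float mag = std::fabs(eventValue);
    if (mag < threshold)
        return 0.0f;              // condition not triggered
    if (mag > 1.0f)
        mag = 1.0f;               // driver values assumed scaled to [0,1]
    float sign = (eventValue < 0.0f) ? -1.0f : 1.0f;
    return sign * mag * speedFeetPerSec * dtSeconds;
}
```

With a 5 ft/sec speed, a half-deflected joystick (value 0.5) moves you 0.25 ft during a 0.1-second frame; a negative axis value moves you the opposite way.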
Using framework-mediated navigation
To use these behaviors, you need only do the following:
- If you're creating an application using the Distributed Scene Graph Framework (see Programming), use the framework.getNavNodeName() method to get the name of the navigation matrix node and attach all of your object nodes to it.
- Call framework.navUpdate() to process input events and update the nav. matrix. In a scene graph application, call it just before framework.loadNavMatrix(); in an app. based on the Master/Slave Framework (see Programming), call it in the preExchange() callback.
- Call framework.loadNavMatrix() before rendering. In a scene graph app. call it just after framework.navUpdate() and before framework.setViewer() and setPlayer(); in a master/slave app call it near the beginning of the draw() callback.
For a scene graph application example, see demo/cubes; master/slave apps. that use framework-mediated navigation are demo/atlantis and demo/coaster.
Framework default behaviors
By default, the condition axis/1/0.2 triggers translation along the negative z axis (forwards) and axis/0/0.2 translates along the x axis.
Database parameters
Setting the following database parameters (on the control machine for a scene graph app., on the master machine for a master/slave app) will modify the default behaviors.
SZG_NAV/x_translation sets the trigger condition for translation in x. The first field must be either "axis" or "button", the second a nonnegative integer, and the third a floating-point value between 0 and 1, e.g. "axis/0/0.2". y_translation, z_translation, and y_rotation set the trigger conditions for the other behaviors analogously. You cannot initiate the world-rotation behavior from the database; it has to be activated by the application.
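A parser for these condition strings might look like the following sketch. The NavCondition struct and function name are hypothetical, not the framework's internal code; the '/'-delimited format and value constraints match the description above:

```cpp
#include <sstream>
#include <string>

// Hypothetical parser for a trigger-condition string like "axis/0/0.2":
// event type / event index / threshold.
struct NavCondition {
    std::string type;  // "axis" or "button"
    unsigned index;
    float threshold;
};

bool parseNavCondition(const std::string& s, NavCondition& out) {
    std::istringstream is(s);
    std::string type, indexField, threshField;
    if (!std::getline(is, type, '/') ||
        !std::getline(is, indexField, '/') ||
        !std::getline(is, threshField))
        return false;
    if (type != "axis" && type != "button")
        return false;
    out.type = type;
    out.index = static_cast<unsigned>(std::stoul(indexField));
    out.threshold = std::stof(threshField);
    return out.threshold >= 0.0f && out.threshold <= 1.0f;
}
```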
SZG_NAV/translation_speed and rotation_speed set the speed of the translation behavior (in feet/sec.) and the rotation behavior (in deg/sec) respectively. The translation speed is scaled by the framework's unit conversion factor (provided that was set before calling framework.init()), meaning that the translation speed in a particular application will correspond to the number of application units/second that map onto 5 feet/sec in input units (trust me, it makes sense).
SZG_NAV/effector allows you to specify the input event ranges to use for navigation. This is a 5-element '/'-delimited string of 0-or-positive integers. The first element is the matrix index of the tracking device attached to the navigation device. Elements 2 and 3 are the number of buttons and the starting button index, elements 4 and 5 are the number of axes and the starting axis index. This parameter defaults to 1/0/0/2/0, in other words, use matrix #1 (#0 is generally assumed to be attached to the head), no buttons, and axes 0 and 1 for navigation.
New in 1.3: SZG_NAV/use_nav_input_matrix and SZG_NAV/nav_input_matrix_index let the framework copy a matrix from the input event stream directly into the navigation matrix; see Externally-scripted Navigation below.
Framework methods
Calling the following framework methods from within an application will override the two levels above.
bool setNavTransCondition( char axis, arInputEventType type, unsigned int index, float threshold );
and
bool setNavRotCondition( char axis, arInputEventType type, unsigned int index, float threshold );
set a translation or rotation condition, e.g.
framework.setNavTransCondition( 'z', AR_EVENT_AXIS, 1, 0.2 );
void setNavTransSpeed( float speed );
void setNavRotSpeed( float speed );
set the translation and rotation speed in application units/sec and degrees/sec, respectively. When set via these methods, the value isn't scaled by the framework's unit conversion factor, but of course you can always do that yourself.
void setNavEffector( const arEffector& effector );
allows you to replace the navigation effector (a representation of the input device used for navigation).
void ownNavParam( const string& paramName );
tells the framework that the specified parameter should not be reloaded from the database (clobbering the value we've just set in code) in response to a "reload" message. The name is the database parameter name without the SZG_NAV/ prefix, e.g.
framework.ownNavParam( "translation_speed" );
Externally-scripted Navigation
New in 1.3: Setting SZG_NAV/use_nav_input_matrix to "true" tells the framework to copy a matrix from the input event stream into the navigation matrix. The matrix event index to copy is specified by the value of SZG_NAV/nav_input_matrix_index, which defaults to 2.
As an example, suppose you have several applications, and for a demo you want the viewpoint in each to orbit the origin at a fixed distance while always facing it. You could write a Python input driver that generates a stream of matrix events with index 2, computed as in orbit.py.
Interaction
Thanks to Jim Crowell for creating the Syzygy interaction code support.
This page documents the classes for handling user interaction with virtual objects--more generally, conversion of input events to virtual world events. Virtual world events are divided into two categories: changes to an object's placement matrix (placement = position + orientation) and everything else. These classes are designed to provide a reasonably simple yet flexible way to handle this conversion, with some common behaviors encapsulated in the supplied classes.
Concepts & Classes
An interactable is an object capable of receiving virtual world events. These are instantiated in subclasses of the arInteractable abstract class. You can either create your own arInteractable subclasses or use the provided arCallbackInteractable class, which allows you to install function pointers to implement special behaviors.
An effector is a representation of a mobile physical input device, such as our wireless gamepad with a tracking sensor attached. A given input device can be represented by more than one effector if different parts of the device are used for different functions. These are represented by the arEffector class.
An interaction selector is an algorithm for determining which (if any) of a set of interactables a given effector will interact with. It's basically a distance measurement, the idea being that the effector will select the interactable that is closest to it by some measure. These are represented by subclasses of the abstract arInteractionSelector class. Examples are arDistanceInteractionSelector, which selects based on the Euclidean distance between effector and interactable, and arAlwaysInteractionSelector, which always allows interaction with any object it comes across (for distance-independent interaction).
A drag behavior is a placement-matrix-altering virtual world event. An example would be maintaining a fixed relationship with an effector over time as the effector is waved around. These are instantiated in subclasses of the abstract arDragBehavior class. An example of this is the arWandRelativeDrag, which implements the behavior just described.
A triggering condition or grab condition is a condition on the input event stream that must be satisfied for a drag behavior to be activated. These are represented by the arGrabCondition class. The only remaining class to mention is the arDragManager, which keeps track of which triggering conditions activate which drag behavior, and determines which drag behaviors should currently be active. Interactables and effectors both have drag managers. By default, the effector's drag manager is used, but this setting can be overruled on an interactable-by-interactable basis.
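The selection idea behind arDistanceInteractionSelector can be sketched in a few lines. The types below are hypothetical stand-ins, not the Syzygy classes; they only show "pick the closest candidate within a maximum range":

```cpp
#include <cmath>
#include <vector>

// Hypothetical sketch of distance-based interaction selection: among a
// set of candidate positions, pick the one closest to the effector's
// hot spot, provided it falls within maxRange. Returns -1 if nothing
// qualifies.
struct Vec3 { float x, y, z; };

float distance(const Vec3& a, const Vec3& b) {
    return std::sqrt((a.x - b.x) * (a.x - b.x) +
                     (a.y - b.y) * (a.y - b.y) +
                     (a.z - b.z) * (a.z - b.z));
}

int selectClosest(const Vec3& hotSpot, const std::vector<Vec3>& objects,
                  float maxRange) {
    int best = -1;
    float bestDist = maxRange;
    for (int i = 0; i < (int)objects.size(); ++i) {
        float d = distance(hotSpot, objects[i]);
        if (d <= bestDist) { bestDist = d; best = i; }
    }
    return best;
}
```

An arAlwaysInteractionSelector would amount to the same loop with an infinite maxRange.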
Usage Example: cubes.cpp
The following is a fairly detailed explanation of some snippets taken from src/demo/cubes.cpp. This is a Distributed Scene Graph application in which a cubical space around the user is filled with small objects of various shapes that rotate and change textures at random. The effect of the interaction code is to define a visible virtual wand that the user can wave around and use to drag any of the virtual objects with. When an object is dragged within two feet of the head, it temporarily takes the texture of the wand. We use the arCallbackInteractable class to mediate interaction with each of the objects; this is the easiest way to add interaction to a program that wasn't designed with interaction in mind.
Declarations
arCallbackInteractable interactionArray[NUMBER_CUBES];
std::list<arInteractable*> interactionList;
We define an array of arCallbackInteractables, one for each virtual object, and a list to hold a pointer to each element of the array.
arEffector dragWand( 1, 6, 2, 2, 0, 0, 0 );
arEffector headEffector( 0, 0, 0, 0, 0, 0, 0 );
We define two arEffectors to drive the interactions. Going through the constructor arguments for dragWand: it uses input matrix event #1 to determine its position and orientation. It maintains information about 6 button events. These start at input button event #2 and can be extracted using indices starting with 2; in other words, this effector will receive a copy of button events 2-7 which can be extracted from the effector using their input indices.
This requires a bit of explanation. An arEffector has the capability to remap the indices of ranges of input events. Consider the following scenario: The user is wearing two data gloves which you desire to function in exactly the same way. For example, you might map certain gestures into button events, and you want the same gesture to allow you to grab an object with either hand. This gesture will probably be mapped onto different button events in the input stream, depending on the originating hand. You can use the arEffector to remap these different events onto the same button event. E.g., if you'd defined 5 distinct gestures for each hand, corresponding to button events 0-4 for the right hand and 5-9 for the left, and the two hands were assigned placement matrices 1 and 2, then you might declare:
arEffector rightHand( 1, 5, 0, 0, 0, 0, 0 );
arEffector leftHand( 2, 5, 5, 0, 0, 0, 0 );
You would then be able to access e.g. button event #0 using rightHand.getButton(0) and button event #5 using leftHand.getButton(0) (which is a Good Thing if you want them to have the same effect).
The last three numbers are the equivalent for axis events, so in all these cases we're specifying that we won't be needing any.
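The button-index remapping just described boils down to simple offset arithmetic. The function below is a hypothetical sketch of the bookkeeping, not arEffector's code; it maps an effector-local button index back to the index in the input stream:

```cpp
// Hypothetical sketch of arEffector-style button remapping: an effector
// captures `numButtons` events starting at input index `loInput` and
// exposes them at local indices starting at `loLocal`. Returns the
// input-stream index for a local index, or -1 if out of range.
int effectorButtonToInputIndex(int localIndex, int loInput,
                               int numButtons, int loLocal) {
    int offset = localIndex - loLocal;
    if (offset < 0 || offset >= numButtons)
        return -1;
    return loInput + offset;
}
```

For leftHand above (input start 5, local start 0), local button 0 maps to input button 5; for dragWand (input start 2, local start 2), local button 2 maps to input button 2.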
The headEffector uses input matrix #0 (assumed to be read from a sensor attached to the user's head) and has no buttons or axes.
Initialization
First, in main() we specify some more things about the effectors:
dragWand.setInteractionSelector( arDistanceInteractionSelector( 5 ) );
headEffector.setInteractionSelector( arDistanceInteractionSelector( 2 ) );
This says that we want the object to be interacted with to be selected on the basis of the minimum Euclidean distance, with a maximum interaction range of 5 feet for the dragWand and 2 ft. for the headEffector. Normally we'd use a smaller interaction range than 5 ft., but in this instance I wanted it to be easy to get this to work with the simulator interface program inputsimulator on a non-stereo-enabled display.
dragWand.setTipOffset( arVector3(0,0,-WAND_LENGTH) );
Here we specify that the effector's "hot spot" (the point used in computing the wand-object distance by the interaction selector) will not be right at the position indicated by the effector's placement matrix, but will instead be offset forwards by WAND_LENGTH (2 ft.).
dragWand.setDrawCallback( drawWand );
This is how we make the virtual wand visible. I won't put the code in here, but if you look in cubes.cpp you'll see that we've defined a textured rod object that measures .2 ft x .2 ft x WAND_LENGTH. drawWand is a pointer to a function that uses the scene-graph function dgTransform() to modify this rod's placement matrix such that it always extends between the tracked input device and the effector hot spot.
dragWand.setDrag( arGrabCondition( AR_EVENT_BUTTON, 2, 0.5 ), arWandRelativeDrag() );
dragWand.setDrag( arGrabCondition( AR_EVENT_BUTTON, 6, 0.5 ), arWandRelativeDrag() );
dragWand.setDrag( arGrabCondition( AR_EVENT_BUTTON, 7, 0.5 ), arWandRelativeDrag() );
Here we specify that we want the arWandRelativeDrag behavior to occur whenever the value of buttons 2, 6, or 7 exceeds 0.5. This behavior makes the dragged object maintain a fixed relationship to the effector hot spot; translating the hotspot translates the object, rotating the wand causes the object to rotate about the hotspot.
Next, in worldInit(), we hook up each interactable to a virtual object (more specifically, to the virtual object's placement matrix, since that's what we want to modify via interaction):
arCallbackInteractable cubeInteractor( dgTransform( cubeParent, navNodeName, arMatrix4() ) );
Each arCallbackInteractable has an ID field. Here in one swell foop we add a matrix node to the scene graph (attached to the navigation matrix node, but that's another chapter) and assign its ID (returned by dgTransform()) to the interactable.
cubeInteractor.setMatrixCallback( matrixCallback );
cubeInteractor.setMatrix( cubeTransform );
Here we set the interactable's matrix callback (a pointer to a function that gets called whenever the interactable's matrix is modified) to a function that copies the interactable's matrix into the database node with the interactable's ID. Then we go ahead and set the matrix to the pre-computed value.
cubeInteractor.setProcessCallback( processCallback );
Then we set the object's event-processing callback to a pointer to a function that changes the object's texture if it is touched by the headEffector and grabbed by the dragWand.
interactionArray[i] = cubeInteractor;
interactionList.push_back( (arInteractable*)(interactionArray+i) );
Finally, we copy the interactable into the array and push its address onto the end of the list.
Interaction Loop
Once cubes starts running, things happen in two distinct threads. In the main() thread, we:
headEffector.updateState( framework->getInputState() );
dragWand.updateState( framework->getInputState() );
dragWand.draw();
This copies the current state of the relevant input events from the framework into the effectors and then calls the draw callback installed above. In the worldAlter() thread, we use the interactables in two ways:
interactionArray[iCube].setMatrix( randMatrix );
When not being interacted with, the virtual objects rotate randomly. We have to do all placement matrix manipulations via the corresponding interactable to ensure that the two placement matrices remain in sync with one another, so all modifications to the object's placement matrix are accomplished using the interactable's setMatrix() method.
ar_pollingInteraction( dragWand, interactionList );
ar_pollingInteraction( headEffector, interactionList );
Finally, we handle user interaction with virtual objects by each effector; this is where most of the work gets done. The function ar_pollingInteraction() is contained in arInteractionUtilities.cpp. There are two versions of it, the one used here which accepts a list of pointers to interactables as its second argument and another which accepts the address of a single interactable. It will be worthwhile to explain in some detail what this function does. First, a couple more concepts:
An object is touched when it has been selected for interaction by an effector's interaction selector. If it was not touched on the previous frame, then its touch() method is called; this in turn calls the virtual protected _touch() method, which you need to define in an arInteractable subclass. In the arCallbackInteractable subclass, this calls the optional touchCallback that you've installed. If, on the other hand, an object was touched on the last frame but isn't any longer, then its untouch() method is called (which analogously calls your _untouch() method or untouchCallback). An interactable can be touched by multiple effectors simultaneously; it can even be touched by one effector while it is grabbed by another (see below).
When an object is touched, it determines whether it satisfies a grab condition, either its own or the effector's depending on how the useDefaultDrags flag is set. If so, it //requests a grab// from the effector; if that succeeds, the object is grabbed by that effector. That object remains locked to that effector (the effector is forced to interact with that object only, and the object can't be grabbed by another effector) until any relevant grab conditions fail. While the object is grabbed, its placement matrix is modified by the drag behaviors associated with any active grab conditions. The object remains grabbed even if it gets far enough away from the effector that it would ordinarily no longer be touched. In fact, this was the initial reason for locking the effector and object together during a grab; if the grab were based only on a distance computation, then it would be possible to lose hold of an object by dragging it too quickly.
Finally, provided the object is still touched after all the rest of this occurs, the interactable's virtual _processInteraction() method is called (which calls the optional processCallback if it's an arCallbackInteractable).
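The grab-locking rule (effector and object locked together until the grab condition fails) can be sketched with two tiny structs. These are hypothetical stand-ins, not arEffector/arInteractable; they only capture the mutual-exclusion logic:

```cpp
// Hypothetical sketch of grab locking: once an effector grabs an object,
// the two are locked together until the grab is released, so a second
// effector's grab request on the same object fails.
struct Interactable;  // forward declaration

struct Effector {
    Interactable* grabbed = nullptr;
};

struct Interactable {
    Effector* grabbedBy = nullptr;
};

bool requestGrab(Effector& e, Interactable& obj) {
    if (obj.grabbedBy && obj.grabbedBy != &e)
        return false;           // object locked to another effector
    if (e.grabbed && e.grabbed != &obj)
        return false;           // effector already holding something else
    e.grabbed = &obj;
    obj.grabbedBy = &e;
    return true;
}

void releaseGrab(Effector& e) {
    if (e.grabbed) {
        e.grabbed->grabbedBy = nullptr;
        e.grabbed = nullptr;
    }
}
```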
Back to ar_pollingInteraction(). It behaves as follows:
(1) Check to see if this effector is grabbing an object. If it is but that object isn't one of the ones passed, return; if it is one of the ones passed, select it for interaction and interact with it, then return.
(2) Check to see if any passed object gets touched. If no object is touched but one was on the previous frame, or one is touched but it's not the same object the effector was touching on the last frame, call the previously-touched object's untouch() method.
(3) If there's a touched object, call its processInteraction() method. This first determines whether or not it was touched on the previous frame; if not, it calls the object's own touch() method. It then goes through the sequence described just above.
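The three steps above can be condensed into a sketch. The Object and Wand structs are hypothetical one-dimensional stand-ins, not the Syzygy classes; only the control flow (honor an existing grab, else select the closest object in range, then issue touch/untouch transitions) mirrors ar_pollingInteraction():

```cpp
#include <cmath>
#include <list>

// Hypothetical condensed sketch of the ar_pollingInteraction() control flow.
struct Object {
    float pos;
    bool touched;
    Object(float p) : pos(p), touched(false) {}
    void touch()   { touched = true; }
    void untouch() { touched = false; }
};

struct Wand {
    float pos;
    Object* grabbed;
    Object* lastTouched;
    Wand(float p) : pos(p), grabbed(0), lastTouched(0) {}
};

void pollInteraction(Wand& wand, std::list<Object*>& objects, float range) {
    Object* target = 0;
    if (wand.grabbed) {
        // (1) A grabbed object stays selected regardless of distance.
        for (Object* o : objects)
            if (o == wand.grabbed) target = o;
        if (!target) return;       // grabbed object isn't in this list
    } else {
        // (2) Otherwise select the closest object within range.
        float best = range;
        for (Object* o : objects) {
            float d = std::fabs(o->pos - wand.pos);
            if (d <= best) { best = d; target = o; }
        }
    }
    if (wand.lastTouched && wand.lastTouched != target)
        wand.lastTouched->untouch();
    // (3) Touch the newly-selected object, if any.
    if (target && !target->touched)
        target->touch();
    wand.lastTouched = target;
}
```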
Portability Layer
Syzygy includes a portability layer which strives to make writing cross-platform Unix/Win32 applications as simple as possible. One Syzygy design goal is to ensure that as few platform-specific #ifdef's as possible occur in layers of code above this one.
Network Sockets
The arSocket and arUDPSocket classes provide a portable sockets API. The arSocket class wraps the native TCP socket implementation on each platform, hiding the minor differences in function names or call signatures between Win32, Linux, Darwin, and Irix. When you create an arSocket object, you must specify whether it will be used to accept connections (AR_LISTENING_SOCKET) or transmit data (AR_STANDARD_SOCKET). For example:
arSocket* acceptSocket = new arSocket(AR_LISTENING_SOCKET);
arSocket* dataSocket = new arSocket(AR_STANDARD_SOCKET);
Accepting a new connection via a socket looks like this:
acceptSocket->ar_accept( dataSocket );
Each arSocket object has an associated numerical ID that is set by the programmer. The intent is that a manager object should be able to use these IDs to manipulate a set of sockets.
Some socket options are also set via class methods:
bool arSocket::setReceiveBufferSize(int size)
Sets the size of the TCP receive buffer.
bool arSocket::setSendBufferSize(int size)
Sets the size of the TCP send buffer.
bool arSocket::smallPacketOptimize(bool flag)
Disable Nagle's Algorithm iff flag is "true". Many TCP implementations enable Nagle's Algorithm, which reduces the performance of real-time applications that send small packets relatively slowly.
bool arSocket::reuseAddress(bool flag)
Only makes sense for a listening socket. If set to "true", then the socket can bind to a previously bound address.
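On POSIX platforms, smallPacketOptimize(true) amounts to setting the TCP_NODELAY option on the underlying descriptor. The sketch below shows the raw call, not the Syzygy source; the function name is hypothetical:

```cpp
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

// Sketch of what smallPacketOptimize(true) does on POSIX: disable
// Nagle's algorithm with TCP_NODELAY so small writes are sent
// immediately instead of being coalesced into larger segments.
bool disableNagle(int fd) {
    int flag = 1;
    return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY,
                      &flag, sizeof(flag)) == 0;
}
```

reuseAddress() similarly maps onto the SO_REUSEADDR socket option.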
Serial Ports
Syzygy provides the arRS232Port class for uniform access to serial ports. A subset of the possible port parameters is supported, and the range of parameter values varies slightly between platforms.
Construct an arRS232Port without any arguments. To open it, call:
bool arRS232Port::ar_open( const int portNumber, const unsigned long baudRate, const unsigned int dataBits, const float stopBits, const string& parity );
Port numbers start with 1 on both platforms, i.e. under Linux /dev/ttyS0 is port 1. The function returns a bool indicating success or failure. Currently supported parameter values are:
- baudRate: 9600, 19200, 38400, 57600, 115200 (to add additional values you will need to edit arRS232Port.cpp).
- dataBits: 4-8 (Win32); 5-8 (Linux).
- stopBits: 1,1.5,2 (Win32); 1,2 (Linux).
- parity : "none", "even", "odd", "mark", "space" (Win32); "none", "even", "odd", "space" (Linux).
To write to a serial port, use:
int arRS232Port::ar_write( const char* buf, const int numBytes );
which attempts to write numBytes bytes, or
int arRS232Port::ar_write( const char* buf );
which writes until a null character is reached. In either case, the function returns the number of bytes actually written or -1 on failure. Reading from a serial port is accomplished with:
int arRS232Port::ar_read( char* buf, const unsigned int numBytes );
On both platforms, this function will block until either a tenth of a second has passed or at least one character has been read. It repeats this step until either a user-specified timeout has been reached or numBytes bytes have been read. It returns the number of characters actually read. The timeout is set using:
bool arRS232Port::ar_setTimeout( const unsigned int timeout );
which takes a timeout value in tenths of a second and returns a bool indicating success or failure.
To flush any characters from the input and output buffers, use:
bool arRS232Port::ar_flushInput();
bool arRS232Port::ar_flushOutput();
To close the port, use:
bool arRS232Port::ar_close();
Threads
Threads are slightly different on Win32 and in the various Unix pthreads implementations. Syzygy has a common abstraction, arThread, that wraps the lowest-common-denominator features.
You create and start a thread as follows:
void threadFunction( void* threadData ) {
  <thread task>
}

arThread myThread;
void* threadData = <pointer to data you want the thread to access>;
myThread.beginThread( threadFunction, threadData );
Note that this differs from pthreads, where threads are allowed to return void*.
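The void-returning thread-function shape can be illustrated with standard C++ threads (this is std::thread, not Syzygy's arThread; the runOnce() wrapper is purely for demonstration):

```cpp
#include <thread>

// arThread-style usage sketched with std::thread: the thread function
// takes a void* and returns void, unlike a pthreads start routine.
void threadFunction(void* threadData) {
    int* counter = static_cast<int*>(threadData);
    *counter += 1;   // the thread task
}

int runOnce() {
    int counter = 41;
    std::thread t(threadFunction, static_cast<void*>(&counter));
    t.join();        // wait for completion so the result is visible
    return counter;
}
```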
Mutexes/Locks
There are two different but functionally-equivalent mutex classes. The newer, object-oriented arLock class is easier to use:
arLock myLock;
myLock.lock();                  // block until you get ownership.
bool isMine = myLock.tryLock(); // try to get ownership, but return immediately.
myLock.unlock();                // release the lock.
There is also the older arMutex class, which is used as follows:
arMutex myMutex;
ar_mutex_init( &myMutex );
ar_mutex_lock( &myMutex );
ar_mutex_unlock( &myMutex );
The main disadvantage is the requirement to call ar_mutex_init(); if you try to use an un-inited arMutex, your program will crash.
Condition Variables/Signals/Events
Syzygy presents a lowest-common denominator abstraction for signals and condition variables.
Signals are implemented in the arSignalObject class. An arSignalObject enters a signalled state when its sendSignal() method is called. It remains in a signalled state until either its reset() or receiveSignal() method is called, at which time it returns to unsignalled. receiveSignal() blocks until another thread calls sendSignal().
Syzygy condition variables, as implemented by arConditionVar, work like pthreads condition variables except that only a single waiting thread can be awakened by a signal() call.
There is also an arThreadEvent class, based on the EVENT class of Walmsley, "Multi-threaded Programming in C++".
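The wake-one-waiter semantics can be demonstrated with a standard C++ condition variable (this is std::condition_variable, not arConditionVar; the function names are for illustration only):

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>

// Sketch of the signal-one-waiter pattern: notify_one() wakes at most
// a single waiting thread, matching the semantics described above.
static std::mutex gMutex;
static std::condition_variable gCond;
static bool gReady = false;

int waitForSignal() {
    std::unique_lock<std::mutex> lock(gMutex);
    gCond.wait(lock, [] { return gReady; });  // guards against spurious wakeups
    return 1;
}

void signalOne() {
    {
        std::lock_guard<std::mutex> lock(gMutex);
        gReady = true;
    }
    gCond.notify_one();  // wakes one waiting thread
}
```

Note the predicate passed to wait(): without it, a spurious wakeup could release the waiter before the signal was actually sent.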
Time
Syzygy provides a uniform way to query the system time. It defines a time struct:
struct ar_timeval {
  int sec;
  int usec; // microseconds
};
It also provides functions for querying the time and calculating the difference of two times:
struct ar_timeval ar_time();
Returns the current system time.
double ar_difftime( struct ar_timeval t2, struct ar_timeval t1 )
Returns the number of microseconds from t1 to t2.
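A plausible implementation of ar_difftime(), consistent with the struct definition above (this is a sketch, not the Syzygy source):

```cpp
// Sketch of ar_difftime(): the result is (t2 - t1) in microseconds.
struct ar_timeval {
    int sec;
    int usec; // microseconds
};

double ar_difftime(ar_timeval t2, ar_timeval t1) {
    return (t2.sec - t1.sec) * 1e6 + (t2.usec - t1.usec);
}
```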
Shared random numbers in the master/slave framework
Background
The master/slave framework has a facility for generating pseudo-random numbers that is meant to be independent of whether it is called on a master or a slave. However, it is possible to use it in such a way that the random number sequences get out of sync for brief periods. This note describes its behavior and makes suggestions about how to avoid such errors. Please note that the framework will also detect such errors and print an error message to the console if they occur.
The following relevant data are transmitted from the master to the slaves on each frame:
- The current value of the random number seed. (_randomSeed)
- A flag indicating whether or not the seed has been set by the user. (_randSeedSet)
- The number of calls to the random number generator since the last data exchange. (_numRandCalls)
- The number generated by the last such call. (_lastRandVal)
On each data exchange, the slave compares these values against its own local ones:

_randSynchError = 0;
if (!_firstTransfer) {
  if (lastNumCalls != _numRandCalls)
    _randSynchError |= 1;
  if ((lastSeed != _randomSeed) && (!_randSeedSet))
    _randSynchError |= 2;
  if (tempRandVal != _lastRandVal)
    _randSynchError |= 4;
} else {
  _firstTransfer = 0;
}
_numRandCalls = 0;
Functions
void arMasterSlaveFramework::setRandomSeed( const long newSeed )
Sets the random seed on the master. The change takes effect only just before the next data transfer, so don't start generating random numbers too early, like in the framework's init() callback. A value of 0 for the seed is illegal and will be converted to -1 with a printed warning.
bool arMasterSlaveFramework::randUniformFloat( float& value )
Generates a new pseudo-random number using the ran1 algorithm from Numerical Recipes in C. Values are uniformly distributed in the unit interval, excluding 0 and 1 themselves. It returns a bool indicating if _randSynchError is 0. As side-effects, it (1) resets _randSynchError to 0, so there is at most one error message per frame; (2) modifies the seed; (3) increments a counter, which is reset to 0 after each master/slave data transfer.
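The reason identical seeds keep master and slaves in sync is that the generator is fully deterministic. The class below is the Park-Miller "minimal standard" generator that ran1 builds on, not the Syzygy implementation itself; it illustrates that two instances seeded identically produce identical sequences:

```cpp
// Park-Miller minimal-standard generator (the core of ran1), shown here
// only to illustrate determinism: same seed, same sequence. The seed
// must be nonzero (compare the framework's rejection of seed 0).
class MinStdRand {
public:
    explicit MinStdRand(long seed) : _state(seed) {}
    float next() {
        // state = (16807 * state) mod (2^31 - 1), in 64-bit arithmetic.
        _state = (16807LL * _state) % 2147483647LL;
        return static_cast<float>(_state) / 2147483647.0f;
    }
private:
    long long _state;
};
```

Run one instance on the "master" and one on a "slave" with the same seed and the streams match call for call; one extra call on either side desynchronizes everything after it, which is exactly the failure mode _numRandCalls detects.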
Getting into trouble
The random number sequences will become desynchronized under these conditions, and possibly others (mind you, the framework will tell you that you've gotten into trouble):
- Calling randUniformFloat() differently on masters and slaves. For example, this will desynchronize the master:
if (framework->getMaster()) dorandomstuff();
- Starting calls to randUniformFloat() based on user input in the preExchange() callback. In preExchange(), only the master sees user input, so this is a special case of the previous case. This will desynchronize:
preExchange() {
  if (framework->getButton(0))
    startRandom = true;
}

postExchange() {
  if (startRandom)
    startusingrandomnumbers();
}
Either move the body of preExchange() into postExchange(), or use the library function ar_randUniformFloat() to generate random numbers on the master for manual sharing with the slaves.
- Calling randUniformFloat() in more than one concurrent thread. Use ar_randUniformFloat() and share the generated data instead.
- Calling randUniformFloat() before all program instances are running. Because Syzygy is designed around the premise that no program instance should care whether or not other instances are running (provided there is a master instance somewhere), this can only be determined by the user. The user should be required to indicate by e.g. a button press that all instances are running before randUniformFloat() is called.