Yes, and more than that, they’ll meet in a virtual space where they
can work together, just as I can reach across this table and grab
your sugars. You’re not using them, are you?
No, not at all. Go ahead and take them. So you mean that users will
put on some kind of bulky head-mounted display and VR gloves so
that they can all be in the same virtual space?
Not likely. Some users will visualize together, physically, in a
Vision Dome or CAVE. Many will work with standard LCD and CRT monitors,
while still others will slip on a personal display, no more bulky
than a pair of sunglasses. And while the conventional mouse will
certainly give way to 3D input devices, these will be non-intrusive,
like wearing a watch.
Do the applications exist to support these exotic devices?
There certainly are applications that can utilize these devices,
but more importantly there are standards and toolkits that can exploit
these devices and upon which applications can be developed.
Standards are important. They allow applications to be easily extended
and to interoperate with other applications, not to mention the
standard benefit of reusable code. Hey, that was funny, "standard
benefit", the benefit of subscribing to a standard. Pretty
good pun, huh?
I think the Turkish coffee has heightened your sense of humor. But
seriously, what kind of standards are you talking about?
After many years of bickering and infighting over 3D standards such
as Direct3D, OpenGL, and a plethora of predecessors, the industry
is finally settling down to just a few.
Aren’t those standards pretty low level, concerned primarily with rendering?
Indeed they are, but they’re not the end of the road. These early
industry standards were concerned with rendering things like points
and lines and polygons in 3D space, lighting them, and applying textures.
They weren’t about 3D objects, like cars and houses and people and so on?
No, those would be constructed by the graphics programmer out of
the geometric elements.
Wouldn’t that take a lot of time then, to construct a whole scene?
It sounds like assembly language programming?
Precisely. And the graphics industry responded with a higher level
solution, often called scene management. They defined a data structure
called a scene graph that would represent the objects in a scene,
their geometrical relationships, and all their attributes.
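The scene graph just described can be sketched in a few lines of Python. This is a toy illustration, not any vendor’s API: a node carries a name, a local transform (reduced here to a translation), attributes, and children, and a depth-first traversal composes transforms down the tree.

```python
from dataclasses import dataclass, field

@dataclass
class SceneNode:
    """One node in a scene graph: a named object with a local
    translation, attributes such as color, and child nodes that
    inherit the parent's transform."""
    name: str
    translation: tuple = (0.0, 0.0, 0.0)
    attributes: dict = field(default_factory=dict)
    children: list = field(default_factory=list)

    def traverse(self, parent_pos=(0.0, 0.0, 0.0)):
        """Depth-first traversal, composing transforms down the tree;
        a renderer would walk the graph like this once per frame."""
        pos = tuple(p + t for p, t in zip(parent_pos, self.translation))
        yield self.name, pos, self.attributes
        for child in self.children:
            yield from child.traverse(pos)

# A toy scene: a car body with a wheel attached to it.
car = SceneNode("car", (10.0, 0.0, 0.0), {"color": "red"},
                [SceneNode("wheel", (1.0, -0.5, 0.0))])
for name, world_pos, attrs in car.traverse():
    print(name, world_pos)
```

Because the wheel is a child of the car, moving the car’s translation would move the wheel with it on the next traversal; that is the geometrical relationship the graph encodes.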
Sounds like a database to me.
In fact the scene managers are databases, but unlike databases used
in the corporate world for record keeping, these databases needed
to perform exceedingly quickly. For a user to navigate through a
virtual world at a smooth 30 frames per second the scene database,
as it’s sometimes called, would need to be traversed 30 times per second.
Wouldn’t disk access be much too slow?
Yes, much too slow. So these scene databases resided in memory,
not on disk, as optimized hierarchical structures. Very much like the hierarchical databases of old.
Hierarchical? These days only legacy systems and very high performance
applications with fixed access patterns use hierarchical databases.
There are many drawbacks to them and they’ve been all but eclipsed
by relational systems.
To achieve realistic frame rates on the processors of the time the
hierarchical structure was necessary, and pretty much remains necessary.
And so hierarchical scene managers became incorporated in almost every
3D viewer, browser, and application.
Whose hierarchical database system do these 3D applications use
to manage their scene data?
Each vendor has pretty much used its own scene management routines.
In many cases vendors like SGI have provided toolkits that sit on
top of the lower-level rendering engines, like OpenGL, a popular
low level API. These higher level toolkits supply the scene management
routines that the developer of a 3D application utilizes. Examples
from SGI include Open Inventor and Iris Performer. Many 3D applications,
including VR and VizSim, have been built with these toolkits.
So there is some degree of de facto standardization?
To a point, but nothing like what’s coming. SGI has partnered with
Microsoft on the Fahrenheit initiative. Over the next 2-3 years
Fahrenheit will provide a low-level rendering standard which will
replace Microsoft’s Direct3D, while OpenGL will continue on. But
Fahrenheit will also have a higher level "scene manager"
standard that will run on top of both OpenGL and the Fahrenheit
low level rendering system. And then there will be standard "extensions"
to the scene manager to handle specific needs, like large model
visualization needed by CAD vendors, large image handling used by
GIS developers, and VRML plug-ins for virtual reality.
So a single standard is emerging for 3D?
Not quite. While it’s true that many vendors of CAD, GIS, VizSim,
and VR products have endorsed the Fahrenheit initiative, and will
rewrite their applications to utilize Fahrenheit, a competing technology
from Sun Microsystems is also in the running.
That’s Java isn’t it?
A particular extension to Java known as Java3D – it’s part of the
Java Media Extensions. Java3D omits the low level rendering, turning
to OpenGL, say, for that task. But from there on up Java3D defines
its own way of managing scenes and providing the high level functions
that those special purpose Fahrenheit extensions do. In all, Sun
has defined Java3D to handle everything from video games to the
most powerful simulations and virtual environments. So there’s not
a single scene management standard.
What about VRML?
VRML will continue on as a common file format for representing a
3D scene in a portable vessel that can be moved from machine to
machine. The VRML community is also investing in standards compliance
so that a red apple will look the same shade of red regardless of
whose VRML viewer and what rendering software and hardware you’re
using. And in time most VRML viewers and products will sit on either
Fahrenheit or Java3D. So the VRML effort is complementary to Fahrenheit and Java3D.
So in a few years most 3D applications will be managing their scene
data in one of two scene managers. Then the collaborative applications
I mentioned a few minutes ago can share their data, with one application’s
scene database talking to another application’s scene database.
The only complication will be in translating between Fahrenheit and Java3D.
That would seem to be the case. Am I correct in assuming that these
scene databases in Fahrenheit and Java3D are real databases, hierarchical
though they are, but still offering support for transaction semantics,
multi-user concurrency, security, recovery, not to mention scalability
and support for distributed data?
Afraid not. As databases go these scene managers are rather immature
if not positively primitive. They’re just beginning to implement
concepts like locking, and that only at the high end with the large
model visualization extensions. So no, these two standards and their
scene managers cannot interoperate, even with translation. In fact,
there’s no scene graph level facility for one Fahrenheit scene graph
to talk to another, and the same goes for Java3D. But if I may directly
address our user’s question. Vendors seeking to support multiple
users all occupying the same 3D virtual world for collaboration
have turned to ad hoc methods outside of the scene database. And
unfortunately each vendor has used their own ad hoc methods. So
a user of one vendor’s technology cannot cross over into a virtual
world supported by another vendor’s technology. There’s no interoperability.
And the standards efforts, such as the "Living Worlds"
proposal in the VRML Consortium, seem to only be supported in products
from the submitter of the standard, which is Sony in the case of
the Living Worlds proposal.
Sounds like these 3D scene managers should manage their scene data
in commercial off the shelf data managers that comply with published
interface standards, like SQL and the ODMG languages for object
databases. Then they’d interoperate and offer all the benefits of
databased applications. For example, if 3D scene data was managed
in a scene database that provided recovery services, then should
a computer fail while users were collaborating, say modifying the
positions of tanks in a battlefield simulation, then these changes
would not be lost; they could be recovered from the logs automatically.
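The recovery idea can be sketched with a write-ahead log: append each position update to a durable log before applying it, then rebuild state by replaying the log after a crash. All names here are invented for illustration.

```python
import json

class SceneStore:
    """Minimal in-memory store with a write-ahead log: every update is
    logged before it is applied, so the state can be recovered by
    replaying the log after a crash."""
    def __init__(self):
        self.positions = {}
        self.log = []          # stand-in for a durable log on disk

    def move(self, obj_id, pos):
        record = json.dumps({"obj": obj_id, "pos": pos})
        self.log.append(record)        # 1. log first
        self.positions[obj_id] = pos   # 2. then apply

    @classmethod
    def recover(cls, log):
        """Rebuild a store purely from the surviving log records."""
        store = cls()
        for record in log:
            entry = json.loads(record)
            store.positions[entry["obj"]] = tuple(entry["pos"])
            store.log.append(record)
        return store

live = SceneStore()
live.move("tank-7", (120.0, 45.0, 0.0))
live.move("tank-7", (125.0, 47.0, 0.0))
# Simulate a crash: rebuild state from the log alone.
recovered = SceneStore.recover(live.log)
print(recovered.positions["tank-7"])   # last logged position survives
```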
Well they, the current 3D applications, certainly don’t behave like
that now. In most cases every time a scene is reloaded into a viewer
the scene reverts back to how it started – changes are not persistent.
Several 3D vendors are working on ad hoc ways to populate their
scene databases according to data in external databases of the traditional
sort. They hope to achieve some measure of persistence, security,
and scalability that way. In fact, the VRML Consortium, in a move
driven by Oracle, recently offered a VRML to database connectivity
specification as recommended practice.
You mean these 3D vendors are trying to create, no, recreate,
complete database functionality, perhaps by populating and then
modifying their scene databases from data in external databases?
Looks that way. They don’t expect to deliver anything more than
basic functionality even a few years out.
Of course they don’t. Designing and implementing robust, full-featured
database management systems requires thousands of man-years of investment.
Even new entrants in the database wars have had a hard if not impossible
time of it. These vendors of 3D technology have a vastly different
specialization of knowledge. They can’t even begin to deliver full
database functionality and performance and shouldn’t be wasting
time reinventing the wheel, or in this case, reinventing database management systems.
But our 3D advocate over here, between sips of his sweet, rich,
and velvety beverage, has told us that the performance concerns
of scene management are paramount.
That’s true. But while scene management can’t directly operate out
of a traditional database residing on disk, there’s nothing that
forbids managing scene data in an in-memory object database, don’t
you agree, DB? In fact there have been several CAD programs that
have maintained their scene data in object databases.
I concur. The performance of contemporary object databases running
on contemporary processors should be up to all but the most outrageous
data management challenges – streaming terabyte size texture maps
assembled from satellite data possibly the only holdout. And object
databases are optimized for hierarchical data – a good match to
the hierarchical scene data structures used for 3D.
So you guys are telling me that we need yet another 3D standard,
to complement Fahrenheit and Java3D?
I think not. These are object-oriented toolkits with componentized
architectures. It should be possible with Fahrenheit, and more than
likely with Java3D, to trap calls to their scene managers and redirect
these calls to an in-memory object database.
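The call-trapping idea can be sketched as a plain proxy: forward each scene-manager call to the real implementation while mirroring the change into an object store. Both interfaces below are hypothetical stand-ins, not the Fahrenheit or Java3D APIs.

```python
class ToySceneManager:
    """Stand-in for a toolkit's scene manager (hypothetical API)."""
    def __init__(self):
        self.nodes = {}
    def set_transform(self, node, xyz):
        self.nodes[node] = xyz

class ObjectDatabase:
    """Stand-in for an in-memory object database."""
    def __init__(self):
        self.records = {}
    def commit(self, key, value):
        self.records[key] = value

class DatabaseBackedSceneManager:
    """Proxy that forwards every call to the real scene manager and
    mirrors the change into the object database, so scene updates
    become database commits without changing application code."""
    def __init__(self, inner, db):
        self._inner, self._db = inner, db
    def set_transform(self, node, xyz):
        self._db.commit(node, xyz)           # persist first
        self._inner.set_transform(node, xyz) # then render-side update

db = ObjectDatabase()
mgr = DatabaseBackedSceneManager(ToySceneManager(), db)
mgr.set_transform("house", (3.0, 0.0, 9.0))
print(db.records["house"])
```

An application written against the scene-manager interface never sees the difference; only the object constructed at startup changes.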
So that applications built with these toolkits and utilizing an
object database in this fashion for scene management would accrue
all the benefits of true database applications, including multi-user
concurrency, support for transaction semantics, security, recovery,
scalability, and distributivity, to the extent these are supported
by the particular object database used.
Would this mean that applications using Fahrenheit and Java3D could
interoperate – by establishing a communication channel between their
object databases, or forming a distributed database between them?
To interoperate there would need to be a common schema in which
scene data from Fahrenheit, Java3D, and any other 3D toolkits could
reside – one that would be favorable to translations between them.
This schema should be very clearly defined along with all language
interfaces, whether SQL, OQL, or any others.
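A toolkit-neutral schema might look like the following sketch, where two invented toolkit-flavored representations (not the real Fahrenheit or Java3D schemas) both map onto one neutral record, making translation between them a matter of two small adapters.

```python
from dataclasses import dataclass

@dataclass
class NeutralNode:
    """One record of a hypothetical toolkit-neutral scene schema:
    just enough shared vocabulary that differently shaped toolkit
    nodes can round-trip through it."""
    name: str
    transform: tuple    # translation only, for brevity
    geometry: str       # e.g. a mesh or primitive reference
    material: dict

def from_fahrenheit_like(node: dict) -> NeutralNode:
    # Field names here are illustrative, not the real Fahrenheit schema.
    return NeutralNode(node["id"], tuple(node["xlate"]),
                       node["geom"], node["surface"])

def from_java3d_like(node: dict) -> NeutralNode:
    # Likewise illustrative, not the real Java3D classes.
    return NeutralNode(node["name"], tuple(node["translation"]),
                       node["shape"], node["appearance"])

a = from_fahrenheit_like({"id": "apple", "xlate": [0, 1, 0],
                          "geom": "sphere", "surface": {"color": "red"}})
b = from_java3d_like({"name": "apple", "translation": [0, 1, 0],
                      "shape": "sphere", "appearance": {"color": "red"}})
print(a == b)   # both map to the same neutral record
```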
I think you’re making good progress here but have overlooked something.
A feature DB has mentioned is scalability. Our user’s anticipated
collaborative application might conceivably need to address 3D scenes
composed of millions or billions of objects, many of which must
be shared between users, perhaps tens of thousands of them.
I see where you’re going OM. You can’t expect to store all that
scene data redundantly in every 3D client application.
Right. You’ll want to distribute it, replicate it, perhaps across
many database servers, from many vendors. So you’ve got a heterogeneous
distributed database and …
Hold on OM. Before you start distributing that scene data all over
the world you had better recall the performance issue of supporting
smooth user navigation through worlds and manipulation of objects.
That scene data’s got to be local.
Not a problem. Treat those local, in-memory object databases more
like object caches. Keep only the scene data that’s immediately needed…
…as well as prefetched scene data, based upon probabilistic and
heuristic look ahead functions. This has been used in the research
community for years to support the navigation of infinitely scalable worlds.
Precisely. You can have a hierarchy of databases, an n-tier architecture
with the databases closest to the 3D rendering engine the fastest
– the most performance oriented in structure and schema, then dropping
off in required performance the farther they get from the client.
On the back end you could have traditional relational databases
storing much of the data, easily accessible and modifiable using
proven relational techniques, even ad hoc queries which you’d never
run against an object database.
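The object-cache-with-look-ahead idea can be sketched as follows. The backing dictionary stands in for a slower back-end database, and the neighbor table is a deliberately trivial heuristic for where the user is likely to navigate next.

```python
class SceneCache:
    """Client-side object cache over a slower backing store: keeps
    the scene data the viewer needs now, and prefetches the regions
    the user is likely to enter next (a simple look-ahead heuristic)."""
    def __init__(self, backing, neighbors):
        self.backing = backing      # dict standing in for a remote DB
        self.neighbors = neighbors  # region -> likely-next regions
        self.cache = {}
        self.fetches = 0

    def _fetch(self, region):
        if region not in self.cache:
            self.fetches += 1       # counts round-trips to the back end
            self.cache[region] = self.backing[region]

    def enter(self, region):
        self._fetch(region)                    # data needed right now
        for nxt in self.neighbors.get(region, []):
            self._fetch(nxt)                   # prefetched look-ahead
        return self.cache[region]

backing = {"plaza": ["fountain"], "street": ["car"], "park": ["tree"]}
cache = SceneCache(backing, {"plaza": ["street"]})
cache.enter("plaza")     # fetches plaza and prefetches street
cache.enter("street")    # already cached: no extra round-trip
print(cache.fetches)
```

The point of the counter: entering the second region costs no back-end traffic at all, which is what keeps navigation smooth while the data itself lives elsewhere.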
DB> Oh, sorry. We sort of got carried away. You had something
more to say?
Indeed. All this talk of distributed data between in-memory object
caches and relational and object relational databases presumes a
multi-paradigm federated database system that doesn’t exist. Hell,
there’s not even a single-paradigm federated system that works outside
of a single vendor’s products. Isn’t that right, DB?
Erh, yeah, that’s right. To maintain distributed data with near
full functionality one must subscribe to a single vendor’s product
line. So much for heterogeneity.
Not necessarily. I didn’t raise an objection without a solution
in mind. Distributed computing has produced the CORBA and DCOM standards,
the first an industry standard and the second a vendor standard.
With the OMG Object Transaction Service and a real time CORBA ORB,
database systems from multiple vendors could be fused into a federation
across which distributed transactions can be supported. So CORBA,
and perhaps eventually DCOM can serve as the plumbing to interconnect
the many database systems, from 3D client-side object cache to mid-tier
object relational databases and back-end relational and even legacy systems.
Wouldn’t you need a special wire protocol for communicating changes
between databases, object caches, and the like? Many of the same
updates and queries will be traveling around the network and you’ll
want to optimize your utilization of network resources.
As I understand it there are a number of protocols for such multi-user
collaboration built on top of Multicast IP. One or more of these
could serve the need for an efficient wire protocol on top of which
either CORBA’s IIOP or an application specific inter-ORB protocol
could be built.
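Whatever transport carries them, the updates themselves want a compact binary layout. Here is a sketch of one such message, with an invented field layout; it shows only the serialization side, not multicast delivery.

```python
import struct

# A fixed binary layout for one position update keeps messages small
# on the wire (the field layout is invented for illustration):
# a 16-byte object id, then three 32-bit floats for x, y, z.
UPDATE_FORMAT = "!16s3f"   # network byte order

def pack_update(obj_id: str, x: float, y: float, z: float) -> bytes:
    return struct.pack(UPDATE_FORMAT,
                       obj_id.encode().ljust(16, b"\0"), x, y, z)

def unpack_update(payload: bytes):
    raw_id, x, y, z = struct.unpack(UPDATE_FORMAT, payload)
    return raw_id.rstrip(b"\0").decode(), (x, y, z)

msg = pack_update("tank-7", 125.0, 47.0, 0.0)
print(len(msg))             # 28 bytes per update
print(unpack_update(msg))
```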
Yes, and there’s always DIS as an option for that wire protocol,
though it has drawbacks. So what do you think, DB, about supporting
3D data in this fashion?
Elegant. This would make 3D a true data type, supported in database
systems alongside alphanumeric data and text and media and spatial
data. More importantly, 3D data can be integrated with the other
types. In short, systems analysis and modeling could embrace 3D
data just as they have other data types, and applications developers
could utilize 3D data as an application warranted it. 3D would no
longer need to stand aside, supported only in dedicated systems
- it could be integral to applications in general.
Let’s see if I understand what you’ve come up with. Applications
written to Fahrenheit and Java3D can be modified and recompiled
to utilize an in-memory object database that could operate either
stand alone, or as an object cache if part of a distributed scene
database. The scene data would reside in the object database, or
the distributed database in the larger picture, using a schema sufficiently
general, extensible, and feature laden to represent both Fahrenheit
and Java3D scene data, and others should that be necessary. Connecting
each 3D application, thought of as a client, to the distributed,
heterogeneous database federation is real time CORBA, which also
serves as the interconnect fabric for the federation. And at levels
3 and 4 of the OSI model one or more special wire protocols would
be used to efficiently support mass transactional updates across
the federation and client object caches.
That sounds right except a recompile shouldn’t be required from
my present understanding of Fahrenheit.
Do you mean that in this scenario any application written to a supported
3D toolkit and API, like Fahrenheit and Java3D, can instantly become
a multiuser 3D application, offering persistence, recoverability,
scalability, and security?
That’s the idea, though minor modifications to applications would
be desirable for them to become multi-user aware and optimized to
fully exploit these features that are taken for granted in the database world.
Correct. I’d also like to point out that non-3D applications can become
involved in this federation as well. In fact any user and application
that can address the federation, and that has the security privileges
to do so, can query and modify the 3D data. Agent applications that
monitor real time sensor feeds can modify the positions and attributes
of scene objects which are immediately reflected in visualizations
of the scene.
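The agent-to-visualization path just described is essentially the observer pattern: a change to a scene object immediately notifies every attached view. A minimal sketch, with invented names:

```python
class SceneObject:
    """Scene object whose position changes immediately notify all
    registered views (a plain observer pattern)."""
    def __init__(self, name, pos):
        self.name, self.pos = name, pos
        self._views = []
    def attach(self, view):
        self._views.append(view)
    def move(self, pos):
        self.pos = pos
        for view in self._views:
            view.on_update(self)

class Visualization:
    """Stand-in for a display: records what it was told to redraw."""
    def __init__(self):
        self.last_seen = {}
    def on_update(self, obj):
        self.last_seen[obj.name] = obj.pos   # a redraw would happen here

dome = Visualization()
truck = SceneObject("truck-3", (0.0, 0.0, 0.0))
truck.attach(dome)
# An agent fed by a sensor stream moves the object...
truck.move((17.5, 4.0, 0.0))
print(dome.last_seen["truck-3"])   # ...and the view sees it at once
```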
Yes. It becomes straightforward to pick, drag, and drop objects
from one scene in one display device to another, say from a personal
display to a multi-person dome. More than that, various 3D applications
can interoperate, within the limits of the scene data that they
can render of course.
Then a VizSim application, for example, that’s part of the federation
could dynamically receive a scene component or object from a CAD
program or a planning application. The applications would interoperate,
like 3D cut and paste, or copy and paste. As long, that is, as the
target client had the capability to render that scene component.
Speaking from the object paradigm, even this is not an obstacle.
Newly developed applications can be componentized along the lines
of Java applications where new content is accompanied by the code
to present and manage it. In other words, when a target application
needs to integrate content from a source application, besides performing
a distributed transaction of scene data in the federation, so that
the source data is fetched by the target application’s object cache,
the code for rendering/presenting/displaying/managing that new content
is fetched and dynamically linked into the target application. For
example, perhaps a sand table application in a command post is displaying
terrain data and weapon and vehicle systems, but then needs to fuse
a CAD model of an installation. The CAD scene data is fetched by
the sand table application as well as the rendering/display code
needed for its presentation.
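The fetch-code-with-content idea can be sketched with Python’s importlib standing in for dynamic linking. The module source below pretends to have arrived over the network from the source application; everything else is invented for illustration.

```python
import importlib.util
import os
import tempfile
import textwrap

# Pretend this module arrived alongside the CAD data; in practice it
# would be fetched from the source application over the network.
renderer_source = textwrap.dedent("""
    def describe(model):
        return "rendering CAD model: " + model
""")

def load_renderer(source: str, name: str = "cad_renderer"):
    """Write the fetched code to disk and link it in at run time,
    roughly as a Java-style componentized application might."""
    path = os.path.join(tempfile.mkdtemp(), name + ".py")
    with open(path, "w") as f:
        f.write(source)
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

renderer = load_renderer(renderer_source)
print(renderer.describe("installation-42"))
```

A real system would of course verify and sandbox code it fetched from elsewhere before executing it.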
Excellent. Applications can self modify at run time to fuse and
present the required data, whatever it is, as well as support user
interaction with the data. It would be even better if 3D data could
be optimized for the target display, using culling, tessellation,
simplification or decimation to tune the data to the operational
parameters of the display.
With an n-tier architecture, using special purpose servers that
perform those functions, scene data can be optimized for the target
display without needing to previously store it in all the formats
required by all the differing displays’ capabilities.
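One of the techniques just named, decimation, can be sketched as crude vertex clustering: snap vertices into a grid and merge each occupied cell down to its centroid, trading geometric detail for a smaller model. This is one of many possible simplification schemes, chosen here only for brevity.

```python
def decimate(vertices, cell_size):
    """Crude vertex-clustering simplification: bucket vertices into a
    grid of the given cell size, then replace each occupied cell's
    vertices with their centroid."""
    clusters = {}
    for x, y, z in vertices:
        key = (int(x // cell_size), int(y // cell_size),
               int(z // cell_size))
        clusters.setdefault(key, []).append((x, y, z))
    # One centroid per occupied cell survives.
    return [tuple(sum(c) / len(pts) for c in zip(*pts))
            for pts in clusters.values()]

dense = [(0.1, 0.1, 0.0), (0.2, 0.15, 0.0), (5.0, 5.0, 0.0)]
coarse = decimate(dense, cell_size=1.0)
print(len(coarse))   # two clusters remain from three vertices
```

A server in the n-tier picture would pick the cell size from the target display’s capabilities: coarser for a small personal display, finer for a dome.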
From what you’re saying, you’ll need to know more than the target
client capabilities – you’ll want to track the network connectivity
and its performance characteristics dynamically, so that you can
route the data with an appropriate quality of service. There will
need to be databases and knowledge bases that maintain both configuration
information as well as policies and dynamic information on the state
of the network and devices.
Sure, and at times you’ll modify the 3D data, in terms of its complexity
and volume, in order to fit the available channel or channels. A
negotiation between the clients, the federation of data servers,
the special purpose servers that perform simplification and optimization,
and the network itself. You’ll want the network to become an intelligent
object fabric, an ioFABRIC for short. Probably using Directory Enabled
Networks but commanded and controlled from the application layer.
Sounds like we’re back to contemporary application development principles,
now fully embracing 3D data in addition to traditional data types.
And in time, as processor and memory systems continue their ever
increasing climb up the performance curve, it will even be possible
for the client applications to be written directly with transactional
semantics using a standard data language.
A standard data language extended to support all the required constructs
of 3D data.
Yes. Perhaps extensions to the ODMG’s Object Query Language and
Object Definition Language. And 3D extensions to ANSI’s upcoming
SQL3. And in time the object data languages are expected to converge
into SQL3, a single data language standard, which by our recent
thinking, sitting around this table, could well provide complete
support for real-time 3D data and development of applications with it.
Would that be SQL3D then?
Touché! Although it’s possible that no modifications or extensions
would be needed, as these languages may already have sufficient representational power.
So where do we begin to actually get SQL3D standardized and readily
deployed in products?
and OM in unison> We’d better order another round of Turkish
coffees before we handle that one.