  From: Kostya Vasilyev <kostik@verio.com>
  To  : ggi-develop@eskimo.com
  Date: Tue, 18 Aug 1998 21:13:45 -0700

RE: LibGGI3D RFC

Since this is an RFC, I'm going to "C" ;-)

BTW, who can set up a separate 3D mailing list? I do not have the resources
(not running my own mail server)... any volunteers?

: -----Original Message-----
: From: Jon M. Taylor [mailto:taylorj@ecs.csus.edu]
: Sent: Tuesday, August 18, 1998 7:54 PM
: To: GGI mailing list
: Subject: LibGGI3D RFC
:
:
: 	OK, here's my first stab at nailing down my LibGGI3D system
: proposal.  Feedback is appreciated.
:
: <--- CUT HERE --->
:
: I. The purpose of having a LibGGI3D
:
: 	3D computer graphics is a very large and complex topic.  When 3D
: video hardware is also considered, the topic becomes even more complex.
: The only existing 3D API that is universally(?) considered to be capable
: of encompassing all of this is OpenGL.  Therefore, most existing
: proposals for 3D-in-GGI have centered around Mesa, the existing open
: source implementation of the OpenGL API.
:
: 	There are, however, some downsides to using Mesa/OpenGL:
:
: * It is huge.  OpenGL is a full object representation system, rendering
: pipeline and display management system.  Not all of these features are
: needed by all users.  For example, a widget set that used Phong shading to
: give its widgets a nice 3D look might want to be able to accelerate that
: shading in hardware if it could do so, but if OpenGL is the only way to do
: 3D it will have to base the whole widget set on OpenGL.
:
: * It is slow(er).  OpenGL is great for compatibility, but sticking it
: between the app and the hardware does entail some level of performance
: loss which might substantially impact usability.  An example might be a
: port of Windows' Direct3D API to GGI.  Since many games run on Direct3D, a
: port of D3D would either have to be built on top of Mesa (which would
: cause a performance drop) or "go it alone" like the WINE DirectX layer is
: having to do.  It would be nice to have a much smaller 3D API which would
: serve as a nice middle ground between letting a huge API do everything for
: you and coding directly to KGI drivers for maximum speed.

OpenGL (as an API, not any particular implementation) is not necessarily
slower than, for example, Direct3D, when it comes to features they have in
common (I can think of only one exception: texture memory management...).
However, a full OpenGL implementation is very complex, and it's harder to
optimize all code paths. Direct3D, being a simpler API, is easier to
optimize, since it allows fewer possible usage patterns from the application
and therefore needs fewer code paths inside the implementation.

Where OpenGL does get slower than Direct3D is where D3D has features that
OpenGL lacks (I know, there are extensions, but let's not consider them for
the sake of this point). One such example is multitexturing. If the HW
supports it, but the driver does not expose the SGI_multitexture extension,
the application will have to make two separate rendering passes.
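
For example, here is what the classic two-pass light map fallback looks like
in plain OpenGL 1.1. This is only a sketch: drawMesh(), base_texture and
light_map are hypothetical, and state setup is reduced to the minimum.

#include <GL/gl.h>

extern GLuint base_texture, light_map;   /* hypothetical texture handles */
extern void drawMesh(void);              /* hypothetical: issues the triangles */

void render_lightmapped_mesh(void)
{
    glEnable(GL_TEXTURE_2D);

    glBindTexture(GL_TEXTURE_2D, base_texture);
    drawMesh();                          /* pass 1: base texture */

    glEnable(GL_BLEND);
    glBlendFunc(GL_DST_COLOR, GL_ZERO);  /* frame buffer *= incoming texel */
    glDepthFunc(GL_EQUAL);               /* touch exactly the same pixels */
    glBindTexture(GL_TEXTURE_2D, light_map);
    drawMesh();                          /* pass 2: light map */

    glDepthFunc(GL_LESS);                /* restore state */
    glDisable(GL_BLEND);
}

With a multitexturing extension the same result goes down the pipe in a
single pass, with half the geometry and frame buffer traffic.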

: * It is too inflexible in places.  OpenGL 1.1's inability to expose the
: geometry part of the graphics rendering pipeline means that video hardware
: which can accelerate some geometry itself cannot just plug in to that part
: of OpenGL and say "use me to do this stuff".  To accomplish this, the
: *entire* OpenGL API must be rewritten for that piece of video hardware.

Let's distinguish between OpenGL 1.1 specification and the details (MCD/ICD)
of OpenGL implementation on Windows 9X and Windows NT. The OpenGL API
specification allows for full geometry acceleration (at one time, I did some
work with an OpenGL board from Evans & Sutherland, which does complete
geometry acceleration).

: This is why it took so many video card makers so long to come up with
: decent OpenGL drivers under Windows - either they used the stock Mini
: Client Drivers (MCDs) which let them accelerate rendering only, or they
: had to create their own customized OpenGL implementation that accounted
: for the precise nature of the video hardware all up and down the rendering
: pipeline.

OpenGL _is_ complex, more difficult to optimize (see above), and let's not
forget another (non-technical) issue: Microsoft promised IHVs an MCD kit for
Windows 95, then changed their minds.

: 	OpenGL is quite good, and for most purposes it gets the job done.
: But not for every purpose.  For those who need a few simple 3D features,
: those who want to be able to develop a 3D KGI driver without having to
: deal with the whole OpenGL API, for those who just want to render a
: Gouraud shaded, texture mapped triangle, we need a simpler API.  That API
: is LibGGI3D.
:
: II. Overall design
:
: 	So the keyword here is "simple".  If OpenGL is too complex, then
: let's look at what parts of OpenGL we *don't* need:
:
: * 3D world modeling.  We don't need it, and it can be implemented on top
: of LibGGI3D if needed.  Our job is to do 3D drawing, not model worlds.
:
: * Complex object representational schemes (polyhedra, surfaces,
: constructive solid geometry).  Again, this is a job for a higher-level
: API.  All of that stuff is tessellated down before rendering anyway, so
: LibGGI3D can be designed with the assumption that such tessellation has
: already been done.

Agreed.

: * Lighting, shading, texture mapping, or any other such algorithms.  All
: of these are methods for determining the color of a pixel based on certain
: data.  If we give up world-building (see above), we no longer have the
: necessary information to shade pixels properly.  We need to provide a
: general way to shade pixels, but not make any assumptions about how that
: shading should be done.  That, again, is a job for higher API levels.

Leaving texturing out of a 3D API is unacceptable. All modern 3D hardware
has this capability, and practically all games and applications make use of
it.

Furthermore, more and more game engines base their rendering on
multitexturing (light maps are one old and boring example, but there are
also glow maps, specular maps, embossing maps, etc.). Much of the current
and soon-to-be-released hardware (Voodoo2, Permedia3, nVidia TnT, Matrox
G200, S3 Savage, etc.) has multitexturing built in. Not supporting it would
significantly slow down any application that uses multitexturing.

: 	So, what are we left with?  Essentially, we are left with two
: basic concepts: 3D drawing and 3D shading.  That is all we need in our
: API. But of course there are features we will need to have in order to
: implement these primitives cleanly and with maximum flexibility.  So now
: we get to the specifics of the LibGGI3D API design.
:
: 	The first feature is the notion of a camera.  This is just two
: sets of 3D coordinates which specify a viewpoint and a direction.  This is
: necessary to map the 3D coordinate space onto the 2D coordinate space of
: the display.

Two (x,y,z) triplets are not enough.

For starters, one triplet does not uniquely specify orientation (you need
at least four numbers for that); it's actually better to use three mutually
orthogonal vectors for orientation and treat them as [to, right, and up]
vectors (this way the application can also choose a right-handed or
left-handed coordinate system).

Second, you also need a field of view, and a convention about what to do
with non-square viewports (is the fov the same in the vertical and
horizontal directions? is it at least the specified value? at most?).

Finally, you also need at least a near clipping plane so you can do
perspective projection (and a far clipping plane would be nice, too ;-)
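
Putting those pieces together, a camera might look more like this. Just a
sketch of my suggestion; the names and layout are made up:

typedef struct ggi3d_camera
{
    float pos[3];                  /* viewpoint */
    float to[3], right[3], up[3];  /* orthonormal orientation basis;
                                      handedness is the application's choice */
    float fov_x, fov_y;            /* horizontal/vertical field of view */
    float near_clip, far_clip;     /* clipping planes (far is the luxury) */
    void *private_data;            /* arbitrary hook, as in the RFC */
} ggi3d_camera;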

: 	Next is the notion of triangle drawing.  LibGGI3D will have only
: one drawing function, DrawTriangle().  The triangle is the fundamental
: object of LibGGI3D.  All other 3D shapes can be tessellated into triangles,
: which are the prototypical polygon.

The overhead of calling a function once per triangle, especially with
parameters passed by value, can be too much in a high-performance game
engine (yes, I keep referring to game engines, but that's what I know best).

It is best to batch up triangles and call a single function that processes
the whole batch in one shot.

Providing an indexed triangle function is also very important, since it can cut
transformation and projection time (which is about 80% of rendering time in
the code I have written) by a factor of 2 or more, depending on the topology
of what the triangles actually represent.
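
Something along these lines would do. This is only a sketch: the names and
the exact layout are made up, and shader_index is the RFC's own type:

#include <stddef.h>

typedef struct ggi3d_vertex { float x, y, z; } ggi3d_vertex;

typedef struct ggi3d_indexed_set
{
    const ggi3d_vertex *vertices;   /* shared pool, transformed only once */
    size_t              num_vertices;
    const unsigned     *indices;    /* three indices per triangle */
    size_t              num_triangles;
} ggi3d_indexed_set;

/* one call per batch, not one call per triangle */
int ggi3dDrawIndexedTriangleSet(const ggi3d_indexed_set *set,
                                shader_index shader, void *shader_data);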

: 	Next is the notion of pluggable shaders.  A triangle is drawn by
: setting a bunch of pixels.  What shade those pixels are can be determined
: in a huge number of different ways.  LibGGI cannot possibly avoid bloat if
: it has to know about all those ways to shade triangles.  Therefore,
: DrawTriangle can take a pointer to a shader table entry, which contains a
: pointer to a shader function.  Thus, the code using LibGGI3D can implement
: arbitrary shaders and LibGGI3D doesn't need to know about the gory
: details.  Both prebuilt shader functions (SHADE_PHONG,
: SHADE_GORAUD_TEXURED, SHADE_FLAT, etc) and user-defined shader functions
: go in this table.  The prebuilt shaders allow DrawTriangle() to use
: hardware shading if it is present, or fall back to software shading if it
: is not.

Since a major goal of LibGGI3D seems to be hardware support, I'd like to
suggest that LibGGI3D only support primitive shading built into hardware:
flat and Gouraud shading, texturing (incl. multi-) and _alpha blending_.

Most other shaders can be built on top of these features, combined with
adaptive subdivision where necessary.
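
In other words, the shader set built into LibGGI3D could be as small as this
(the names are mine, just to illustrate):

typedef enum ggi3d_shade_mode
{
    GGI3D_SHADE_FLAT,           /* one color per triangle */
    GGI3D_SHADE_GOURAUD,        /* per-vertex colors, interpolated */
    GGI3D_SHADE_TEXTURED,       /* single texture, modulated by Gouraud */
    GGI3D_SHADE_MULTITEXTURED,  /* two textures in one pass where the HW can */
    GGI3D_SHADE_ALPHA_BLEND     /* blend the result with the frame buffer */
} ggi3d_shade_mode;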

Allowing shaders complete control over each pixel of the triangle would,
certainly, allow one to shade triangles in ways not supported by the card.
But doing this will almost certainly put LibGGI3D into the offline rendering
category (i.e. submit a rendering job and go home to sleep), and there is
already RenderMan ;-)

: 	So, all LibGGI3D does is draw shaded triangles in a particular
: coordinate system.  End of story.  With these simple tools, almost any
: type of 3D hardware/software acceleration combo can be used and used
: fairly well.  Many common 3D video cards are based on triangles, and for
: those that are not (infinite planes, polygons) we can still use triangles
: or collections of triangles to represent polygons or polyhedra.

Let's talk about what coordinate space the input triangles are in.

Given your proposed API: Should the application wish to move its triangles
about (say, it wants to render an animated character!), it will need to
update _all_ vertex positions by applying a matrix (representing
object-to-world transformation). This is very expensive (36 clocks per
vertex, not counting load/store _or_ cache misses).

Since most cards at this point only have triangle rasterization capabilities
(and not geometry acceleration), LibGGI3D will have to transform the
vertices from their space into camera space, by applying a matrix multiply
to each vertex (and then projecting it).

Having a function in LibGGI3D that specifies the local-to-world transform
for all subsequent triangles will relieve the application of having to do it,
and result in better performance because the local-to-world and
world-to-camera matrices can be concatenated inside LibGGI3D.

When running on a card that has geometry acceleration, LibGGI3D can tell the
card when the local-to-camera matrix changes.
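
As a sketch (the function name and the column-major 4x4 layout are only my
assumptions), the interface could be as small as:

/* Sets the local-to-world transform for all subsequent triangles.
   LibGGI3D concatenates it with the world-to-camera matrix once,
   or hands it to the card if there is geometry acceleration. */
int ggi3dSetModelMatrix(const float m[16]);

/* typical usage, once per object:
 *     ggi3dSetModelMatrix(character_to_world);
 *     DrawTriangleSet(character_triangles, my_shader, my_shader_data);
 */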

: III. Design specifics.
:
: * struct camera { float x1, y1, z1, x2, y2, z2 }. Every LibGGI3D display
: will have a camera struct attached to it.  A generic hook for arbitrary
: display data should also be present - the shaders might use it.  void
: *private_data.
:
: * int DrawTriangle (float x1, y1, x2, y2, x3, y3; shader_index
: shader; void
: *shader_data) is the core drawing function.
:
: * int DrawTriangleSet(triangle_set *triset; shader_index shader; void
: *shader_data) draws set of triangles, which can be used to draw polyhedra
: for hardware like infinite planes.  If future hardware can draw surfaces
: directly, triangle-based surface patches can be used with this function as
: well.
:
: * shader_index register_shader(shader_func *myshader) registers a custom
: shader function with the system and returns an index into the shader
: table.
:
: * int unregister_shader(shader_index myshaderindex) unregisters a shader.
: On GGI_exit(), all registered shaders are forcibly unregistered.
:
: IV. Other stuff (FAQs)
:
: Q: What about z-buffering?  You said you were going to make that part of
: the API.
:
: A: I changed my mind.  Once you start worrying about buffers, their
: dimensions, their layout, their coordinate system, etc etc you open up a
: whole can of worms.  If people want these features, they can either write
: a DirectBuffer equivalent or draw to a secondary depth buffer.  LibGGI3D
: is for drawing only.  Extra stuff should have its own API.

You are joking about not having Z-buffering, right?

If you want LibGGI3D to be useful for more than Phong shaded window
captions, it needs to support Z-buffering.
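
For reference, this is all that per-pixel Z-buffering amounts to in the
inner loop of a software rasterizer (a sketch; the buffer layout and names
are made up):

#include <stdint.h>

void shade_span(int x_left, int x_right, float z, float dz_dx,
                uint32_t color, float *zbuf, uint32_t *fb)
{
    for (int x = x_left; x < x_right; x++) {
        if (z < zbuf[x]) {     /* closer than what is already there? */
            zbuf[x] = z;       /* keep the new depth */
            fb[x]   = color;   /* and the new color */
        }
        z += dz_dx;            /* depth interpolated across the span */
    }
}

Hardware that can rasterize triangles can almost always do this test itself.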

: Q: Why are coordinates floating point numbers?
:
: A: Because it allows for finer control of rasterization and perspective
: transforms, and can easily be quantized into integer coordinate systems
: when needed.
:
: Q: Why do you use that shader table thing?  Why not just pass a pointer to
: a shader function directly to DrawTriangle()?
:
: A: Because the drawing functions need to be able to know what type of
: shading is being used if they are to be able to use hardware shading.  So,
: you need a fixed set of shader types that are defined and come with the
: library.  Since the system has to know which shader functions go with
: which SHADER_* constants, it makes sense to use the constants as an index
: to a table.  And if you are treating the stock shaders this way, you might
: as well use the same system for the custom shaders.

You could make shaders be pointers to structs of function pointers, one of
which would check the HW caps, and another would render:

#include <stdbool.h>   /* for bool */

typedef struct Shader
{
    /* reports whether the current hardware can run this shader */
    bool (*CanDoOnThisHardware)(struct Shader *self /*, whatever */);
    /* renders one triangle with it */
    void (*YesAndGoAheadAndDoIt)(struct Shader *self, Triangle *tri
                                 /*, whatever */);
} Shader;
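
DrawTriangle() could then dispatch through the struct and fall back to a
software path when the hardware cannot help (a sketch; the software fallback
is hypothetical):

void ShadeTriangle(Shader *s, Triangle *tri)
{
    if (s->CanDoOnThisHardware(s /*, hw caps */))
        s->YesAndGoAheadAndDoIt(s, tri /*, whatever */);
    else
        ShadeTriangleInSoftware(tri);   /* hypothetical software fallback */
}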

: Q: If I draw a triangle close to the camera and then draw another one much
: farther away such that the first, closer triangle should occlude the
: second, further one, the second one instead is drawn over the top of the
: first one!  Isn't that wrong behavior?
:
: A: Nope.  LibGGI3D does not concern itself with hidden surface removal.
: None of the triangles are "aware" that the others exist.  Remember,
: there's no 3D world here.  If you want a 3D world, write LibGGI3Dworld or
: use OpenGL.

Hidden surface removal is a large and complex topic, but given the current
state of hardware/software features/performance, the best approach is to do
a rough visibility calculation on the host CPU (BSPs and portals are two
possible schemes) without trying to get rid of all overdraw, and then let
the hardware sort out the rest. This requires some sort of hidden surface
removal in hardware; z-buffering is one such scheme.

: ********
:
: 	That's all for now.  Take a look and let me know what you think.

That's all my comments for now...

~kostik
