  From: Jon M. Taylor <taylorj@ecs.csus.edu>
  To  : GGI mailing list <ggi-develop@eskimo.com>
  Date: Tue, 18 Aug 1998 19:54:14 -0700 (PDT)

LibGGI3D RFC

	OK, here's my first stab at nailing down my LibGGI3D system
proposal.  Feedback is appreciated. 

<--- CUT HERE --->

I. The purpose of having a LibGGI3D

	3D computer graphics is a very large and complex topic.  When 3D 
video hardware is also considered, the topic becomes even more complex.  
The only existing 3D API that is universally(?) considered to be capable 
of encompassing all of this is OpenGL.  Therefore, most existing 
proposals for 3D-in-GGI have centered around Mesa, the existing open 
source implementation of the OpenGL API.

	There are, however, some downsides to using Mesa/OpenGL:

* It is huge.  OpenGL is a full object representation system, rendering
pipeline and display management system.  Not all of these features are
needed by all users.  For example, a widget set that uses Phong shading to
give its widgets a nice 3D look might want to accelerate that shading in
hardware where possible, but if OpenGL is the only way to do 3D, the whole
widget set would have to be based on OpenGL.

* It is slow(er).  OpenGL is great for compatibility, but sticking it
between the app and the hardware does entail some level of performance
loss which might substantially impact usability.  An example might be a
port of Windows' Direct3D API to GGI.  Since many games run on Direct3D, a
port of D3D would either have to be built on top of Mesa (which would
cause a performance drop) or "go it alone" like the WINE DirectX layer is
having to do.  It would be nice to have a much smaller 3D API which would
serve as a nice middle ground between letting a huge API do everything for
you and coding directly to KGI drivers for maximum speed.

* It is too inflexible in places.  OpenGL 1.1's inability to expose the
geometry part of the graphics rendering pipeline means that video hardware
which can accelerate some geometry itself cannot just plug in to that part
of OpenGL and say "use me to do this stuff".  To accomplish this, the
*entire* OpenGL API must be rewritten for that piece of video hardware. 
This is why it took so many video card makers so long to come up with
decent OpenGL drivers under Windows - either they used the stock Mini
Client Drivers (MCDs) which let them accelerate rendering only, or they
had to create their own customized OpenGL implementation that accounted
for the precise nature of the video hardware all up and down the rendering
pipeline.

	OpenGL is quite good, and for most purposes it gets the job done. 
But not for every purpose.  For those who need only a few simple 3D
features, for those who want to develop a 3D KGI driver without having to
deal with the whole OpenGL API, and for those who just want to render a
Gouraud-shaded, texture-mapped triangle, we need a simpler API.  That API
is LibGGI3D.

II. Overall design

	So the keyword here is "simple".  If OpenGL is too complex, then
let's look at what parts of OpenGL we *don't* need: 

* 3D world modeling.  We don't need it, and it can be implemented on top
of LibGGI3D if needed.  Our job is to do 3D drawing, not model worlds. 

* Complex object representational schemes (polyhedra, surfaces,
constructive solid geometry).  Again, this is a job for a higher-level
API.  All of that stuff is tessellated down before rendering anyway, so
LibGGI3D can be designed with the assumption that such tessellation has
already been done.

* Lighting, shading, texture mapping, or any other such algorithms.  All
of these are methods for determining the color of a pixel based on certain
data.  If we give up world-building (see above), we no longer have the
necessary information to shade pixels properly.  We need to provide a
general way to shade pixels, but not make any assumptions about how that
shading should be done.  That, again, is a job for higher API levels.


	So, what are we left with?  Essentially, we are left with two
basic concepts: 3D drawing and 3D shading.  That is all we need in our
API. But of course there are features we will need to have in order to 
implement these primitives cleanly and with maximum flexibility.  So now 
we get to the specifics of the LibGGI3D API design.

	The first feature is the notion of a camera.  This is just two
sets of 3D coordinates which specify a viewpoint and a direction.  This is
necessary to map the 3D coordinate space onto the 2D coordinate space of
the display.

	Next is the notion of triangle drawing.  LibGGI3D will have only
one drawing function, DrawTriangle().  The triangle is the fundamental
object of LibGGI3D.  All other 3D shapes can be tessellated into triangles,
which are the prototypical polygon.
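
	As a concrete (purely illustrative) example of what that
tessellation means in practice - the vertex type and helper below are my
own invention, not part of this proposal - a quadrilateral becomes two
triangles that share one diagonal:

  /* Hypothetical sketch: splitting a quad into two triangles.
   * The vertex type is an assumption for illustration only. */
  typedef struct { float x, y, z; } vertex3;

  /* A quad with corners a, b, c, d (in winding order) becomes the
   * triangles (a, b, c) and (a, c, d), which share the diagonal a-c. */
  static void quad_to_triangles(const vertex3 q[4], vertex3 tris[2][3])
  {
          tris[0][0] = q[0]; tris[0][1] = q[1]; tris[0][2] = q[2];
          tris[1][0] = q[0]; tris[1][1] = q[2]; tris[1][2] = q[3];
  }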

	Next is the notion of pluggable shaders.  A triangle is drawn by
setting a bunch of pixels.  What shade those pixels are can be determined
in a huge number of different ways.  LibGGI3D cannot possibly avoid bloat
if it has to know about all those ways to shade triangles.  Therefore,
DrawTriangle() takes an index into a shader table; each table entry
contains a pointer to a shader function.  Thus, the code using LibGGI3D
can implement arbitrary shaders and LibGGI3D doesn't need to know about
the gory details.  Both prebuilt shader functions (SHADE_PHONG,
SHADE_GOURAUD_TEXTURED, SHADE_FLAT, etc.) and user-defined shader functions
go in this table.  The prebuilt shaders allow DrawTriangle() to use
hardware shading if it is present, or fall back to software shading if it
is not.
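
	To make the shader table idea concrete, here is a minimal sketch
of what the plumbing could look like.  Every name and signature below is
my own guess at an implementation, not something this RFC pins down:

  /* Hypothetical sketch of the shader table.  Names, signatures and the
   * per-entry layout are assumptions; only the concepts come from the
   * proposal. */

  typedef int shader_index;

  /* A shader decides the color of the pixels it is handed.  shader_data
   * is the opaque pointer passed through DrawTriangle(). */
  typedef int shader_func(void *display, const float *vertices,
                          void *shader_data);

  struct shader_entry {
          shader_func *func;   /* software fallback implementation      */
          int          hw_id;  /* nonzero if the KGI driver can do this */
  };

  /* The stock shaders (SHADE_FLAT, SHADE_PHONG, ...) would occupy the
   * low indices of the table; register_shader() appends custom ones. */

In such a scheme DrawTriangle() would look the entry up by index, try the
hardware path first, and fall back to calling func in software.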

	So, all LibGGI3D does is draw shaded triangles in a particular
coordinate system.  End of story.  With these simple tools, almost any
type of 3D hardware/software acceleration combo can be used, and used
fairly well.  Many common 3D video cards are based on triangles, and for
those that are not (infinite planes, polygons) we can still use triangles
or collections of triangles to represent polygons or polyhedra.

III. Design specifics.

* struct camera { float x1, y1, z1, x2, y2, z2 }.  Every LibGGI3D display
will have a camera struct attached to it.  A generic hook for arbitrary
display data (a void *private_data member) should also be present - the
shaders might use it.

* int DrawTriangle (float x1, y1, x2, y2, x3, y3; shader_index shader; void
*shader_data) is the core drawing function.

* int DrawTriangleSet(triangle_set *triset; shader_index shader; void
*shader_data) draws a set of triangles, which can be used to draw polyhedra
on hardware that works with representations such as infinite planes.  If
future hardware can draw surfaces directly, triangle-based surface patches
can be used with this function as well.

* shader_index register_shader(shader_func *myshader) registers a custom
shader function with the system and returns an index into the shader
table.

* int unregister_shader(shader_index myshaderindex) unregisters a shader. 
On GGI_exit(), all registered shaders are forcibly unregistered.
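
	Put together as a (hypothetical) header, the declarations above
might look something like this.  The triangle_set and shader_func types
are my own assumptions, and the proposal lists only x/y pairs for
DrawTriangle() even though the camera maps a 3D space onto the display,
so treat this strictly as a sketch:

  /* ggi3d.h - sketch of the proposed API, not an actual header. */

  struct camera {
          float x1, y1, z1;       /* viewpoint                        */
          float x2, y2, z2;       /* direction                        */
          void *private_data;     /* generic hook, e.g. for shaders   */
  };

  typedef int shader_index;
  typedef int shader_func(void *display, const float *vertices,
                          void *shader_data);

  typedef struct {
          int    count;           /* number of triangles                */
          float *vertices;        /* packed vertex data, 3 per triangle */
  } triangle_set;

  /* Core drawing call.  Per-vertex z values are presumably implied by
   * the camera mapping, but the RFC text lists only x/y pairs. */
  int DrawTriangle(float x1, float y1, float x2, float y2,
                   float x3, float y3,
                   shader_index shader, void *shader_data);

  int DrawTriangleSet(triangle_set *triset,
                      shader_index shader, void *shader_data);

  shader_index register_shader(shader_func *myshader);
  int unregister_shader(shader_index myshaderindex);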

IV. Other stuff (FAQs)

Q: What about z-buffering?  You said you were going to make that part of 
the API.

A: I changed my mind.  Once you start worrying about buffers, their
dimensions, their layout, their coordinate system, etc etc you open up a
whole can of worms.  If people want these features, they can either write
a DirectBuffer equivalent or draw to a secondary depth buffer.  LibGGI3D
is for drawing only.  Extra stuff should have its own API. 

Q: Why are coordinates floating point numbers?

A: Because floating point allows finer control of rasterization and
perspective transforms, and the coordinates can easily be quantized into
integer coordinate systems when needed.
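
	For instance (an illustrative snippet, not part of the proposed
API), snapping a floating point coordinate onto an integer pixel grid is
a one-liner:

  /* Quantize a nonnegative float coordinate to the nearest integer
   * pixel.  Illustrative only; a real rasterizer would pick its own
   * rounding and clipping rules. */
  static int quantize(float coord)
  {
          return (int)(coord + 0.5f);
  }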

Q: Why do you use that shader table thing?  Why not just pass a pointer to
a shader function directly to DrawTriangle()?

A: Because the drawing functions need to be able to know what type of
shading is being used if they are to be able to use hardware shading.  So,
you need a fixed set of shader types that are defined and come with the
library.  Since the system has to know which shader functions go with
which SHADER_* constants, it makes sense to use the constants as an index
to a table.  And if you are treating the stock shaders this way, you might
as well use the same system for the custom shaders.

Q: If I draw a triangle close to the camera and then draw another one much
farther away such that the first, closer triangle should occlude the
second, further one, the second one instead is drawn over the top of the
first one!  Isn't that wrong behavior?

A: Nope.  LibGGI3D does not concern itself with hidden surface removal. 
None of the triangles are "aware" that the others exist.  Remember,
there's no 3D world here.  If you want a 3D world, write LibGGI3Dworld or
use OpenGL.


********

	That's all for now.  Take a look and let me know what you think. 

Jon

---
'Cloning and the reprogramming of DNA is the first serious step in 
becoming one with God.'
	- Scientist G. Richard Seed
