  From: Jon M. Taylor <taylorj@ecs.csus.edu>
  To  : ggi-develop@eskimo.com
  Date: Tue, 18 Aug 1998 15:52:49 -0700 (PDT)

Re: 3d

On Sun, 16 Aug 1998, Adrian Ratnapala wrote:

> Christian Reiniger wrote:
> 
> > Jon M. Taylor wrote:
> >
> >
> > Perhaps he means OpenGL should be the only user level API, i.e. it wouldn't
> > be a good idea to try to replace OpenGL with some other API.
> >
> > I for my part would like to see some minimalistic hardware abstraction lib
> > (libggi3d) with Mesa/OpenGL sitting on top of it.
> >
> > >       But if you *also* start from nothing, you can create a small
> > >LibGGI3D that would do primitive 3D rendering/shading, and that is all
> > >some people want.
> 
> Yes.  Also I wouldn't mind too much if that lib had slightly different
> interfaces for different cards (obviously you would want as much commonality
> as possible).

	Doing this right is a bit tricky.  You can try to encapsulate
various types of framebuffer layouts with a generalized Direct[x]Buffer
scheme, but when you get to card-specific representation of objects and
rendering you are stepping into a minefield. 

	IMHO, the proper way to handle that stuff is to represent
everything as triangles (yes, I was wrong about the polygons, more on that
later) and use pluggable shaders/renderers that may or may not call out to
the hardware.  Your API abstraction can be as high-level as you want
(OpenGL is the logical end-point of this), and you render down through
simpler and simpler levels until you hit a point where you can optimally
render to the hardware (the KGI driver, really).

	Of course, rendering down from NURBS to bezier surface patches
based on polygons to bezier surface patches based on triangles to
triangles before sending the triangles to the hardware would be slow as
hell.  Each layer could choose to do its own rendering if going through a
lower layer (like LibGGI3D) would be too expensive.  Mesa could render
directly to the KGI drivers, or it could render to LibGGI3D, or it could
render to LibGGI3D+LibSurfaces (or whatever).
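
	In code terms, the bypass decision at each layer is nothing more
than checking whether a faster path exists before falling back to the
generic triangle path.  Again a sketch, and none of these names are real:

    /* Hypothetical: a surface-patch layer that either hands the patch
     * straight to hardware that understands patches, or tessellates it
     * into triangles for the layer below. */
    struct ggi3d_patch {
        float cp[4][4][3];      /* 4x4 grid of control points */
    };

    /* Non-NULL only if this card/driver can take patches directly. */
    typedef int (*ggi3d_patch_hw_fn)(void *drawable,
                                     const struct ggi3d_patch *p);

    /* Fallback: tessellate and feed triangles to the layer below. */
    extern int ggi3d_tessellate_patch(void *drawable,
                                      const struct ggi3d_patch *p);

    static int draw_patch(void *drawable, const struct ggi3d_patch *p,
                          ggi3d_patch_hw_fn hw_path)
    {
        if (hw_path)
            return hw_path(drawable, p);    /* skip the generic route */
        return ggi3d_tessellate_patch(drawable, p);
    }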

	The point is that you either deal with hardware specific features
by encapsulating them in a higher-level API, or you hit the KGI driver
directly.  But you should not have "variable APIs".  If a given API lacks
support for a given feature, either extend it for good or bypass it
altogether.  Since everything is so mix 'n match in LibGGI anyway, it
doesn't make sense to have extensible APIs.  An extensible API *system*,
yes.  But the APIs themselves should be standardized.

	And about the triangle vs. polygon thing: I was wrong, LibGGI3D
*should* be based on triangles.  I was thinking that the loss of polygon
info would cause problems for the pluggable shaders, but upon further
reflection I realized that isn't the case.  I was also thinking that for
surface patches you'd have to coalesce the triangles back into polygons,
but that also isn't correct (meshes can be based on triangles).  I thought
the same thing about constructive solid geometry (infinite planes can do
triangles easily), which was *also* wrong. Triangles are the way to go. 
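
	The polygon worry in particular evaporates once you notice that a
convex polygon can always be split into a fan of triangles with nothing
lost.  A trivial sketch, reusing the made-up ggi3d types from above:

    /* Split a convex polygon into a triangle fan around vertex 0.
     * 'out' must have room for n - 2 triangles. */
    static int polygon_to_fan(const ggi3d_vertex_t *poly, int n,
                              ggi3d_triangle_t *out)
    {
        int i;

        if (n < 3)
            return 0;
        for (i = 0; i < n - 2; i++) {
            out[i].v[0] = poly[0];
            out[i].v[1] = poly[i + 1];
            out[i].v[2] = poly[i + 2];
        }
        return n - 2;   /* number of triangles produced */
    }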

> The idea is for it to sit between high level libs like Mesa as well as other
> things (eg. a D3D clone or whatever).

	That is one of its features.  It is like a Mini Client Driver for
Windows - it takes care of a lot of the low-level representational
gruntwork at the cost of some flexibility.  It would be A **LOT** easier
to write LibGGI3D accel support for a given video card and then write ONE
LibGGI3D target for Mesa or D3D or Renderman or POVray or a VRML engine or
whatever.  But it would also be less efficient than writing individual
driver libraries for each card for *each app*, which is necessary to get
maximum optimization. 

	That last is a LOT of work.  If we give people a reasonable,
minimalistic and yet flexible LibGGI3D, 3D support will be able to get "up
and running" much quicker.  As time goes by, people will create optimized
direct-to-hardware LibGGI helper libs, but in the meantime they will have
*something*.  Right now, everyone wants to target Mesa to the KGI
drivers themselves through LibGGI helper libs.  Do you guys have any idea
how much work that is going to be?  And when you are through with all that
work, you have supported *one* API, OpenGL.  Now everyone will have to do
accelerated 3D through OpenGL, which will be a performance hit for e.g. 
D3D emulation.

	With the simple LibGGI3D scheme I outlined elsewhere in this
thread (and which I need to formally write up), I could whip up support
for a ViRGE card fairly quickly.  I could also do so for Glide - hell,
Glide already mostly *is* LibGGI3D |->.  Maybe I'll give this a shot with
Glide sometime soon.  No need to start out with anything but flat shaded
triangles - the rest can be added later. 
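
	To give an idea of how thin that mapping is, a flat shaded LibGGI3D
triangle on Glide boils down to roughly this.  (Sketch from memory of the
Glide 2.x API; the ggi3d names are still my made-up ones from above, and
depth/texture setup is left out entirely.)

    #include <glide.h>

    /* Sketch: render one flat shaded ggi3d triangle through Glide by
     * giving all three GrVertexes the same color.  The other GrVertex
     * fields (ooz, oow, texture coords) are ignored for brevity. */
    static int ggi3d_flat_triangle_glide(const ggi3d_triangle_t *tri,
                                         float r, float g, float b)
    {
        GrVertex v0, v1, v2;

        v0.x = tri->v[0].x;  v0.y = tri->v[0].y;
        v1.x = tri->v[1].x;  v1.y = tri->v[1].y;
        v2.x = tri->v[2].x;  v2.y = tri->v[2].y;

        v0.r = v1.r = v2.r = r;
        v0.g = v1.g = v2.g = g;
        v0.b = v1.b = v2.b = b;

        grDrawTriangle(&v0, &v1, &v2);
        return 0;
    }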

> If you don't want to do this, then such libs will have to use the raw kernel
> interface.  I suppose this would be OK provided this interface was stable
> enough.  (Although it does limit portability).

	At a certain point, you have to bite the bullet and hit the KGI
driver if you want to "do your own thing".  But most people don't need to
do that sort of thing, so we allow for APIs via LibGGI too.

Jon

---
'Cloning and the reprogramming of DNA is the first serious step in 
becoming one with God.'
	- Scientist G. Richard Seed
