  From: Olivier Galibert <galibert@pobox.com>
  To  : libggi3d-list@eskimo.com
  Date: Sun, 23 Aug 1998 05:32:35 +0200

Reality check

Somehow, the libggi3d RFC sounds completely wrong to me.

I see ggi (and its siblings) as a way to give:
- security
- portability
- performance

by providing an abstract, but close-to-the-hardware, interface whose
implementation can be specialized for each video card.  Everything in
the API that can't be handled directly by the hardware has to be
"emulated" by the library, but even there the implementation can be
optimized for the card.  Which is better than, for instance, targeting
a generic VGA framebuffer.
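
As a rough sketch of what I mean -- the names below are invented, not
anything that exists in libggi today -- each visual would carry a
table of entry points that a card-specific driver overrides where its
hardware can help, with a generic software path filling in the rest:

  /* Hypothetical sketch, invented names. */
  struct ggi3d_ops {
      int (*draw_triangle)(void *vis, const float v0[3],
                           const float v1[3], const float v2[3]);
      int (*clear)(void *vis, unsigned int color);
  };

  /* Generic fallback: used when the driver has nothing better, but it
     can still be tuned for the card's framebuffer layout. */
  static int sw_draw_triangle(void *vis, const float v0[3],
                              const float v1[3], const float v2[3])
  {
      /* ... plain rasterizer writing to the mapped framebuffer ... */
      return 0;
  }

  static int sw_clear(void *vis, unsigned int color)
  {
      /* ... memset-style fill ... */
      return 0;
  }

  /* A card-specific driver overrides only what its hardware implements. */
  static int mycard_hw_triangle(void *vis, const float v0[3],
                                const float v1[3], const float v2[3])
  {
      /* ... feed the triangle to the card's command FIFO ... */
      return 0;
  }

  static void ggi3d_ops_init(struct ggi3d_ops *ops, int have_mycard)
  {
      ops->draw_triangle = sw_draw_triangle;
      ops->clear         = sw_clear;
      if (have_mycard)
          ops->draw_triangle = mycard_hw_triangle;  /* accelerated path */
  }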

Now, when I see the justification for libggi3d being a "nice middle
ground between letting a huge API do everything for you and coding
directly to KGI drivers for maximum speed", I think this has no place
in the GGI project per se.  Whether or not we need yet another 3D API
is open to debate, but that's not the subject at hand.  What libggi3d
*should* be is an abstract interface to the 3D rendering capabilities
of current video cards.  Nothing more, and nothing less.  This can be
split into more than one library if separating rendering, texture
management, geometry management et al. seems wise, but for each
library call a proponent should be able to point to an existing video
card that implements it in hardware.  If you don't do that, you'll
end up with yet another X.

On the other hand, coming up with an interface that does not support
half of what decent hardware provides in its area of operation is
futile.  It simply won't be used outside of toy or demo applications.
If a KGI driver writer sees that Mesa has to go directly to the
kernel interface to get significant performance, they will optimize
for that and *not* for libggi3d.

My NSHO[1]:

The current proposal is doomed.  Not specifically because of its
contents, but because of the way it has been put together.  That's
how you get things like "no clipping" or "no z-buffering".  If you
look at the Glide documentation you see that you can't do z-buffering
outside of triangle rendering.  This means that any additional
library providing it will have to trash libggi3d and redo everything.
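
To make that concrete -- this is from memory of the Glide 2.x docs,
so the details are approximate -- depth buffering there is state that
the triangle-draw call consumes; there is no separate "z-buffer this"
entry point you could bolt on afterwards:

  #include <glide.h>

  /* From memory of the Glide 2.x docs -- details approximate. */
  void draw_one_zbuffered_triangle(const GrVertex *a, const GrVertex *b,
                                   const GrVertex *c)
  {
      grDepthBufferMode(GR_DEPTHBUFFER_ZBUFFER);  /* z-buffering on...     */
      grDepthBufferFunction(GR_CMP_LESS);         /* ...with this compare  */
      grDepthMask(FXTRUE);                        /* allow depth writes    */

      /* The depth test and the depth write happen *inside* the triangle
         rendering; there is nothing like a standalone "z-buffer this
         region" call you could add later on top of a library that only
         draws triangles without it. */
      grDrawTriangle(a, b, c);
  }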

What should be done is to gather as many docs for existing hardware
as possible, choose an "area of expertise", say rendering, make a
list of what is supported in hardware in that area, and then define
an abstract library that encompasses everything on the list.  And if
anything supported in hardware is "split" by the library, for
instance having "graphic rendering" and "rendering with z-buffer" as
separate calls while it is a single command in hardware, then the
library is broken.
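
With invented names again: if the hardware does the depth test as
part of its triangle command, the library call has to look like the
first form below.  An API offering only the second, split form can't
be mapped back onto that hardware, and whoever needs z-buffering will
have to bypass the library.

  /* Matches the hardware: one call, z-test/z-write are part of it. */
  #define GGI3D_ZBUFFER 0x0001   /* depth test + write during the draw */
  int ggi3d_draw_triangle(void *vis, const float v0[3], const float v1[3],
                          const float v2[3], unsigned int flags);

  /* Broken split: no card exposes these as two separate operations,
     so no driver can implement the second call at all. */
  int ggi3d_render_triangle(void *vis, const float v0[3], const float v1[3],
                            const float v2[3]);
  int ggi3d_zbuffer_region(void *vis, int x, int y, int w, int h);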

Glide looks like a good window onto the Voodoo chips.  The Matrox
Millennium and Mystique docs are easy to get (but the G200 one isn't
available yet).  OpenGL is a direct peek at what SGI hardware
supports and at what a lot of new cards are, more or less, going to
support.  The online i740 documentation is enough to know what that
card can do.


This is not to be taken as a personal attack.  I'd just like to keep
you from wasting your time.  And BTW, if my view of ggi is wrong,
then please tell me so.  I'd hate to waste *my* time.

  OG.

[1] Not So Humble Opinion
