  From: Jon M. Taylor <taylorj@ecs.csus.edu>
  To  : ggi-develop@eskimo.com
  Date: Thu, 20 Aug 1998 18:51:14 -0700 (PDT)

Re: LibGGI3D RFC

On Fri, 21 Aug 1998, Rodolphe Ortalo wrote:

> 
> 
> On Thu, 20 Aug 1998, teunis wrote:
> > Does this message help any?  Are we communicating or arguing about
> > completely different topics thinking they're the same? :)
> 
> We are communicating, don't worry... ;-) In fact, your 3d_tridata
> example showed me how we could pass more things through a single
> function (hence clarifying the objective of using display targets...)

	That is what I am after.  Give the target the necessary info and
let *it* make the rendering decisions.  That is why LibGGI3D will be quite
small - almost all of the 3D "guts" will reside in the display targets. 
All we need to do is concentrate on finding the cleanest, simplest, most
flexible way to pass the necessary information around.  All interface, no
implementation - exactly the way a good API should be.
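
	Just to make that concrete, here is the rough shape of the
interface I am picturing.  This is purely illustrative - none of these
type or function names are meant to be the actual LibGGI3D interface from
the RFC:

/* Illustrative only - not the real LibGGI3D types or names. */
typedef struct {
        float x, y, z;          /* position */
        float r, g, b, a;       /* per-vertex color */
        float u, v;             /* texture coordinates, if any */
} ggi3d_vertex_t;

typedef struct {
        ggi3d_vertex_t v[3];
} ggi3d_triangle_t;

/* Opaque shader handle - the target knows what is behind it. */
typedef struct ggi3d_shader ggi3d_shader_t;

/*
 * The library entry point stays tiny: hand the target the triangle and a
 * shader, and let the target decide how to render it.
 */
int ggi3dDrawTriangle(void *vis /* the visual, however we end up passing it */,
                      const ggi3d_triangle_t *tri,
                      ggi3d_shader_t *shader);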

> You know, even if I play the devil's advocate, I would really _like_
> to be able to use ggi3dDrawTriangle _only, and then have some
> setup function where I can (possibly dynamically) reconfigure the
> engine so that it uses gouraud, textured, etc.

	That could be done instead of using pluggable shaders, but you'd
end up with 1000000 hyper-specialized Draw3dTriangleWithMultitexture
BlendingAndGouraudShadingToVoodoo type functions, even though they'd all
be called ggi3dDrawTriangle().  Every possible permutation of rendering,
shading, texturing, lighting, etc. would need to be handled individually.
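
	Just to illustrate the point: with a single reconfigurable entry
point, the code behind it tends to end up looking something like this
(reusing the ggi3d_triangle_t from the sketch above; all the names here
are still made up):

typedef enum { SHADE_FLAT, SHADE_GOURAUD, SHADE_PHONG } shade_mode_t;
typedef enum { TEX_NONE, TEX_SINGLE, TEX_MULTI } tex_mode_t;

static shade_mode_t cur_shade;  /* set by some setup/reconfigure call */
static tex_mode_t   cur_tex;

void ggi3dDrawTriangle(const ggi3d_triangle_t *tri)
{
        (void)tri;      /* each path below would consume it */

        switch (cur_shade) {
        case SHADE_GOURAUD:
                if (cur_tex == TEX_MULTI) {
                        /* Gouraud + multitexture path */
                } else if (cur_tex == TEX_SINGLE) {
                        /* Gouraud + single-texture path */
                } else {
                        /* Gouraud, untextured */
                }
                break;
        case SHADE_PHONG:
                /* Phong times every texture mode, all over again... */
                break;
        case SHADE_FLAT:
                /* ...and so on.  Add lighting, blending and per-target
                 * variants and the cross product just keeps growing. */
                break;
        }
}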

	See, the problem is that rendering and shading don't always go
together.  If I have a z-buffer rendering ggi3dDrawTriangle() function, I
can use that same z-buffer rendering code to do flat shading, Phong
shading, or Gouraud shading (and other shaders as well) just by swapping
out the shader function.  If, OTOH, I happen to be targeting Glide, I can
overload the ggi3dDrawTriangle() function to use hardware shading if
present (in which case the shader function would not be used) or call out
to another software renderer if the hardware couldn't do
SHADE_RADIOSITY_TEXTURE or whatever.
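
	In code, the idea looks roughly like this.  Again, this is just a
sketch to show the mechanism - the real interface will differ, and a real
renderer would also do clipping, perspective correction and so on.  I am
using a stripped-down vertex here for brevity:

#include <math.h>
#include <stdint.h>

typedef struct {
        float x, y, z;          /* screen-space position and depth */
        float r, g, b;          /* per-vertex color, 0..1 */
} vertex_t;

typedef struct {
        vertex_t v[3];
} triangle_t;

/* A shader plug-in: turn barycentric weights into a packed pixel. */
typedef uint32_t (*shader_fn)(const triangle_t *t,
                              float w0, float w1, float w2);

static uint32_t pack(float r, float g, float b)
{
        return ((uint32_t)(r * 255.0f) << 16) |
               ((uint32_t)(g * 255.0f) <<  8) |
                (uint32_t)(b * 255.0f);
}

/* Flat shading: ignore the weights, use the first vertex's color. */
uint32_t shade_flat(const triangle_t *t, float w0, float w1, float w2)
{
        (void)w0; (void)w1; (void)w2;
        return pack(t->v[0].r, t->v[0].g, t->v[0].b);
}

/* Gouraud shading: interpolate the vertex colors. */
uint32_t shade_gouraud(const triangle_t *t, float w0, float w1, float w2)
{
        return pack(w0*t->v[0].r + w1*t->v[1].r + w2*t->v[2].r,
                    w0*t->v[0].g + w1*t->v[1].g + w2*t->v[2].g,
                    w0*t->v[0].b + w1*t->v[1].b + w2*t->v[2].b);
}

static float edge(float ax, float ay, float bx, float by, float px, float py)
{
        return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
}

/*
 * One generic z-buffered rasterizer.  The shading method is a plug-in:
 * pass shade_flat, shade_gouraud, or anything else with that signature,
 * and this rendering code never has to change.
 */
void draw_triangle_zbuffer(uint32_t *fb, float *zbuf, int width, int height,
                           const triangle_t *t, shader_fn shade)
{
        const vertex_t *a = &t->v[0], *b = &t->v[1], *c = &t->v[2];
        float area = edge(a->x, a->y, b->x, b->y, c->x, c->y);

        if (area == 0.0f)
                return;                         /* degenerate triangle */

        /* Bounding box, clipped to the framebuffer. */
        int minx = (int)fmaxf(0.0f, floorf(fminf(fminf(a->x, b->x), c->x)));
        int miny = (int)fmaxf(0.0f, floorf(fminf(fminf(a->y, b->y), c->y)));
        int maxx = (int)fminf(width - 1.0f,
                              ceilf(fmaxf(fmaxf(a->x, b->x), c->x)));
        int maxy = (int)fminf(height - 1.0f,
                              ceilf(fmaxf(fmaxf(a->y, b->y), c->y)));

        for (int y = miny; y <= maxy; y++) {
                for (int x = minx; x <= maxx; x++) {
                        float px = x + 0.5f, py = y + 0.5f;
                        float w0 = edge(b->x, b->y, c->x, c->y, px, py) / area;
                        float w1 = edge(c->x, c->y, a->x, a->y, px, py) / area;
                        float w2 = 1.0f - w0 - w1;

                        if (w0 < 0.0f || w1 < 0.0f || w2 < 0.0f)
                                continue;       /* outside the triangle */

                        float z = w0*a->z + w1*b->z + w2*c->z;
                        int i = y * width + x;

                        if (z < zbuf[i]) {      /* depth test */
                                zbuf[i] = z;
                                fb[i] = shade(t, w0, w1, w2);
                        }
                }
        }
}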

	This lets you cleanly abstract the rendering and shading methods
from each other, and from the hardware and software implementations of
each.  The DrawTriangle() code and the target code can pull together
various prebuilt rendering and shading functions as needed (with some
custom glue code if necessary) to create a complete shading and rendering
solution appropriate to the given hardware, shading method(s), rendering
method(s) and whatever other info is pertinent.
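
	The target-level glue on top of that sketch can then stay very
small.  Something along these lines (appended to the sketch above, so it
reuses those types and functions; again, every name here is made up):

typedef enum { GGI3D_SHADE_FLAT, GGI3D_SHADE_GOURAUD } ggi3d_shade_mode;

struct target_state {
        uint32_t *fb;
        float    *zbuf;
        int       width, height;
        int       have_hw_gouraud;      /* e.g. a Glide target would set this */
};

/* What a target's DrawTriangle hook might boil down to. */
void target_draw_triangle(struct target_state *ts, const triangle_t *t,
                          ggi3d_shade_mode mode)
{
        if (mode == GGI3D_SHADE_GOURAUD && ts->have_hw_gouraud) {
                /* Hand the triangle straight to the hardware
                 * (the hardware glue code is omitted here). */
                return;
        }

        /* Otherwise compose a prebuilt renderer with a prebuilt shader. */
        draw_triangle_zbuffer(ts->fb, ts->zbuf, ts->width, ts->height, t,
                              mode == GGI3D_SHADE_GOURAUD ? shade_gouraud
                                                          : shade_flat);
}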

	Now, this is not a 100% flawless solution.  The modularity of the
system does limit optimization potential to some degree.  A 100%
custom-built code path from DrawTriangle to the KGI driver (which is what
it sounded like you were suggesting) *is* the absolute best way to go. 
But it would also be a HUGE coding job, generate bloat, and take forever. 
That kind of hyper-optimization is IMHO better left for later, after we
have a working base.

	My pluggable shaders scheme should work Good Enough(tm) in most
situations, especially when we are just getting started (i.e. now) and
getting lots of working code out there quickly is more important than
doing things 100% optimally from the very beginning.  As LibGGI3D matures,
people will undoubtedly start to say things like "you know, using z-buffer
rendering with a Gouraud shader plug-in just isn't as fast with ABC video
chipsets as if we had a specialized ABC-goraud rendering function.  Let's
write one and put it in the ABC display target".

	Thus, over time the generalized, modular system will get replaced
by more and more specific-case rendering/shading DrawTriangle() functions. 
IMHO this sort of evolutionary approach is the way to go, and LibGGI makes
it quite easy to do.  Get it working first for the general case(s), THEN
optimize for specific cases.  Often, implementing the general case
solutions will enable you to get a clearer picture of the nature of the
specific-case optimization needs, and you might end up being able to
implement the specific-case optimizations in a better way than if you had
taken that path from the beginning.

	This relates back to the "why don't you just use Mesa"
discussions.  Just as I am not saying specific-case rendering functions
are wrong, I am not saying that accelerating Mesa on a specific-case basis
is wrong.  They are not wrong, but they *are* specific-case optimizations,
and those should ALWAYS come after the general case(s) have been taken
care of.  IMHO. 

> But still, I want to see the details...

	Hope this helps.

Jon

---
'Cloning and the reprogramming of DNA is the first serious step in 
becoming one with God.'
	- Scientist G. Richard Seed

