Index: [thread] [date] [subject] [author]
  From: Jon M. Taylor <taylorj@ecs.csus.edu>
  To  : ggi-develop@eskimo.com
  Date: Wed, 19 Aug 1998 15:17:29 -0700 (PDT)

Re: LibGGI3D RFC

On Wed, 19 Aug 1998, Rodolphe Ortalo wrote:

> [I don't quote anything from former posts, because I guess
>  everyone read them, and they are long...]
> 
> Well, first, let's say I'm glad of the way this RFC was written and
> commented. I'm also glad that Jon did this first try... (In other words,
> thanks for the work ;-)
> 
> 
> I tend to agree with the overall design of Jon. And I also tend to
> agree with the comments Scott McNab made.
> 
> To summarize my views (with Jon's proposal as the basis):
>  - I don't really feel the need for having a notion of camera in
> libggi3d. I would be happy with simple 2D parameters in libggi3d
> (in fact, projected 2D parms, plus 1/Z, U,V, etc...)

	I'm willing to consider having a projected 2D (2.5D, as I call it)
coordinate system *option*.  But some people still need true 3D, and some
hardware will need it.  As for U, V texture coordinates - those are
strictly a texture-shader issue.
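	To make the 2.5D idea concrete, here is a small C sketch of what a
projected-2D vertex might carry.  The struct and function names are purely
illustrative - nothing here is from any LibGGI3D draft - but they show why
1/Z comes along with the projected coordinates: perspective-correct
interpolation works on U/Z and 1/Z, then divides.

```c
/* Hypothetical 2.5D vertex: screen-space position plus the
 * perspective terms needed for correct interpolation.  All
 * names are illustrative only, not a proposed LibGGI3D API. */
typedef struct {
    float x, y;        /* projected screen coordinates */
    float one_over_z;  /* 1/Z, for depth testing and perspective */
    float u_over_z;    /* U/Z: texture coords interpolated pre-divided */
    float v_over_z;    /* V/Z */
} ggi3d_vertex25;

static float lerp(float a, float b, float t) { return a + t * (b - a); }

/* Perspective-correct interpolation of U between two vertices:
 * interpolate U/Z and 1/Z linearly in screen space, then divide. */
static float interp_u(const ggi3d_vertex25 *a, const ggi3d_vertex25 *b,
                      float t)
{
    float uz = lerp(a->u_over_z, b->u_over_z, t);
    float iz = lerp(a->one_over_z, b->one_over_z, t);
    return uz / iz;
}
```

A target that only ever rasterizes could consume vertices like these
directly, while a true-3D path would project down to them first.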

>  - I think that Z-buffer wouldn't be so annoying to manage.
> Jon does not seem to have the same view, so, could you point the things
> that worry you on this point Jon ?

	It would be annoying to *rely on*.  Of course it should be
*possible* to render into a depth buffer (z-buffer is not a type of
buffer, it is a rasterization algorithm).  This depth buffer could either
be a secondary buffer which would be used to render to the hardware on a
flush (dump to hardware depth buffer, rasterize to 2D buffer, etc) or a
hardware DirectDepthBuffer which could be drawn into directly.  Same as
LibGGI2D - either you render to an offscreen buffer, you render to a
DirectBuffer, or you just render shapes to a display target and let it
take care of things.
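	As a purely software illustration of what "rendering into a depth
buffer" means here (no real LibGGI(3D) interface is implied), a
depth-tested pixel write could look like this:

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative software framebuffer with a depth plane.
 * Everything here is hypothetical, not LibGGI(3D) API. */
typedef struct {
    int width, height;
    float *depth;      /* one float per pixel, smaller == nearer */
    uint32_t *color;
} sw_framebuffer;

/* Plot a pixel only if it passes the depth test; returns 1 if
 * the pixel was actually written. */
static int depth_plot(sw_framebuffer *fb, int x, int y,
                      float z, uint32_t rgba)
{
    size_t i;
    if (x < 0 || y < 0 || x >= fb->width || y >= fb->height)
        return 0;
    i = (size_t)y * fb->width + x;
    if (z >= fb->depth[i])
        return 0;          /* occluded: something nearer is there */
    fb->depth[i] = z;
    fb->color[i] = rgba;
    return 1;
}
```

A secondary buffer like this could be flushed to a hardware depth buffer,
while a DirectDepthBuffer target would expose the hardware plane itself.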

	As for whether this depth buffer stuff should be part of LibGGI3D
or in a LibGGI3D extension or whatever - I don't know the LibGGI system
well enough to say.  All I know is that it should be an *option*.  If
someone who knows LibGGI(2D) better than I do would like to comment,
please do so. 

>  - I also think alpha management could be very useful and not so
> difficult... (Same comment as previous point.)

	Same reply as previous point |->

>  - I want clipping in libggi3d also.

	Clipping is only an issue with true 3D, not 2.5D.  So we do it the
same as LibGGI2D, with the coordinate system clipping settings, etc.,
tied to the visual.

>  - I am not yet completely clear on how you want to manage textures?

	I don't.  Texturing is a shader issue.  The code that uses 
LibGGI3D with texturing shaders needs to do the texture management.  

> What about texture cache management also ? You associate that to
> a particular shader, that's the idea ? 

	Yep.  If the hardware has a texture caching system, the target
will handle it.  If not, it can build a software cache or let the higher
layers do their own texture caching and use a specialized texturing shader
that knows how to work with that caching.  Whatever. 
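	A toy version of such a software texture cache - hypothetical
throughout, with no LibGGI interface implied - might keep recently used
textures resident and evict least-recently-used slots on a miss:

```c
/* Toy software texture cache, as a specialized texturing shader
 * might keep one.  Purely illustrative names and sizes. */
#define CACHE_SLOTS 4

typedef struct {
    int id;            /* texture id, -1 == empty slot */
    unsigned stamp;    /* last-use counter, for LRU eviction */
} tex_slot;

typedef struct {
    tex_slot slot[CACHE_SLOTS];
    unsigned clock;
    int misses;        /* each miss would trigger a texture upload */
} tex_cache;

static void cache_init(tex_cache *c)
{
    int i;
    for (i = 0; i < CACHE_SLOTS; i++) {
        c->slot[i].id = -1;
        c->slot[i].stamp = 0;
    }
    c->clock = 0;
    c->misses = 0;
}

/* Return the slot holding texture `id`, evicting the LRU slot
 * (and counting a miss) when it is not resident. */
static int cache_fetch(tex_cache *c, int id)
{
    int i, lru = 0;
    c->clock++;
    for (i = 0; i < CACHE_SLOTS; i++) {
        if (c->slot[i].id == id) {
            c->slot[i].stamp = c->clock;
            return i;
        }
        if (c->slot[i].stamp < c->slot[lru].stamp)
            lru = i;
    }
    c->misses++;               /* would upload texture data here */
    c->slot[lru].id = id;
    c->slot[lru].stamp = c->clock;
    return lru;
}
```

Whether this lives in the target or in a specialized shader is exactly the
kind of decision LibGGI3D itself should stay out of.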

> BTW, I tend to like that idea
> of registering & using shaders like you proposed.

	I learned this last spring in my 3D graphics class.  We used a
prebuilt system called TUGS (The Universal Graphics System), which was a
modeling and rendering pipeline designed for teaching.  It was all done in
Ada (yech!), but one of the nice things about Ada was that you could
overload modules much like LibGGI does, which let us recode chunks of the
TUGS code ourselves for learning purposes while still preserving the rest
of the stock system.

	Anyway, TUGS had pluggable shaders, which is where I got the idea. 
TUGS also had material codes (RGB reflection/refraction coefficients for
diffuse and specular reflection) independent of the shaders, but that was
essentially a stock "material code shader" and as such it didn't really
need to be separate.  I wrote a basic Phong-model shader in this system.
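	In C, the pluggable-shader idea could be modeled with a small
function-pointer registry.  The sketch below is neither TUGS nor any
proposed LibGGI3D code - every name in it is made up - but it shows how a
target could swap a software shader for a hardware-accelerated one under
the same name:

```c
#include <string.h>
#include <stddef.h>

/* Hypothetical shader registry: a shader is a function that
 * evaluates a color for a fragment, plus opaque per-shader state. */
typedef struct { float r, g, b; } color3;
typedef struct { float nx, ny, nz; float u, v; } fragment;
typedef color3 (*shader_fn)(const fragment *frag, void *state);

#define MAX_SHADERS 16
static struct {
    const char *name;
    shader_fn   fn;
    void       *state;
} shader_table[MAX_SHADERS];
static int shader_count = 0;

/* Register a shader under a name; returns its slot, or -1 if full. */
static int register_shader(const char *name, shader_fn fn, void *state)
{
    if (shader_count >= MAX_SHADERS)
        return -1;
    shader_table[shader_count].name  = name;
    shader_table[shader_count].fn    = fn;
    shader_table[shader_count].state = state;
    return shader_count++;
}

/* Look a shader up by name, e.g. when a target wants to substitute
 * its own accelerated implementation. */
static shader_fn lookup_shader(const char *name)
{
    int i;
    for (i = 0; i < shader_count; i++)
        if (strcmp(shader_table[i].name, name) == 0)
            return shader_table[i].fn;
    return NULL;
}

/* A trivial flat shader, as an example plug-in. */
static color3 flat_white(const fragment *frag, void *state)
{
    color3 c = {1.0f, 1.0f, 1.0f};
    (void)frag; (void)state;
    return c;
}
```

The point is that LibGGI3D would only define the shader signature and the
registration mechanism; what a shader does with U, V, materials, or
textures stays entirely its own business.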

> Sorry for stating opinions without explaining them, but I lack
> time currently, and I don't really want to know why we should
> have this or this feature, I would like to know why we should
> NOT have this or this feature...

	I'm not saying we shouldn't have these features as *options*, but
they should NOT be an integral part of the LibGGI3D representational
system.  We need to stay abstract, simple and flexible.  Save the
complexity for the rendering targets. 

> These are only direct opinions of course. I know these opinions are
> biased a lot by the specific hardware I've worked with...
> Once again, let me mention the fact that the Cirrus Logic Laguna 3D
> (CL-GD5464 or CL-GD5465) is pretty clear on these issues and that
> the documentation _is_ available:
>  (6.8 MB) http://www.cirrus.com/ftp/pubs/gd5465trm.pdf
> (see also: http://www.cirrus.com/products/overviews/gd546x.html
>  for general info)
> 
> A side effect of this hardware background is: what about fog,
> transparency ? (other shaders ?)

	Same as it ever was.  Do it in software or in hardware depending
on the presence of HW support.  The higher layers pass whatever info is
needed to the shaders.

> It's not the Voodoo or the G200, but it may be useful as I find in
> this chipset a clear mapping with libggi3d as currently discussed.

	Same with Voodoo.

> Maybe the next step is to try to propose an API no ? (Sure, it will
> not be the final API, but it will delimit the target.)

	Yes.  I'll need a bit of help from those who are more knowledgeable
about LibGGI(2D) and extensions than I.  I'll post something to the list 
soon, probably in an updated RFC.

Jon

---
'Cloning and the reprogramming of DNA is the first serious step in 
becoming one with God.'
	- Scientist G. Richard Seed
