  From: Jon M. Taylor <taylorj@ecs.csus.edu>
  To  : ggi-develop@eskimo.com
  Date: Thu, 20 Aug 1998 21:26:41 -0700 (PDT)

Re: LibGGI3D RFC

On Thu, 20 Aug 1998, teunis wrote:

> On Thu, 20 Aug 1998, Jon M. Taylor wrote:
> 
> [clip]
> > > How can you draw a textured triangle if you don't pass to the drawing
> > > function the specific texture 
> > 
> > 	You would pass the *shader* the texture data.  Why does the
> > drawing function (which I will assume for the sake of argument is a
> > soft-rasterizer which calls the shader function for each pixel) need to
> > know that information?  As long as it can tell the shader about the pixel
> > coords/world coords/normal value/whatever other data it needs, the shader
> > should have all the info it needs to compute and return a shade for that
> > pixel.
> 
> AI!
> -this- is what you're talking about?  A per-pixel shader?

	The *option* of a per-pixel shader.  It is called for in some
cases, not in others.  The shader functions are just ways to return a
shade given some metadata.  How they are designed and used is
implementation-specific. 
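
	To make this concrete, something along these lines is all I have
in mind.  A rough sketch only, with invented names; this is not the
actual LibGGI3D API from the RFC:

/* Sketch of the shader-as-callback idea.  All names here are invented
 * for illustration.  The renderer hands the shader whatever per-pixel
 * data it has, plus an opaque pointer to the shader's own metadata
 * (texture, light list, whatever), and gets a shade back. */

typedef struct { float r, g, b; } shade_t;

typedef shade_t (*shader_fn)(float x, float y, float z,     /* world coords   */
                             float nx, float ny, float nz,  /* surface normal */
                             void *meta);                   /* shader-private metadata */

/* A soft-rasterizer scanning out a triangle would call it once per pixel:
 *
 *     shade_t s = shader(wx, wy, wz, nx, ny, nz, shader_meta);
 *
 * and a hardware target is free to never call it at all. */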

> *ouch*

	In many cases, yes.  I have done Gouraud shaders that work like
this, and yes, they are quite slow.  Them's the breaks.  It is a slow
algorithm.

> Do you have any idea how -slow- that is?  You'll make DirectX look like a
> supersonic jet next to libGGI3D's volkswagon beatle classic!

	Well, software is slow no matter how you slice it.  Ideally
everyone would use hardware that can shade.  But not everyone can.

> > > and, more important, the (u,v)
> > > coordinates of the texture inside the triangle ????
> > 
> > 	Wouldn't you need this only once per triangle patch?  If your
> > patch consists of 50 triangles, presumably you want to texture over the
> > whole patch, not one triangle at a time.  The u,v for each triangle could
> > be calculated on the fly in that case, couldn't it?
> 
> No.  you'd store the U,V once per each X,Y,Z in the triangle-patch.
> Recalculating U,V is a pain!  (and not always possible)

	OK, I will take your word for this, as I have never done it myself.

> NURBs made this sort of trick duable at all....

	I've never done full-on NURBS, just simple bezier curves and
surfaces.  NURBS was explained to me briefly once, but.... 

> > > Do you mean x,y,z are in real 3D coordinates and the library itself
> > > computes the 2D projection (hence the u,v coordinates) ?
> > 
> > 	That's what I had in mind.  After all, the texture is a 2D bitmap. 
> > Yes, you need u, v values, but only one u,v offset (and offset angle?) per
> > triangle patch.  Unless you want to tile more than one texture per patch
> > or do multitexturing, in which case you need more info.  But the info is
> > still per-patch, not per-triangle. 
> 
> hmm.  You use different rendering methods than me...  

	Possible.  I have seen a s---load of different ways to do texture
mapping floating around out there. 

> I precalc the U,V
> for each corner than interpolate....  

	"Shrinkwrapping", which is what my instructor called the projection
technique I describe below, is *supposed* to do the same thing.  Your
technique is a bit simpler - it doesn't require you to determine a
bounding volume for the surface and project the texture onto the inside of
that volume.  That can be quite tricky when objects get complex or have
concavities. And for surfaces with fewer triangles, it is probably faster
than shrinkwrapping.
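
	As I understand it, your method boils down to something like the
sketch below.  Names are invented and it is plain linear interpolation
with no perspective correction, so take it as an illustration only:

/* Per-vertex [u,v] storage and interpolation, as I understand it.
 * Invented names; linear (non-perspective-correct) interpolation only. */

struct uv_vertex {
    float x, y, z;   /* position */
    float u, v;      /* texture coords stored once per vertex */
};

/* Interpolate [u,v] inside a triangle at barycentric weights w0+w1+w2 == 1. */
static void lerp_uv(const struct uv_vertex *a, const struct uv_vertex *b,
                    const struct uv_vertex *c, float w0, float w1, float w2,
                    float *u, float *v)
{
    *u = w0 * a->u + w1 * b->u + w2 * c->u;
    *v = w0 * a->v + w1 * b->v + w2 * c->v;
}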

	But with shrinkwrapping, projecting inward from a spherical
mapping onto a polyhedron with 50 facets is no more expensive than
projection onto one with 5000 facets.  Your technique would choke to death
on all the interpolations with a surface that complex.  I just have to
project each facet's normal out to the spherical mapping and see what
pixel color is there.  And best of all, I don't have to store a [u, v]
pair for each vertex, which substantially increases your storage
requirements, especially when you are using floats for coordinates.
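
	The per-facet lookup itself is roughly this much work.  Again a
sketch with invented names; it assumes a unit-length normal and that the
spherical mapping has already been built, which is where the real cost
and trickiness live:

/* Spherical-mapping lookup, one texel per facet normal.  Invented names. */

#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

struct sphere_map { int w, h; unsigned char *texels; /* grayscale, for brevity */ };

static unsigned char sphere_lookup(const struct sphere_map *m,
                                   double nx, double ny, double nz)
{
    double theta = atan2(ny, nx);                 /* longitude,  -pi..pi */
    double phi   = acos(nz);                      /* colatitude,  0..pi  */
    int u = (int)((theta / (2.0 * M_PI) + 0.5) * (m->w - 1));
    int v = (int)((phi / M_PI) * (m->h - 1));
    return m->texels[v * m->w + u];
}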

	Each method has good and bad points.  Mine would probably be
unusable for an action game, where the surfaces are not that complex,
because the bounding volume/surface would have to be regenerated as the
vertices change position in relation to each other.  But yours would take
a year to render a static, highly-detailed scene.  Mine would choke on
complex object topologies, yours on complex object detail.  And I am sure
that there are many, many other texture mapping algorithms with their own
unique strengths and weaknesses.  It is a quite complex problem.

	This is why we need to keep the texture mapping (shading) system as
separate from the triangle coordinate representational scheme as possible. 
If precalculation, interpolation, etc. need to be done, fine.  But do it
in the renderer or the shader, so that other texture mapping techniques
aren't burdened with a representational scheme they don't use.  Keep the
other stuff in the metadata sets, where it can be anything the shader
needs.
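
	Concretely, the difference is just where the [u, v]s live.  Another
sketch with invented names:

/* The triangle representation stays dumb; a UV-interpolating shader
 * carries its per-vertex [u, v] table in its own metadata, and a
 * shrinkwrap shader's metadata holds a spherical map instead.  Neither
 * one burdens the other.  Invented names. */

struct tri {
    float v[3][3];          /* three x,y,z vertices and nothing else */
};

struct uv_shader_meta {
    const void *texture;    /* whatever texture object the shader uses */
    float       uv[3][2];   /* per-vertex [u, v] for this triangle     */
};

struct shrinkwrap_shader_meta {
    const void *sphere_map; /* prebuilt spherical mapping              */
};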

> (gurus?  There -were- some out there
> somewhere)

	Yes please |->.

> > 	I learned to do texture mapping in school by projecting the 2D
> > texture bitmap onto a 3D projection enclosing the surface patch to be
> > textured, and then projecting the texture inward onto the surface.  So you
> > have two mapping transforms: texture [u, v] to projection [s, h] and then
> > projection [s, h] to surface [x, y, z].  The transform [u, v] -> [s, h] is
> > where you can tile, scale, rotate, correct perspective, blend textures,
> > etc.  That's how I know how to texture map.
> 
> hmmm....  wait!  I don't use floating-point in U,V... Mayhaps that's where
> our algorithms differ (adding a subpixel fudgefactor's farely easy in
> integer interpolated triangle)...  hmmm perspective (below).  I think I'm
> gonna have to think a bit...

	That's one difference.

> > 	There are easier/faster methods, like creating a 2D Cartesian
> > mapping [u', v'] over the surface patch and then transforming [u, v] ->
> > [u', v'], but that loses perspective correction IIRC.  I doubt not that
> > there are a billion other ways to texture map.  I'm just going by what I
> > know, and with what I know you don't map [u, v] directly to the surface,
> > and as such you don't need per-triangle [u, v] offsets.  That's done in
> > the projection step.
> 
> There's other ways of fixing up perspective-correction too...  I'll take a
> peek at how the HW people do it...  Maybe basing the lib on a hw-reference
> WOULD be the best way?

	I don't want to base the lib on ANY one single way!  If LibGGI3D
cannot accommodate a particular technique, or if it has to haul around
metadata ([u, v] and friends) that won't be used in a particular
implementation, that means that the LibGGI3D design is getting into
implementation issues.  I emphatically do NOT want to bind LibGGI3D's
design to any particular rendering or shading implementations.  That way
lies OpenGL.

	I only want these base concepts in LibGGI3D: 3space, triangles,
triangle sets with metadata, and shaders with metadata.  That is IT. 
Maybe voxels and 3d lines if people want.  But no implementation
specifics!  LibGGI3D should be simple and flexible enough to use with
*any* shading or rendering technique.  If it isn't, the solution is to
make its design more flexible, not hard-code any particular implementation
into the API.
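
	In other words, roughly this much and no more.  Sketch only, and
none of these names are from the RFC or in any way final:

/* The entire conceptual surface I am after.  Invented names. */

typedef struct { float x, y, z; } ggi3d_point_t;            /* 3space       */

typedef struct { ggi3d_point_t v[3]; } ggi3d_triangle_t;    /* triangle     */

typedef struct {                                            /* triangle set */
    ggi3d_triangle_t *tris;
    int               ntris;
    void             *meta;     /* anything the renderer/shader needs */
} ggi3d_triset_t;

typedef struct { float r, g, b; } ggi3d_shade_t;

typedef ggi3d_shade_t (*ggi3d_shader_t)(const ggi3d_point_t *pos,    /* shader */
                                        const ggi3d_point_t *normal,
                                        void *meta);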

Jon


---
'Cloning and the reprogramming of DNA is the first serious step in 
becoming one with God.'
	- Scientist G. Richard Seed
