  From: teunis <teunis@computersupportcentre.com>
  To  : ggi-develop@eskimo.com
  Date: Thu, 20 Aug 1998 20:15:45 -0700 (MST)

Re: LibGGI3D RFC

On Thu, 20 Aug 1998, Jon M. Taylor wrote:

[clip]
> > How can you draw a textured triangle if you don't pass to the drawing
> > function the specific texture 
> 
> 	You would pass the *shader* the texture data.  Why does the
> drawing function (which I will assume for the sake of argument is a
> soft-rasterizer which calls the shader function for each pixel) need to
> know that information?  As long as it can tell the shader about the pixel
> coords/world coords/normal value/whatever other data it needs, the shader
> should have all the info it needs to compute and return a shade for that
> pixel.

AI!
-this- is what you're talking about?  A per-pixel shader?
*ouch*

Do you have any idea how -slow- that is?  You'll make DirectX look like a
supersonic jet next to libGGI3D's Volkswagen Beetle classic!
(yes - the jet is completely useless on a highway.  But it's -FAST- :)
[no I don't like Direct3D]
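
Just so we're picturing the same thing - a per-pixel shader hook would look
roughly like this (sketch only; the names, structs and signature are mine,
NOT the RFC's):

#include <stdint.h>

/* hypothetical per-pixel shader hook - illustration, not the RFC API */
typedef struct {
    int   x, y;           /* screen coords */
    float wx, wy, wz;     /* world coords */
    float nx, ny, nz;     /* surface normal */
} shade_input;

typedef uint32_t (*shader_fn)(const shade_input *in, void *shader_data);

/* the soft-rasterizer ends up making one indirect call per pixel: */
static void fill_span(uint32_t *row, int y, int x0, int x1,
                      shader_fn shade, void *data)
{
    shade_input in;
    int x;

    for (x = x0; x <= x1; x++) {
        in.x = x;  in.y = y;
        in.wx = in.wy = in.wz = 0.0f;   /* real code interpolates these */
        in.nx = 0.0f;  in.ny = 0.0f;  in.nz = 1.0f;
        row[x] = shade(&in, data);      /* a function call for EVERY pixel */
    }
}

That indirect call plus the parameter shuffling, once per pixel, is what
kills you compared to an inner loop that just steps and writes.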

> > and, more important, the (u,v)
> > coordinates of the texture inside the triangle ????
> 
> 	Wouldn't you need this only once per triangle patch?  If your
> patch consists of 50 triangles, presumably you want to texture over the
> whole patch, not one triangle at a time.  The u,v for each triangle could
> be calculated on the fly in that case, couldn't it?

No.  You'd store the U,V once for each X,Y,Z in the triangle-patch.
Recalculating U,V is a pain!  (and not always possible)
NURBS are what made this sort of trick doable at all....
... wait a sec... (see below)
(I'm wrong but I'll come back here later....  I use a different
rendering/shading system [actually that one with 10000 functions?  I've
already gotten it written :] [but it needs re-writing!])
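
Concretely, what I mean by storing it (my own struct for illustration,
nothing to do with the RFC's types):

/* per-vertex U,V storage - sketch only */
typedef struct {
    float x, y, z;        /* position */
    float u, v;           /* texture coords, assigned once, up front */
} patch_vertex;

typedef struct {
    patch_vertex *verts;      /* shared vertex pool for the patch */
    int         (*tris)[3];   /* each triangle = 3 indices into verts */
    int           nverts, ntris;
} tri_patch;

The rasterizer then only ever interpolates u,v between three corners;
nothing has to be recomputed per triangle.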

> > Do you mean x,y,z are in real 3D coordinates and the library itself
> computes the 2D projection (hence the u,v coordinates)?
> 
> 	That's what I had in mind.  After all, the texture is a 2D bitmap. 
> Yes, you need u, v values, but only one u,v offset (and offset angle?) per
> triangle patch.  Unless you want to tile more than one texture per patch
> or do multitexturing, in which case you need more info.  But the info is
> still per-patch, not per-triangle. 

hmm.  You use different rendering methods than I do...  I precalc the U,V
for each corner, then interpolate....  (gurus?  There -were- some out there
somewhere)
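
Roughly like this, edge setup hand-waved away (floating-point version,
names mine):

#include <stdint.h>

/* interpolate precalculated corner U,V along one scanline - sketch only */
static void texture_span(uint32_t *dst, const uint32_t *tex, int tex_w,
                         int x0, int x1,
                         float u0, float v0, float u1, float v1)
{
    int   n  = (x1 > x0) ? (x1 - x0) : 1;
    float du = (u1 - u0) / n;
    float dv = (v1 - v0) / n;
    float u  = u0, v = v0;
    int   x;

    for (x = x0; x <= x1; x++) {
        dst[x] = tex[(int)v * tex_w + (int)u];
        u += du;
        v += dv;
    }
}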

> 	I learned to do texture mapping in school by projecting the 2D
> texture bitmap onto a 3D projection enclosing the surface patch to be
> textured, and then projecting the texture inward onto the surface.  So you
> have two mapping transforms: texture [u, v] to projection [s, h] and then
> projection [s, h] to surface [x, y, z].  The transform [u, v] -> [s, h] is
> where you can tile, scale, rotate, correct perspective, blend textures,
> etc.  That's how I know how to texture map.

hmmm....  wait!  I don't use floating-point in U,V... Mayhaps that's where
our algorithms differ (adding a subpixel fudge factor is fairly easy in an
integer-interpolated triangle)...  hmmm, perspective (below).  I think I'm
gonna have to think a bit...
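
(by integer I mean something like 16.16 fixed point, with the subpixel
fudge folded into the start values - my own habit, sketch only:)

#include <stdint.h>

#define FIX       16
#define FTOFIX(f) ((int32_t)((f) * (1 << FIX)))

/* 16.16 fixed-point U,V stepping along a scanline */
static void texture_span_fix(uint32_t *dst, const uint32_t *tex, int tex_w,
                             int x0, int x1,
                             float u0, float v0, float du, float dv)
{
    int32_t u  = FTOFIX(u0) + FTOFIX(du) / 2;   /* subpixel fudge */
    int32_t v  = FTOFIX(v0) + FTOFIX(dv) / 2;
    int32_t us = FTOFIX(du), vs = FTOFIX(dv);
    int     x;

    for (x = x0; x <= x1; x++) {
        dst[x] = tex[(v >> FIX) * tex_w + (u >> FIX)];
        u += us;
        v += vs;
    }
}

All adds and shifts in the inner loop, which is why I stick to it.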

> 	There are easier/faster methods, like creating a 2D Cartesian
> mapping [u', v'] over the surface patch and then transforming [u, v] ->
> [u', v'], but that loses perspective correction IIRC.  I doubt not that
> there are a billion other ways to texture map.  I'm just going by what I
> know, and with what I know you don't map [u, v] directly to the surface,
> and as such you don't need per-triangle [u, v] offsets.  That's done in
> the projection step.

There are other ways of fixing up perspective correction too...  I'll take a
peek at how the HW people do it...  Maybe basing the lib on a HW reference
WOULD be the best way?
(I've got S3-ViRGE + dox..  I'll take a peek at Laguna dox a little later :)
[but those are -primitive- HW systems]
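
(the usual trick, as far as I know, is to interpolate u/z, v/z and 1/z
linearly in screen space and divide back per pixel - sketch, all the names
mine:)

#include <stdint.h>

/* perspective-correct span, the way I understand the HW does it */
static void persp_span(uint32_t *dst, const uint32_t *tex, int tex_w,
                       int x0, int x1,
                       float uz0, float vz0, float iz0,  /* u/z, v/z, 1/z at x0 */
                       float uz1, float vz1, float iz1)  /* same at x1 */
{
    int   n   = (x1 > x0) ? (x1 - x0) : 1;
    float duz = (uz1 - uz0) / n;
    float dvz = (vz1 - vz0) / n;
    float diz = (iz1 - iz0) / n;
    float uz  = uz0, vz = vz0, iz = iz0;
    int   x;

    for (x = x0; x <= x1; x++) {
        float z = 1.0f / iz;                 /* the expensive bit */
        dst[x] = tex[(int)(vz * z) * tex_w + (int)(uz * z)];
        uz += duz;  vz += dvz;  iz += diz;
    }
}

The common cheat is to only do the divide every 8 or 16 pixels and
interpolate linearly in between.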

G'day, eh? :)
	- Teunis
