  From: teunis <teunis@computersupportcentre.com>
  To  : ggi-develop@eskimo.com
  Date: Fri, 21 Aug 1998 12:35:09 -0700 (MST)

Re: LibGGI3D RFC

On Thu, 20 Aug 1998, Jon M. Taylor wrote:

> On Thu, 20 Aug 1998, teunis wrote:
> 
> > On Thu, 20 Aug 1998, Jon M. Taylor wrote:
> > 
> > [clip]
> > > > How can you draw a textured triangle if you don't pass to the drawing
> > > > function the specific texture 
> > > 
> > > 	You would pass the *shader* the texture data.  Why does the
> > > drawing function (which I will assume for the sake of argument is a
> > > soft-rasterizer which calls the shader function for each pixel) need to
> > > know that information?  As long as it can tell the shader about the pixel
> > > coords/world coords/normal value/whatever other data it needs, the shader
> > > should have all the info it needs to compute and return a shade for that
> > > pixel.
> > 
> > AI!
> > -this- is what you're talking about?  A per-pixel shader?
> 
> 	The *option* of a per-pixel shader.  It is called for in some
> cases, not in others.  The shader functions are just ways to return a
> shade given some metadata.  How they are designed and used is
> implementation-specific. 
> 
> > *ouch*
> 
> 	In many cases, yes.  I have done Gouraud shaders that work like
> this, and yes they are quite slow.  Them's the breaks.  It is a slow
> algorithm.
> 
> > Do you have any idea how -slow- that is?  You'll make DirectX look like a
> > supersonic jet next to libGGI3D's Volkswagen Beetle classic!
> 
> 	Well, software is slow no matter how you slice it.  Ideally
> everyone would use hardware that can shade.  But not everyone can.

Hmm..  I've been looking at how OpenGL handles things and this actually
seems doable :)
[I see the weaknesses in OpenGL 1.0...  now to check against OpenGL 1.2]
[and perhaps RenderMan - but that's not really built for efficiency AFAIK.
POV-Ray certainly isn't!]
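
Just to check I've got the shader idea right, here's roughly the shape I
picture - every name below is invented for discussion, NOT actual
LibGGI3D API:

#include <stdint.h>

/* Rough sketch of the per-pixel shader idea as I understand it.
 * All names are made up for illustration. */
typedef struct {
	int x, y;            /* pixel coordinates */
	float wx, wy, wz;    /* world coordinates */
	float nx, ny, nz;    /* surface normal */
	void *meta;          /* shader-private data (texture, etc.) */
} shade_args;

typedef uint32_t (*shader_fn)(const shade_args *args);

/* The soft-rasterizer would call the shader once per covered pixel: */
static void raster_span(int y, int x0, int x1, shader_fn shade,
                        shade_args *args, uint32_t *fb, int pitch)
{
	int x;
	for (x = x0; x <= x1; x++) {
		args->x = x;
		args->y = y;
		/* ...interpolate world coords / normal here... */
		fb[y * pitch + x] = shade(args);
	}
}

If that's the idea, then everything texture-ish hides behind the shader's
meta pointer and the rasterizer never needs to know about it.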

> > > > and, more important, the (u,v)
> > > > coordinates of the texture inside the triangle ????
> > > 
> > > 	Wouldn't you need this only once per triangle patch?  If your
> > > patch consists of 50 triangles, presumably you want to texture over the
> > > whole patch, not one triangle at a time.  The u,v for each triangle could
> > > be calculated on the fly in that case, couldn't it?
> > 
> > No.  you'd store the U,V once per each X,Y,Z in the triangle-patch.
> > Recalculating U,V is a pain!  (and not always possible)
> 
> 	OK, I will take your word for this as I have never done it myself.

*heh*.  Looks like there are standard re-calc functions... (plural).  But
still precalculating is a lot faster...  (I precalculate)
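
[By "precalculate" I just mean each corner carries its own texture
coordinate - something like this; my own naming, purely illustrative:]

/* Each corner stores its own precalculated U,V alongside X,Y,Z,
 * so nothing is ever recalculated at render time. */
typedef struct {
	float x, y, z;   /* position in 3-space */
	float u, v;      /* precalculated texture coordinate */
} vertex;

typedef struct {
	vertex corner[3];
} triangle;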

> > NURBs made this sort of trick doable at all....
> 
> 	I've never done full-on NURBS, just simple bezier curves and
> surfaces.  NURBS was explained to me briefly once, but.... 

NURBs are fun! :)  (I made my own kit - which is how I precalculate 3D
objects these days :)
[there was a serious lack of textured VRML files last I looked....  at
least ones I could download as I have no 'net support in the viewer yet]

[clip]
> > I precalc the U,V
> > for each corner then interpolate....  
> 
> 	"Shrinkwrapping", which is what my instructor called the projection
> technique I describe below, is *supposed* to do the same thing.  Your
> technique is a bit simpler - it doesn't require you to determine a
> bounding volume for the surface and project the texture onto the inside of
> that volume.  That can be quite tricky when objects get complex or have
> concavities. And for surfaces with fewer triangles, it is probably faster
> than shrinkwrapping.

Yep.  It's also pretty fast if you take the complex surface, do the
mapping once, and then just store the resulting triangles for future use
(NURBs :)

Now that I see it, this is how the NURBs code I've got does it...  I
haven't got much 3D technical background when it comes to terminology!
(I don't know the terms for a lot of things.)

> 	But with shrinkwrapping, projecting inward from a spherical
> mapping onto a polyhedron with 50 facets is no more expensive than
> projection onto one with 5000 facets.  Your technique would choke to death
> on all the interpolations with a surface that complex.  I just have to
> project each facet's normal out to the spherical mapping and see what
> pixel color is there.  And best of all, I don't have to store a [u, v]
> pair for each vertex.  Especially when you are using floats for
> coordinates, that substantially increases your storage requirements.

My code doesn't choke on interpolations....  *grin*.
I use integer interpolations that are combined with the renderer and the
overhead is equivalent to drawing 1.7 flat-colour triangles :)
[on the flip side, the memory reads to grab texture colours slow
things down due to cache misses].

AFAIK the method I use is a lot like most video hardware..... :)

No, I don't use floating-point....  at least not at the rendering.  (I
tried once and that was *slow*.)  But this is more like a buffer
in-between stage rather than the initial storage form.  The base storage
form I use works the same as you describe here :)

Incidentally, mine does -not- bog down on complex detail *grin*.  At least
not AFAIK....
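
In case "integer interpolation" is vague: the inner loop is roughly this
sort of thing, in 16.16 fixed point.  Illustrative only - my real version
is fused into the renderer:

#include <stdint.h>

#define FIX_SHIFT 16   /* 16.16 fixed point */

/* Step u,v across one span entirely in integers -- no floats and no
 * divides in the inner loop. */
static void texture_span(uint32_t *dst, int len,
                         int32_t u, int32_t v,     /* 16.16 start  */
                         int32_t du, int32_t dv,   /* 16.16 deltas */
                         const uint32_t *tex, int tex_pitch)
{
	int i;
	for (i = 0; i < len; i++) {
		/* the texture read below is the cache-missing part */
		dst[i] = tex[(v >> FIX_SHIFT) * tex_pitch + (u >> FIX_SHIFT)];
		u += du;
		v += dv;
	}
}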

> > (gurus?  There -were- some out there
> > somewhere)
> 
> 	Yes please |->.

ddt@crack.com used to follow this echo amongst other people...  I wonder
where he went?

[clip]
> > > 	There are easier/faster methods, like creating a 2D Cartesian
> > > mapping [u', v'] over the surface patch and then transforming [u, v] ->
> > > [u', v'], but that loses perspective correction IIRC.  I don't doubt that
> > > there are a billion other ways to texture map.  I'm just going by what I
> > > know, and with what I know you don't map [u, v] directly to the surface,
> > > and as such you don't need per-triangle [u, v] offsets.  That's done in
> > > the projection step.
> > 
> > There's other ways of fixing up perspective-correction too...  I'll take a
> > peek at how the HW people do it...  Maybe basing the lib on a hw-reference
> > WOULD be the best way?
> 
> 	I don't want to base the lib on ANY one single way!  If LibGGI3D
> cannot accommodate a particular technique, or if it has to haul around
> metadata ([u, v] and friends) that won't be used in a particular case,
> that means that the LibGGI3D design is getting into implementation issues. 
> I emphatically do NOT want to bind LibGGI3D's design to any particular
> rendering or shading implementations.  That way lies OpenGL.

I would love to -start- from a hardware perspective.  And there's nothing
specifically wrong with OpenGL (that I can think of) - only with
implementations...  [though if the rendering/shading/... section were a
little more open to additions then infinite planes/etc wouldn't be a
problem]

> 	I only want these base concepts in LibGGI3D: 3space, triangles,
> triangle sets with metadata, and shaders with metadata.  That is IT. 
> Maybe voxels and 3d lines if people want.  But no implementation
> specifics!  LibGGI3D should be simple and flexible enough to use with
> *any* shading or rendering techniques.  If it isn't, the solution is to
> make its design more flexible, not hard-code any particular implementation
> into the API.

hmmmm...  I'll see what I can write up this weekend...
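
Meanwhile, to make sure we mean the same thing by "triangles, triangle
sets with metadata, and shaders with metadata", here's a bare skeleton -
all names invented for discussion, not a proposed API:

/* The base concepts only, as plain shapes of data. */
typedef struct { float x, y, z; } point3;   /* 3space */

typedef struct {
	point3 corner[3];
	void  *meta;        /* per-triangle metadata ([u,v] pairs, ...) */
} tri3;

typedef struct {
	tri3 *tris;
	int   count;
	void *meta;         /* per-set metadata (texture, material, ...) */
} triset3;

/* A shader turns triangles-plus-metadata into shades; how it does so
 * stays entirely implementation-specific. */
typedef int (*shader3)(void *visual, const triset3 *set, void *shader_meta);

No implementation specifics anywhere - just the shapes of things.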

incidentally:
	only "float" support means that 64bit hw suffers; fp probs
	only "double" support means that 32bit hw suffers; fp probs
	only "integer" support means that accuracy at some point suffers
	- OpenGL supports all 3 access methods so drivers/libs can decide
	  what conversions should be done...

fp probs:  most HW I know of uses integers for 3D coordinates rather
than floating-point.  There's a conversion that would have to happen
somewhere...  but IMHO that belongs in -some- component of the 3D engine
anyway.  (Why libGGI3D, though?)
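
Concretely, the OpenGL-style answer to the list above is one entry point
per coordinate type (by analogy with glVertex3i/3f/3d), so the lib
decides where the conversion lives.  Function names here are made up,
not proposed API:

#include <stdint.h>
#include <stdlib.h>

/* Hypothetical integer (16.16) entry point: */
int ggi3dDrawTriSeti(void *vis, const int32_t *coords, int ntris);

/* The float variant could convert once and forward, so integer-only
 * hardware pays the fp->int cost in exactly one place: */
int ggi3dDrawTriSetf(void *vis, const float *coords, int ntris)
{
	int i, rc, n = ntris * 9;              /* 3 corners x (x,y,z) */
	int32_t *tmp = malloc(n * sizeof *tmp);
	if (!tmp)
		return -1;
	for (i = 0; i < n; i++)
		tmp[i] = (int32_t)(coords[i] * 65536.0f);   /* -> 16.16 */
	rc = ggi3dDrawTriSeti(vis, tmp, ntris);
	free(tmp);
	return rc;
}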

G'day, eh? :)
	- Teunis
