From: Jon M. Taylor <taylorj@ecs.csus.edu>
To: ggi-develop@eskimo.com
Date: Tue, 18 Aug 1998 22:02:00 -0700 (PDT)
Subject: Re: LibGGI3D RFC
On Wed, 19 Aug 1998, Scott McNab wrote:
> > OK, here's my first stab at nailing down my LibGGI3D system
> > proposal. Feedback is appreciated.
>
> A few good ideas; however, I think there are a number of serious limitations
> in this system which will inhibit its usability. I haven't got time at the
> moment to comment properly, but I'll try to mention a few off the top of my
> head to begin with:
>
> [stuff-cut]
>
> > II. Overall design
> >
> > So the keyword here is "simple". If OpenGL is too complex, then
> > let's look at what parts of OpenGL we *don't* need:
> >
> > * 3D world modeling. We don't need it, and it can be implemented on top
> > of LibGGI3D if needed. Our job is to do 3D drawing, not model worlds.
>
> Granted.
>
> > * Complex object representational schemes (polyhedra, surfaces,
> > constructive solid geometry). Again, this is a job for a higher-level
> > API. All of that stuff is tessellated down before rendering anyway, so
> > LibGGI3D can be designed with the assumption that such tessellation has
> > already been done.
>
> Sounds fair.
>
> > * Lighting, shading, texture mapping, or any other such algorithms. All
> > of these are methods for determining the color of a pixel based on certain
> > data. If we give up world-building (see above), we no longer have the
> > necessary information to shade pixels properly. We need to provide a
> > general way to shade pixels, but not make any assumptions about how that
> > shading should be done. That, again, is a job for higher API levels.
> >
> >
> > So, what are we left with? Essentially, we are left with two
> > basic concepts: 3D drawing and 3D shading. That is all we need in our
> > API. But of course there are features we will need to have in order to
> > implement these primitives cleanly and with maximum flexibility. So now
> > we get to the specifics of the LibGGI3D API design.
> >
> > The first feature is the notion of a camera. This is just two
> > sets of 3D coordinates which specify a viewpoint and a direction. This is
> > necessary to map the 3D coordinate space onto the 2D coordinate space of
> > the display.
>
> Umm, hang on... the notion of a camera is directly related to the concept
> of a model space, which we have decided we are not going to deal with.
Unless you want to force the camera to (0,0,-1) or something,
you'll need to be able to change it. I guess we could force a
standardized coordinate system and ditch the camera, but that might cause
complications. I dunno. If it can work, it would be a useful
simplification.
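
For illustration, a minimal sketch of that simplification: assume the camera
is fixed at the origin looking down +Z, so all that is left of the "camera"
is a perspective divide. The focal length and screen-centering parameters
are made up for this example and are not part of any proposed API.

/* Hypothetical sketch: project a 3D point to 2D screen coordinates,
 * assuming the camera sits at the origin looking down +Z.  The focal
 * length and screen centering are illustrative parameters only. */
#include <stdio.h>

typedef struct { double x, y, z; } vec3;

static void project(vec3 p, double focal,
                    int center_x, int center_y,
                    double *sx, double *sy)
{
    /* Simple perspective divide: points farther away (larger z)
     * land closer to the screen center. */
    *sx = center_x + focal * p.x / p.z;
    *sy = center_y - focal * p.y / p.z;
}

int main(void)
{
    vec3 p = { 1.0, 2.0, 10.0 };
    double sx, sy;

    project(p, 256.0, 320, 240, &sx, &sy);
    printf("screen: (%.1f, %.1f)\n", sx, sy);
    return 0;
}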
> The triangle drawing code doesn't care where the triangles are in space,
> only what the screen coordinates and Z values are, so it can fill the screen
> scanlines whilst applying perspective correction on textures etc. (and
> Z-buffering).
Good point. You still need a Z scaling factor for scaling and
perspective correction, though.
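
As a rough illustration of why a Z (or 1/Z) value is still needed even after
projection: perspective-correct texture interpolation works by interpolating
u/z, v/z and 1/z linearly across the scanline and recovering u and v per
pixel. This is a generic sketch, not LibGGI3D code.

/* Sketch of perspective-correct interpolation along one scanline
 * between two projected vertices.  u/z, v/z and 1/z interpolate
 * linearly in screen space; u and v themselves do not. */
#include <stdio.h>

static void scanline(double u0, double v0, double z0,
                     double u1, double v1, double z1, int steps)
{
    for (int i = 0; i <= steps; i++) {
        double t    = (double)i / steps;
        double inv0 = 1.0 / z0, inv1 = 1.0 / z1;
        double invz = inv0 + t * (inv1 - inv0);                /* 1/z */
        double uoz  = u0 * inv0 + t * (u1 * inv1 - u0 * inv0); /* u/z */
        double voz  = v0 * inv0 + t * (v1 * inv1 - v0 * inv0); /* v/z */

        printf("u=%.3f v=%.3f\n", uoz / invz, voz / invz);
    }
}

int main(void)
{
    scanline(0.0, 0.0, 2.0, 1.0, 1.0, 10.0, 8);
    return 0;
}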
> > Next is the notion of triangle drawing. LibGGI3D will have only
> > one drawing function, DrawTriangle(). The triangle is the fundamental
> > object of LibGGI3D. All other 3D shapes can be tessellated into triangles,
> > which are the prototypical polygon.
>
> A DrawTriangleStrip() primitive is almost as essential as a DrawTriangle()
> primitive because it has the potential to cut vertex bandwidth requirements
> to roughly a third. It is supported by most new 3D chipsets and can also
> lead to a more efficient software rendering implementation.
DrawTriangleSet is essentially that. Triangles give you polygons
and triangle sets (of which triangle strips are a subtype) give you shapes
and surfaces.
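
To make that concrete, here is a rough sketch of what a triangle-set call
with a pluggable shader might look like. Every name in it (ggi3d_vertex,
ggi3dDrawTriangleSet, and so on) is invented for illustration only; this is
not the actual proposed API, and the draw function is just a stub.

/* Hypothetical sketch of a triangle-set interface with a pluggable
 * shader callback; none of these names are real LibGGI3D symbols. */
#include <stdio.h>
#include <stddef.h>

typedef struct {
    double x, y, z;        /* position */
    double r, g, b;        /* per-vertex shading data */
} ggi3d_vertex;

typedef struct {
    ggi3d_vertex v[3];
} ggi3d_triangle;

/* The shader decides how a triangle's pixels get their color; the
 * library makes no assumptions about lighting or texturing. */
typedef void (*ggi3d_shader_fn)(const ggi3d_triangle *tri, void *user);

/* Stub: a real implementation would rasterize each triangle and call
 * the shader per pixel or per span; here it is invoked once per triangle. */
static void ggi3dDrawTriangleSet(const ggi3d_triangle *tris, size_t n,
                                 ggi3d_shader_fn shader, void *user)
{
    for (size_t i = 0; i < n; i++)
        shader(&tris[i], user);
}

static void flat_shader(const ggi3d_triangle *tri, void *user)
{
    (void)user;
    printf("shading triangle with first-vertex color %.1f %.1f %.1f\n",
           tri->v[0].r, tri->v[0].g, tri->v[0].b);
}

int main(void)
{
    ggi3d_triangle quad[2] = {
        {{{0,0,1, 1,0,0}, {1,0,1, 1,0,0}, {1,1,1, 1,0,0}}},
        {{{0,0,1, 0,1,0}, {1,1,1, 0,1,0}, {0,1,1, 0,1,0}}},
    };

    ggi3dDrawTriangleSet(quad, 2, flat_shader, NULL);
    return 0;
}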
> > So, all LibGGI3D does is draw shaded triangles in a particular
> > coordinate system. End of story. With these simple tools, almost any
> > type of 3D hardware/software acceleration combo can be used and used
> > fairly well. Many common 3D video cards are based on triangles, and for
> > those that are not (infinite planes, polygons) we can still use triangles
> > or collections of triangles to represent polygons or polyhedra.
>
> There is a lot more to 3D than its coordinate system. If you are going to
> support coordinate systems directly then you will also need to support
> things such as view volume culling etc.
I realize this. A simple way to handle this is a pyramidal view
volume whose base is the screen and whose apex is the vanishing point. We
could also do a truncated pyramid, clipped off at the furthest Z
coordinate. I've done all this before.
> Not supporting this properly makes
> the API useless.
I thought it was a given. And it wouldn't make the API useless,
just slower, as the 2D clipping would catch all triangles rendered offscreen.
But 3D clipping is the way to go.
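
A conservative sketch of that truncated-pyramid test, assuming a symmetric
view volume with its apex at the eye (origin), looking down +Z. It only
trivially rejects triangles whose vertices all fail the same plane test;
everything else is passed on and left to later clipping.

/* Sketch: conservative view-volume culling against a truncated pyramid
 * (apex at the origin, looking down +Z).  A triangle is rejected only
 * if all three vertices lie outside the same plane; everything else is
 * passed on (and would still be caught by 2D clipping later). */
#include <stdio.h>

typedef struct { double x, y, z; } vec3;

static int cull_triangle(const vec3 v[3],
                         double z_near, double z_far,
                         double slope_x, double slope_y)
{
    int out_near = 0, out_far = 0, out_left = 0,
        out_right = 0, out_bot = 0, out_top = 0;

    for (int i = 0; i < 3; i++) {
        if (v[i].z < z_near)              out_near++;
        if (v[i].z > z_far)               out_far++;
        if (v[i].x < -slope_x * v[i].z)   out_left++;
        if (v[i].x >  slope_x * v[i].z)   out_right++;
        if (v[i].y < -slope_y * v[i].z)   out_bot++;
        if (v[i].y >  slope_y * v[i].z)   out_top++;
    }
    /* Reject only if every vertex fails the same plane test. */
    return out_near == 3 || out_far == 3 || out_left == 3 ||
           out_right == 3 || out_bot == 3 || out_top == 3;
}

int main(void)
{
    vec3 behind[3] = { {0,0,-1}, {1,0,-2}, {0,1,-3} };
    vec3 inside[3] = { {0,0, 5}, {1,0, 6}, {0,1, 7} };

    printf("behind culled: %d\n", cull_triangle(behind, 1, 100, 1, 1));
    printf("inside culled: %d\n", cull_triangle(inside, 1, 100, 1, 1));
    return 0;
}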
> > IV. Other stuff (FAQs)
> >
> > Q: What about z-buffering? You said you were going to make that part of
> > the API.
> >
> > A: I changed my mind. Once you start worrying about buffers, their
> > dimensions, their layout, their coordinate system, and so on, you open up a
> > whole can of worms. If people want these features, they can either write
> > a DirectBuffer equivalent or draw to a secondary depth buffer. LibGGI3D
> > is for drawing only. Extra stuff should have its own API.
>
> Z-buffering is an integral part of 3D rendering.
The notion of depth is an integral part of 3D rendering.
> If you don't support this
> at the rendering primitive layer, then where else are you going to support
> it?
Rasterization is done at render time, or applied to the triangle
vertices before they are sent to the hardware. If people want a z-buffer
which the drawing functions render into, they can use a z-buffer rendering
target which will draw into an array. This could then be dumped to a
hardware depth buffer, rendered to a 2D frame buffer, or what have you.
The point is not that I don't want people to have customized
buffers available. That is fine with me, and I might even write the code
for it! But LibGGI3D is not about that; it is about drawing shaded
triangles in 3D space.
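
For illustration, here is roughly what such a z-buffer drawing target could
do internally: keep a per-pixel depth array and accept or reject each pixel
write against it. The names are hypothetical, not real LibGGI/LibGGI3D
symbols.

/* Sketch of a software z-buffer target: each pixel write is accepted
 * or rejected by comparing against a per-pixel depth array. */
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    int       width, height;
    double   *depth;   /* one depth value per pixel               */
    unsigned *color;   /* the 2D frame buffer being rendered into */
} zbuf_target;

static zbuf_target *zbuf_create(int w, int h)
{
    zbuf_target *t = malloc(sizeof *t);
    t->width  = w;
    t->height = h;
    t->depth  = malloc(sizeof *t->depth * w * h);
    t->color  = calloc(w * h, sizeof *t->color);
    for (int i = 0; i < w * h; i++)
        t->depth[i] = 1e30;   /* "infinitely" far away */
    return t;
}

/* Accept the pixel only if it is nearer than what is already there. */
static void zbuf_plot(zbuf_target *t, int x, int y, double z, unsigned c)
{
    int i = y * t->width + x;
    if (z < t->depth[i]) {
        t->depth[i] = z;
        t->color[i] = c;
    }
}

int main(void)
{
    zbuf_target *t = zbuf_create(64, 64);

    zbuf_plot(t, 10, 10, 5.0, 0xff0000);  /* accepted          */
    zbuf_plot(t, 10, 10, 9.0, 0x00ff00);  /* rejected: behind  */
    printf("pixel (10,10) = 0x%06x\n", t->color[10 * 64 + 10]);

    free(t->depth); free(t->color); free(t);
    return 0;
}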
> The whole point is that pixels are accepted or rejected individually.
The whole point is that pixels do not exist outside of an indexed
buffer system. If people want such a system, they can have the triangle
drawing routines render into such buffers. But that is just another
LibGGI3D drawing target. Not everyone needs or wants a discrete pixel
buffer.
> This means the higher-level stuff doesn't need to worry about calculating
> how to render two polygons which may be intersecting in any number of
> ways. Drawing intersecting 3D objects is almost impossible without
> Z-buffering, or at least extremely computationally intensive and
> impractical.
See below. LibGGI3D doesn't try to do that. That isn't its
purpose. I don't want to bloat it with those concerns.
> > A: Nope. LibGGI3D does not concern itself with hidden surface removal.
> > None of the triangles are "aware" that the others exist. Remember,
> > there's no 3D world here. If you want a 3D world, write LibGGI3Dworld or
> > use OpenGL.
>
> Argh! It HAS to concern itself with hidden surface removal. Here you
> have said you don't want a 3D world, yet you want to be able to draw
> triangles from given world coordinates looking in a specified direction.
> That's a pretty good description of a world modelling system if you ask me.
No, it is a coordinate transform. A "world" has sets of objects,
lights, an environment, and so on.
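
To illustrate the point that a camera is just a coordinate transform: given a
viewpoint and a view direction, a world-space point can be expressed in
camera space with a handful of dot and cross products. A world "up" of
(0,1,0) is assumed here purely for the example.

/* Sketch: a "camera" reduced to a pure coordinate transform.  Given a
 * viewpoint and a view direction, express a world-space point in
 * camera space (x right, y up, z along the view direction). */
#include <stdio.h>
#include <math.h>

typedef struct { double x, y, z; } vec3;

static vec3 sub(vec3 a, vec3 b) { return (vec3){a.x-b.x, a.y-b.y, a.z-b.z}; }
static double dot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static vec3 cross(vec3 a, vec3 b)
{
    return (vec3){ a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
static vec3 norm(vec3 a)
{
    double l = sqrt(dot(a, a));
    return (vec3){ a.x/l, a.y/l, a.z/l };
}

static vec3 to_camera(vec3 p, vec3 eye, vec3 dir)
{
    vec3 fwd   = norm(dir);
    vec3 right = norm(cross((vec3){0, 1, 0}, fwd)); /* assumes non-vertical dir */
    vec3 up    = cross(fwd, right);
    vec3 d     = sub(p, eye);

    return (vec3){ dot(d, right), dot(d, up), dot(d, fwd) };
}

int main(void)
{
    vec3 p = to_camera((vec3){5, 0, 5}, (vec3){0, 0, 0}, (vec3){0, 0, 1});
    printf("camera space: %.1f %.1f %.1f\n", p.x, p.y, p.z);
    return 0;
}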
> > That's all for now. Take a look and let me know what you think.
>
> OK enough complaining, this is what I think :)
>
> 1. LibGGI3D should be a 3D primitive rendering system, just like
> LibGGI2D is a 2D primitive rendering system (well as far as I know).
That's the idea.
> Therefore it should provide support for things which are essential for
> rendering 3D primitives which are:
>
> - drawing triangles and triangle strips in 2D SCREEN SPACE with
> additional parameters for Z value (or 1/Z) and shading/texture
> coordinates etc.
I have to have the 3D coords to render into a z-buffer.
> - support for 3D specific things such as Z-buffer and possibly
> alpha buffer.
Which is done by drawing to a buffer target.
> This is similar to LibGGI2D, which deals with (well, should if it doesn't)
> things such as Points, Lines, BitBlt, etc., which are 2D-specific.
>
> 2. LibGGI3D should NOT concern itself with object->world->window space
> coordinate conversions. If hardware becomes available which can
> accelerate this through hardware matrix multiplication of vertices or
> something, then this could be incorporated into a separate library.
It undoubtedly will be.
> This seems to be the biggest point of confusion regarding libGGI3D
> at the moment, and I think it's important to make a clear distinction
> between 3D primitive rendering and 3D object management/transformations.
>
> A good way to get a better understanding of these things is by downloading
> the 3DFX Glide programmers documentation.
I have done so, and even printed it and had it bound at Kinko's.
I have used Glide before - I wrote the LibGGI Glide display target. Not
too complex and no 3D, but... anyway, I will probably be using Glide as my
first LibGGI3D display target, so I will get to know it a lot better.
> I'm not saying Glide is the only
> way by any means but their library is well designed in many areas.
Their "library" is a device driver. It only has to take into
account a fixed hardware platform and this makes its design a lot simpler.
What if I want to have DrawTriangleSet() render to PowerVR infinite
planes, hm?
> The
> conversion between 3D world coordinates and the 2D screen coordinates of
> rendered polygons is deliberately kept out of the library. To include this
> in the library makes it considerably more complex, because suddenly you
> have to include stuff like view volumes, clipping, etc. This layer is
> what belongs in Mesa/libGGI3DWorld/etc.
LibGGI2D does clipping. Why shouldn't LibGGI3D?
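
For completeness, a sketch of the one piece of 3D clipping that 2D clipping
genuinely cannot replace: clipping a triangle against the near plane, so that
vertices at or behind the eye never reach the perspective divide. This is
generic Sutherland-Hodgman-style clipping against z = z_near, not LibGGI3D
code.

/* Sketch: clip a triangle against the near plane z = z_near, producing
 * 0, 3 or 4 output vertices (0, 1 or 2 triangles after re-fanning). */
#include <stdio.h>

typedef struct { double x, y, z; } vec3;

static vec3 intersect(vec3 a, vec3 b, double z_near)
{
    double t = (z_near - a.z) / (b.z - a.z);
    return (vec3){ a.x + t * (b.x - a.x),
                   a.y + t * (b.y - a.y),
                   z_near };
}

/* Returns the number of vertices written to out[] (at most 4). */
static int clip_near(const vec3 in[3], double z_near, vec3 out[4])
{
    int n = 0;
    for (int i = 0; i < 3; i++) {
        vec3 cur = in[i], nxt = in[(i + 1) % 3];
        int cur_in = cur.z >= z_near, nxt_in = nxt.z >= z_near;

        if (cur_in)
            out[n++] = cur;
        if (cur_in != nxt_in)                 /* edge crosses the plane */
            out[n++] = intersect(cur, nxt, z_near);
    }
    return n;
}

int main(void)
{
    vec3 tri[3] = { {0, 0, -1}, {2, 0, 3}, {0, 2, 3} };
    vec3 out[4];
    int n = clip_near(tri, 1.0, out);

    for (int i = 0; i < n; i++)
        printf("v%d: %.2f %.2f %.2f\n", i, out[i].x, out[i].y, out[i].z);
    return 0;
}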
> This system keeps the library clean and simple and makes it easy to
> provide hardware acceleration support for a range of different chipsets.
Huh? Glide supports 3Dfx hardware only.
> It would also map quite nicely to the Mesa device rendering subsystem
> if people wanted a complete 3D object management/transformation system
> yet still allow for specific cases to use the library directly.
>
> Anyway, those are just my first thoughts on the matter. I'm open to comments/
> suggestions if you think my head is screwed on backwards ;)
Your points are mostly good, but I have to maintain
representational independence in order to be able to support a lot of
different types of hardware and software schemes. If I stay away from
buffers, I can use them when needed and still be able to do direct
rendering in software or hardware when needed.
Jon
---
'Cloning and the reprogramming of DNA is the first serious step in
becoming one with God.'
- Scientist G. Richard Seed