Index: [thread] [date] [subject] [author]
  From: Jon M. Taylor <taylorj@ecs.csus.edu>
  To  : James A Simmons <jsimmons@acsu.buffalo.edu>
  Date: Mon, 17 Aug 1998 22:51:40 -0700 (PDT)

Re: 3d

On Mon, 17 Aug 1998, James A Simmons wrote:

> On Mon, 17 Aug 1998, teunis wrote:
> 
> > On Mon, 17 Aug 1998, James A Simmons wrote:
> > 
> > Yes a simple rendering stack can be done in (thinking) <8K, but that
> > doesn't mean that yer typical modern accelerator (ie: Permedia) can be
> > done so easily.  (and you are stuck with just simple rendered triangles
> > with maybe a Z-buffer if you're feeling happy :)
> 
> Yes. That is what I am suggesting.

	Yes.  That is also what I am suggesting.  See my z-buffer/
polygon/pluggable shader system elsewhere on this thread.  It can be done
and done cleanly.

> > Though here's something:
> > 	How about a microcode IOCTL interface.  Just a standard way
> > 	of forwarding microcode into/out-of a graphics card.
> > 	- This is for those programmable cards (ie: 3DFX,Permedia,...)
> > 	! Oh - and some kind of back-hooks (RT-Linux?) to hook responses
> > 	  from videocard into a kernel-service. Simple use (many
> > 	  videocards) is IRQ handling on video-refresh.  Complex could
> > 	  include DMA Accel-handling and hooks from 3D rendering engines :)
> > 
> > What do y'all think?

	It is a logical extension of the LibGGI dynamic driver-library
idea.  You can just get some of the bridge code a lot closer to the
hardware than before. 
 
> Bingo. This is what I'm talking about. 

	??? I thought you were talking about LibGGI3D.  This would be a way to
run the LibGGI3D display target code right on the video card.

> Here is my idea.
>
> Idea I.
> 
>    The ability to allocate frames and tell what type of frames they are.
>   Basically saying hey, give me three frames: one display frame, one
>   z-buffer, and one double buffer. Most hardware supports this. Let's
>   not overdo it on the types of frames.

	Why not let people define arbitrary frame types and plug in
handlers?

> Idea II.
>    As for primitives: support arbitrary lines, and support span lines
>   (aka horizontal lines). Do NOT support things like polygons or
>   B-spline curves.

	Curves, I agree.  They can always be rendered into polygons.  But
I'd balk at using anything more primitive than a polygon (triangles, etc.)
because the polygon is the prototypical 2D object and a triangle isn't. 
Polygons can be tessellated into triangles quickly, and if your hardware
doesn't work with triangles it is a lot easier to turn polygons into
whatever the hardware needs than it would be to try to put all those
triangles back together into polygons.

>   This would bloat the kernel, but a library could break a polygon
>   down into span lines, which are accelerated in the kernel.

	Another reason to base the system around polygons instead of
triangles. 

>   Do support
>   boxes and triangles, but that's it. Triangles are nice because a well
>   written library could make curved shapes from triangle strips. 

	The whole job of tessellating any more abstract representational
schema (polyhedra, surfaces, constructive solid geometry, etc.) into
polygons can be done by the code that uses LibGGI3D.  But the last step of
tessellating polygons into triangles is not always appropriate for the
hardware, and even when it is appropriate it is always the same process,
so it doesn't hurt to do it in LibGGI3D's display target code rather than
in the code using LibGGI3D.

>   B-splines
>   and things like that can be broken down by the library into basic
>   lines. With these shapes, just support facets.

	Facets.  Right.  Facets == polygons. 

>   Not whole objects like a 3D box
>   at one time.  

	Right.  Polyhedra are collections of polygons, so LibGGI3D doesn't
need to know about them. 

> Idea III.
>    Support an alpha component in color. 

	Depth cueing.  DirectDepthBuffer will give you that.  You could
draw to a depth buffer and have it render the depth info into the alpha
component, or you could do a software alpha channel.

>   Support Gouraud shading. Do not
>   support texture mapping; that is a real mess out there. As for
>   lighting effects, stay away from those as well. 

	All of this and more can be handled with pluggable shaders.  Just
have the high-level code pass a pointer to a shader function and pass that
function a pointer to a data structure containing all the info that the
shader needs.  This could be as simple as one RGB color value for a flat
shader, or it could be a light list, bump map, set of vertex normals,
texture bitmap, or parameters to a procedural texture function.

>   Although I have an
>   interesting idea to handle this: it could be simulated by a color
>   function table, sort of like how the 3Dfx cards do it. They can't do
>   lighting but have a color blend function table. 

	Extend this idea to arbitrary shading algorithms. 

>   Also, looking at the
>   ATI Mach64 Rage 3D card, it has a similar function table. I will have
>   to work on this later. 
> 
> Thank you. I hope this clarifies everything.
> 
> PS. How is the Linux Makefile system coming? Please, we need to get this
> ready in the next few days. I'm home all day tomorrow, so I want to write 
> the Makefile system. We need to get our drivers into the kernel. This is
> number one on the to-do list. Once in the kernel we can continue
> improving the drivers.

	2.2 is in a code freeze.  We may not make it.

Jon
 

---
'Cloning and the reprogramming of DNA is the first serious step in 
becoming one with God.'
	- Scientist G. Richard Seed
