  From: Aaron Gaudio <icy_manipulator@mindless.com>
  To  : ggi-develop@eskimo.com
  Date: Mon, 1 Feb 1999 12:58:36 -0500 (EST)

Re: libggi3d non-clarity

And lo, the chronicles report that David Joffe spake thusly unto the masses:
> 
> >        No.  There will be certain modules which will be tasked with 
> >managing all those different bump mappers intelligently, and your app 
> >should not have to worry about that stuff.  Ideally, all apps which used 
> >LibGGI3D components directly (instead of going through an OpenGL wrapper) 
> >would render to a high-level tesselator and all the lower-level details 
> >(like bump mapping) would be handled completely behind the scenes. 
> 
> Wouldn't this imply that you would have to have a module, or set of modules,
> that then "knows about" most features available in hardware? That module would
> have to know about bump-mapping, and many other things -- this seems contrary
> to the idea of having a library that does not need to know about all these
> things. These "arbitration" modules are going to grow and grow, as they need to
> know about more features .. until eventually you'll have something that "knows
> about" as much as some of the major 3d api's -- openGL, Direct3D etc. You can
> only "intelligently" interface with a feature if you actually know about that
> feature and how it works. It becomes very big, but perhaps this is the way
> things "have to be". Traditionally, your "easy-to-use" slower, high-level API's
> are just built on top of the traditional-style low-level API's like OpenGL. Is
> this not good enough?

IMO, OpenGL is not a very low-level API. Glide is a low-level API. But
anyway, the problem that GGI3D would *like* to solve seems to be the
static nature of these APIs. From what I understand of it, GGI3D tries to
relieve this using modules and some componentization. Rather than having to
provide an entirely new implementation of OpenGL or Direct3D, a hardware
manufacturer need only provide new accelerated modules to use at the lowest
level possible. When new 3D features are introduced (like infinite planes),
some higher-level modules will need to be introduced. I think the
standardization of GGI3D should be more or less a standard set of modules,
perhaps delineated by namespace rules. This is where the standard would be,
and yes, it would be somewhat constricting. But rather than providing a new
implementation with one's enhancements, as you must do for OpenGL (until
the enhancements become standard), you need only provide the modules which
extend GGI3D. Here's the jewel, though: you can alter the current rendering
pipeline so that it takes advantage of your new, non-standard module
transparently to the client software.

> 
> I personally don't see any way around the whole tendency-to-bloat thing.

Neither do I. But I don't think GGI3D should be seen as an answer to the
tendency-to-bloat thing. In fact, I don't think an expanding 3D feature set
is such a bad thing. What GGI3D is, in fact, is a system that allows for
graceful bloating, by letting extensions work more or less seamlessly
with the current pipeline, as I illustrated above, by adjusting the pipeline
dynamically (not necessarily on the fly, but perhaps at an initialization
step when the GGI3D runtime is loaded).

>I don't
> believe that dynamically fine-tuning/creating various graph/node configurations
> is going to be enough to embed the kind of "intelligence" required here for
> utilising new features/modules that pop up.

No, it won't. What it will (or should) make possible is for the designers
of new features/modules to intelligently integrate those modules into the
current system. An example: let's say you are operating at the scene-model
level. You describe a scene, and the scene is raytraced for output. Let's
say that raytracing is the "standard" pipeline. Now, someone writes a
module that uses radiosity to achieve better lighting. For the sake of
argument, your current GGI3D system knows nothing about radiosity. All you
would have to do is plug the new radiosity module into the current pipeline
in the proper place (which would have to be defined by the module author),
and voila, you have radiosity; you never had to recompile anything, you
didn't have to recreate the pipeline, you merely adjusted it.
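
To make that concrete, here is a hedged sketch of how the adjustment could
look at load time (the module filename and entry point are invented; the
real loading mechanism would be whatever GGI3D settles on):

    /* Hypothetical sketch of load-time pipeline adjustment. */
    #include <dlfcn.h>
    #include <stdio.h>

    typedef int (*light_fn)(void *scene);

    int main(void)
    {
        /* "ggi3d_radiosity.so" and "radiosity_light" are made-up names. */
        void *mod = dlopen("./ggi3d_radiosity.so", RTLD_NOW);
        light_fn light = mod ? (light_fn)dlsym(mod, "radiosity_light") : NULL;

        if (!light) {
            fprintf(stderr, "no radiosity module; using the standard stage\n");
            return 0;              /* ...run the raytraced lighting here... */
        }

        /* Splice the module into the lighting slot of the pipeline;
           nothing else in the pipeline is touched, nothing is recompiled. */
        return light(NULL);
    }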

The crux of this is that wherever the radiosity module fits in the pipeline,
it may have a different interface than what was normally there. The ability
to discover this interface and provide what the module needs to do its work
is *the* factor which will determine the success of GGI3D over more
traditional methods. This may incur some overhead, and that may concern
you, but I don't think there is anything in the GGI3D model that will
prevent you from tweaking the pipeline. Perhaps if you are extremely
concerned with tweaking, you will provide a very specialized module that
does most of the work of the "traditional" GGI3D pipeline, only in a highly
specialized (hardware- or situation-dependent) way. It's just that in this
case you get your hands dirty with GGI3D modules; no matter what
environment you use, in a case like this you're gonna get your hands dirty.
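
What "discover this interface" might look like, again as a purely
hypothetical sketch: each module exports a small descriptor saying what it
consumes and produces, and whoever assembles the pipeline checks that
before splicing the module in.

    /* Hypothetical sketch of module interface discovery. */
    #include <stdio.h>
    #include <string.h>

    struct module_info {
        const char *consumes;            /* e.g. "scene+normals" */
        const char *produces;            /* e.g. "lightmap" */
    };

    /* A module would export something like this alongside its code. */
    static const struct module_info radiosity_info = {
        "scene+normals", "lightmap"
    };

    /* The pipeline assembler checks compatibility before wiring it in. */
    static int fits(const struct module_info *m,
                    const char *have, const char *need)
    {
        return strcmp(m->consumes, have) == 0 &&
               strcmp(m->produces, need) == 0;
    }

    int main(void)
    {
        if (fits(&radiosity_info, "scene+normals", "lightmap"))
            puts("radiosity module accepted into the lighting slot");
        else
            puts("interface mismatch; keep the standard stage");
        return 0;
    }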

> 
> >        There are many ways to do this of course, but here's what I would 
> >do if I were writing this.  I would render directly into a generic 
> >rasterizer component.  It is quite likely that such a component would come 
> >with LibGGI3D.  This component would take as input a dataset which 
> >describes the low-level "world" you want to rasterize, as well as how you 
> >want it to be rasterized (bump mapping, phong shading, texturing etc).   
> >Then the component will look to see if you are rendering to, say, a Glide 
> >target or an fbdev target. 
> 
> This is fine. As long as you always have this sort of thing as an option, there
> should not be any real problems. But it does imply some sort of standardization
> of the interfaces of ALL those features (including new ones coming out in the
> future.) I suppose if this "standardized primitive layer" was just another
> module on top of the lower-level modules, this is still fine. But it still
> becomes "part of the libggi3d distribution", and still leads to that bloat that
> you're afraid of - only it's bloat with a different internal design.
>

I'm not afraid of bloat. If I'm writing to an API, I don't care how bloated
it is, as long as I can find out how to do what I have to do. All the other
"bloat" won't be used by me, so that's an issue of documentation. The problem
as I see it is that in the traditional 3D API model, when a new low-level
feature is added, it requires a complete rewrite of the rendering pipeline.
That's the problem I gathered from Jon: OpenGL and Direct3D both try to make
their pipelines robust enough that optimizations and alternate feature sets
are checked for in the pipeline. In other words, it is the pipeline itself
that gets bloated with all the hooks into features and accelerations, and
in the end the pipeline is limited by whatever hooks were put into it at the
beginning. GGI3D would attempt to solve this by not having hooks (which is
why I think having a fixed set of things for an arbitrator module to look
for is bad); instead, you register a new module, which represents a new
feature or acceleration, into a specific part of the pipeline, without
otherwise altering the pipeline.
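
The contrast, sketched (hypothetical again, not anyone's real code): a
hook-style stage can only ever use the features it was compiled to check
for, while a registration-style stage is a slot that runs whatever module
was registered, whether the pipeline knew about it in advance or not.

    #include <stdio.h>

    /* Hook style: bloats with every feature, and is forever limited to
       the hooks that were compiled in. */
    static void light_hooks(void *scene, int have_radiosity)
    {
        if (have_radiosity)
            puts("radiosity path (only because a hook exists for it)");
        else
            puts("default path");
    }

    /* Registration style: the stage is a slot; whatever module was
       registered at init time runs, old or brand new. */
    typedef void (*light_fn)(void *scene);
    static void default_light(void *scene) { puts("default path"); }
    static light_fn light_slot = default_light;

    int main(void)
    {
        light_hooks(NULL, 0);
        light_slot(NULL);    /* a radiosity module could have replaced this */
        return 0;
    }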

> Personally I think that with something as complex as a 3D API, you need to
> export a massive, growing, complex featureset to developers.

Yes, of course, but the question is: can we keep the pipeline from bloating
while the feature set expands? I think with enough thought we can. Not as
a criticism of you, Jon, but from a software engineering standpoint,
I don't think we have a design yet; we have ideas and some code. That's fine,
but there is a danger of letting the code you're writing now define GGI3D.
I hope that you consider this code experimental and not the "true" GGI3D, and
won't mind throwing it away or otherwise re-engineering it to fit with
the "true" GGI3D design, when that design emerges.


>Until such time
> that computers are so fast that they can render worlds that look real, in
> real-time (at least 10-15 years away), this is still going to be an issue,
> especially for those at the "cutting edge" (mostly game developers.) A world
> like the one in "Unreal" is not going to be describable/renderable with
> reasonable frame rates if done only in terms of high-level scene descriptors
> any time soon.

I think this is irrelevant to GGI3D itself because most GGI3D modules will
exist at a much lower level than scene description. Think of GGI3D as being
the glue between high-level APIs like OpenGL and driver APIs like Glide. 

>Maybe, two years from now, it will be feasible - but by then,
> the cutting-edge games will have so much more in them, and they'll still be
> wanting to get the most out of the hardware.

This will always be the case; it's the classic predator-prey model (I really
harp on that, don't I, Jon? ;-) applied to computer science: computer resource
needs will grow proportionally to the availability of computer resources. For
instance, no matter how much memory you have, you need twice as much =)

> 
> >responsibility for knowing about all the nasty little particulars.  If you 
> >render to a higher level of abstraction, you may lose a little speed or 
> >whatever but you will save so much hassles that it will almost always be 
> >worth it.
> 
> I work for a company that develops Virtual Reality applications, things like
> training simulations of industrial environments, military simulations etc. We
> use PC's (typically Pentium II 400's with 128 MB ram, plus hardware accels etc)
> and Direct3D. In almost every case, rendering to higher levels of abstraction
> resulted in *significantly* lost speed -- in each of these cases, we *had* to
> code pretty low-level, and do crap-loads of tweaking, clever techniques here
> and there etc just to get reasonable frame-rates. (The difference we're talking
> about here is about between 2 fps and 30 fps. Trying to code higher-level was
> not an option if we wanted to produce usable programs.) I'm sure if you talk to
> the guys who develop all the "real" games, the cutting-edge games, like Unreal,
> Quake III etc, you'll find they'll tell you the same thing. Trying to save
> yourself the "hassles" of knowing about all the nasty little particulars
> unfortunately results in unusably slow programs, and that is just the way
> things are.

But there is no reason that you need to know *all* the nasty particulars, or
else everything would be programmed at the hardware level: manipulating
registers, texture memory, etc. As you said, you worked with Direct3D...well,
Direct3D is a higher-level API, and it was introduced for the exact same
reason as OpenGL. It could (conceptually) be implemented using GGI3D modules,
so in that regard GGI3D exists at a lower level than Direct3D (although
D3D would provide API access to different levels of the pipeline because of
its design).

I just don't see anything in GGI3D which prevents you from tweaking. It's just
that tweaking GGI3D will be (at most) just as complicated, error-prone,
and system-dependent as tweaking D3D or OpenGL.

> 
> 
> >> How would an app developer requiring fairly precise control over what 
> >> primitives are being issued to the hardware (but still in a hardware-independent 
> >> manner) access this functionality? Would he spend his coding life messing about 
> >> with modules? 
> > 
> >        Perhaps, if he is doing something really nonstandard. 
> 
> I think doing something "really nonstandard" is not uncommon in 3d programming.
> 

In fact, I think doing something really nonstandard is actually standard
in 3D programming ;-)

> >> In this sense I would rather have OpenGL-like function calls. 
> > 
> >        OpenGL is not going to buy you any more ease-of-programming in 
> >this department.  The problem of having to know how the lower levels work 
> >remains, just in a different form.  The nice thing about LibGGI3D is that 
> >it places fewer restrictions on the ways in which lower-level routines 
> >can be encapsulated into higher-level ones.  The result should be that 
> >the muching around in lower levels should be kept to a minimum. 
> 
> This is a nice idea, in theory. I must admit I am skeptical as to how well it
> can be pulled off.

But you admit the idea is nice, so it's something worth trying.

> 
> >        I think the opposite will happen, especially when people stop 
> >hyper-tweaking their 3D engines.  Hyper-tweaking always results in having 
> >to know more about the low-level stuff.  If you want to make your life 
> 
> Games developers who want to write cutting-edge (what a kitch word) games have
> to (and I believe should have to) know all the low-level stuff, and do lots of
> hyper-tweaking. Do you really believe that "smart" linking together of modules
> can bring about framerates that are as good as a good 3d-coder doing
> hyper-tweaking? This makes me nervous.

Nope. In fact, I don't see any "smart" linking. The "smart" part of the linking
comes from the module and API developers (and those who need to tweak things
for their programs); GGI3D just provides a convenient way to do the linking
without rewriting the entire pipeline.

-- 

¤--------------------------------------------------------------------¤
| Aaron Gaudio                   mailto:icy_manipulator@mindless.com |
|                    http://www.rit.edu/~adg1653/                    |
¤--------------------------------------------------------------------¤
|      "The fool finds ignorance all around him.                     |
|          The wise man finds ignorance within."                     |
¤--------------------------------------------------------------------¤

Use of any of my email addresses is subject to the terms found at
http://www.rit.edu/~adg1653/email.shtml. By using any of my addresses, you
agree to be bound by the terms therein.
