From: Evan Martin <txs@concentric.net>
To: ggi-develop@eskimo.com, sengelha@uiuc.edu
Date: Sat, 17 Apr 1999 10:14:56 -0700
Subject: Re: GGI usage question
Steven Engelhardt wrote:
[snip]
> To draw the current image, I used the (highly suboptimal) method of:
>   int i, j;
>
>   for (i = 0; i < graphics->x; i++) {
>     for (j = 0; j < graphics->y; j++) {
>       ggiPutPixel(graphics->vis, i, j,
>                   graphics->palette[graphics->memory[i][j]]);
>     }
>   }
>
>   ggiFlush(graphics->vis);
>
> While this method gives acceptable performance on my computer in
> asynchronous modes (such as windowed X), it is horribly slow in SVGAlib.
By calling ggiPutPixel repeatedly, you're only drawing one pixel at a
time.  You could add an intermediate step to render graphics->memory
into a buffer puttable by ggiPut*, like this:
  const ggi_pixelformat *pf = ggiGetPixelFormat(graphics->vis);
  int pixelbytesize = pf->size / 8;
  uint8 *pixelbuf = (uint8 *)malloc(graphics->x * graphics->y * pixelbytesize);
  uint8 *p = pixelbuf;
  int i, j;

  /* Fill the buffer row by row, which is the layout ggiPutBox expects. */
  for (j = 0; j < graphics->y; j++) {
    for (i = 0; i < graphics->x; i++) {
      *(ggi_pixel *)p = graphics->palette[graphics->memory[i][j]];
      p += pixelbytesize;
    }
  }

  ggiPutBox(graphics->vis, 0, 0, graphics->x, graphics->y, pixelbuf);
  free(pixelbuf);
  ggiFlush(graphics->vis);
The only reason it's somewhat complicated is that you need to handle
multiple color depths.
Misc Notes:
- Of course, you'd probably malloc pixelbuf once, at startup.
- "pixelbytesize" could be renamed "depth"; I was just trying to be
  clear.
- This code also uses an extra width*height*depth bytes of memory.
  It could be split into lines and put with ggiPutHLine().
  That would be slower, but use less memory.
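That line-at-a-time variant might look like this (an untested sketch,
assuming the same graphics struct fields and the pf/pixelbytesize
setup from above):

  /* One row's worth of buffer instead of the whole frame. */
  uint8 *linebuf = (uint8 *)malloc(graphics->x * pixelbytesize);
  int i, j;

  for (j = 0; j < graphics->y; j++) {
    uint8 *p = linebuf;
    for (i = 0; i < graphics->x; i++) {
      *(ggi_pixel *)p = graphics->palette[graphics->memory[i][j]];
      p += pixelbytesize;
    }
    ggiPutHLine(graphics->vis, 0, j, graphics->x, linebuf);
  }

  free(linebuf);
  ggiFlush(graphics->vis);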
I hope this helps.
Evan Martin
txs@concentric.net
http://e.x0r.ml.org