  From: Andreas Beck <becka@rz.uni-duesseldorf.de>
  To  : ggi-develop@eskimo.com
  Date: Wed, 18 Aug 1999 19:35:25 +0200

Re: Accels and /dev/gfx.

Hi !

> My understanding is that to support switching VT's, you have to
> preallocate all of the possible terminals for memory efficiency.

No. You have to allocate text memory for all VTs to allow VT switching
without losing VT contents.

For text mode there is no way to redraw at all: almost no application
supports it (a few paranoid ncurses apps are the exception), and requiring
every app to redraw its own text screen would be unnecessary bloat.

So o.k. - we really do need memory for text mode (that is 2 bytes per
text cell: 80x25x2 = 4000 bytes, about 4 kB, for a standard 80x25 mode).
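For illustration, a quick back-of-the-envelope sketch of what full text-mode
backing costs (MAX_NR_CONSOLES is the Linux limit on virtual consoles):

  #include <stdio.h>

  #define MAX_NR_CONSOLES 63              /* Linux limit on virtual consoles */

  int main(void)
  {
      unsigned cols = 80, rows = 25;
      unsigned per_vt = cols * rows * 2;  /* one char byte + one attribute byte */

      printf("per VT : %u bytes\n", per_vt);                    /* 4000 bytes  */
      printf("all VTs: %u bytes\n", per_vt * MAX_NR_CONSOLES);  /* ~250 kB max */
      return 0;
  }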

However:

> And, for 56 MB frame buffers, allocating this much memory isn't
> possible.

And not reasonable either. Most graphics applications can trivially be
written to be able to redraw. Unlike text apps, most graphics apps
shouldn't run in the background anyway, and if they do, they will very
probably want to save the time of drawing something no one is looking at.
They might still want to continue downloading the movie they display or
keep playing the audio channel or something, but usually the actual display
should be turned off.

> Are we really talking a factor of 10,000 times here? That to avoid
> wasting 3-7K per terminal (say 20-45K total), we won't be able to
> swap a 56 MB card?

No. The problem is whether or not to virtualize the consoles completely.

If we do, we can run into such no-fun situations. scrdrv 0.0.7 was able to
do it: a usermode "ramdaemon" was called to save away the video RAM, and
when the app got switched back, the RAM was restored. But don't ask how
long that would take on such a high-end card.

> I could understand if the concern is "10 MB/sec SCSI will take 11
> seconds to swap out and in, that's too long". So require that you
> have enough memory to swap the displayed image to ram, page in the
> new one (5 seconds, about what it takes to re-synch from X to text
> anyways), and then page the old one out when the disk queue isn't
> busy.

Note that VT switching normally happens in interrupt context. This adds a
complication, as in interrupt context there is no notion of a currently
running application or of swappable memory.
That is to say: you do not have access to swappable memory at the time
a VT switch comes in. This requires trickery like the ramdaemon approach.

> Isn't it safe to assume that if someone puts a 56 MB video card
> into a machine, that that machine isn't a tiny underpowered box?
> That it might have a high speed (20 MB/sec) SCSI, or 64-128 MB ram,
> or something else along those lines?

That does not help. There is an additional problem: The graphics hardware is
not designed to be preempted.

It is very very hard to give reasonable performance _and_ unimpeded 
VT-switching on standard PC hardware. It is easily possible with hardware
designed for it, but very hard otherwise.

This is why the common way to implement graphical consoles is to turn off
automatic VT switching _WHEN_A_GRAPHICAL_CONSOLE_IS_ACTIVE_.

This does NOT mean that you cannot VT-switch. It only means that the
application has to consent to it. And if that scheme is used, the
application can still save its buffers away if needed. And it can be smart
about it.
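The standard Linux console already offers exactly this kind of cooperation
via the VT_PROCESS ioctls. A minimal sketch (error handling omitted, the
choice of SIGUSR1/SIGUSR2 is arbitrary, and doing real work from a signal
handler is a simplification):

  #include <signal.h>
  #include <sys/ioctl.h>
  #include <linux/vt.h>

  static int tty_fd;                /* fd of our own /dev/ttyN */

  static void release_vt(int sig)   /* kernel asks us to give up the VT */
  {
      /* save or simply drop our fb contents here, then consent: */
      ioctl(tty_fd, VT_RELDISP, 1);
  }

  static void acquire_vt(int sig)   /* we got the VT back */
  {
      ioctl(tty_fd, VT_RELDISP, VT_ACKACQ);
      /* now restore the mode and redraw */
  }

  void setup_vt_switching(int fd)
  {
      struct vt_mode vtm = { 0 };

      tty_fd = fd;
      signal(SIGUSR1, release_vt);
      signal(SIGUSR2, acquire_vt);

      vtm.mode   = VT_PROCESS;      /* switch only with our consent */
      vtm.relsig = SIGUSR1;         /* sent before switching away   */
      vtm.acqsig = SIGUSR2;         /* sent after switching back    */
      ioctl(fd, VT_SETMODE, &vtm);
  }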

> Or perhaps the user is willing to wait the 10 seconds that it takes.

It takes 0.5 seconds to redraw. I am not sure it makes sense to spend 2x 10
seconds on a swap-out/swap-in compared to that.

> Or perhaps the frame buffer isn't in use on the other VT, and it
> just takes a text restore

This requires reprogramming the graphics card, which may well destroy all
framebuffer content, so you have to wait the 10 seconds anyway.

> 1. You want the user space program to be able to tell that the
> terminal is being switched, and give it the choice of what to do.

Yes. This is the only reasonable way to go. We _should_ have a forced switch,
however, for apps that do not support this properly. They will then _lose_
their fb contents, which is no issue for 99% of the apps.

> 2. You expect that the user program will decide whether to save
> the FB or redraw it later (no backing store).

Yes.

> 3. You expect the user program to be able to respond -- that
> it will not be suspended, waiting for a non-responsive NFS
> server, etc, and that the user program has no way to tell the
> system "It's ok to throw the screen away on a terminal change"
> ahead of time.

The problem is not only the screen contents, but also the card state.

A simple put-to-sleep/wake-up scheme will not work for graphics cards,
as their state usually depends not only on a register dump, but also on
the _history_ of how the registers were accessed.

> 4. History note. Microsoft windows was originally designed on a
> non-backing store model, when graphics cards were 1/2 to 1 MB,
> systems were 640K, programs were small, and bringing a program into
> memory to redraw was the best way to go. Today, we have 4-8 MB
> video, and redrawing a screen means bringing 2-4 large, multi-
> megabyte programs into memory, having each of them do all of their
> drawing again, etc.

> We went from a system where having the programs redraw was best
> to having a system where a straight memory dump/restore would be
> best. Programs didn't change.

You are giving a very good reason why we should do the opposite and use
a non-backing-store model.

Linux programs are small. I have seen no program that is as large as 56 MB.
Even if we stay at common fb sizes of 4 MB, almost all Linux programs,
except maybe Netscape (which doesn't count IMHO anyway), stay below that;
and with demand paging we only need to load a very small fraction of that
to redraw.

Moreover, VT switching happens pretty rarely, and there is always only one
program redrawing - the one you are switching to, which you will want paged
in anyway, as you are very probably about to work with it, right?

> 	1. I need the FB stored, swap it out and restore it for me. 56 MB

I once advocated that as well. The problem really is the graphics state
registers. Without application consent it is simply impossible to switch
VTs without adverse effects on the display.

> 	3. I just need the graphics mode settings restored for me.

That is a must anyway; it is the most basic requirement of virtualization.
The application should basically be able to continue as if nothing had
happened at switch-back, possibly redrawing.

The problem is the switch-from, which can leave the card in an inconsistent
state. Here is an example:

We have a nice graphics card, that we can mmap accel registers for.

Now app 1 wants to draw a rectangle:
- set REG_COLOR,color
- set REG_PARMS,x
- set REG_PARMS,y 
- set REG_PARMS,width
- set REG_PARMS,height
- set REG_COMMAND,EXECUTE_DRAW_RECTANGLE
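In C, against mmap'ed accel registers, that sequence might look like the
sketch below (the register offsets, the command value and the "collecting"
parameter register are invented for this example; real cards differ):

  #include <stdint.h>

  #define REG_COLOR    0     /* hypothetical register layout          */
  #define REG_PARMS    1     /* one register that collects parameters */
  #define REG_COMMAND  2
  #define EXECUTE_DRAW_RECTANGLE 0x01

  /* 'regs' points into the card's mmap'ed register aperture. */
  static void draw_rectangle(volatile uint32_t *regs, uint32_t color,
                             int x, int y, int width, int height)
  {
      regs[REG_COLOR]   = color;
      regs[REG_PARMS]   = x;        /* card queues up the parameters... */
      regs[REG_PARMS]   = y;
      regs[REG_PARMS]   = width;
      regs[REG_PARMS]   = height;
      regs[REG_COMMAND] = EXECUTE_DRAW_RECTANGLE;   /* ...then executes */
  }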

Now imagine we have two such apps, one drawing red,1,1,100,100 and the
other blue,200,200,10,10.

Now if we preempt the first one e.g. after the y parameter and then go into
the second one, we are screwed. The sequence to the regs will be:

COLOR red     | blue                    |
PARMS     1 1 |      200 200 10 10      | 100 100
COMMD         |                    RECT |         rect

You probably see the problem. Depending on how the card works, this will
cause major or minor havoc. The best case is a card with "stack behaviour"
internally, where nothing bad happens in this particular example (it would,
though, if the second process were scheduled in again earlier).
If the card works FIFO style, it will first draw a red rectangle of
1,1,200,200 on the screen of app 2, which is quite wrong, and later a blue
one of 10,10,100,100 on that of app 1, which is also wrong.

If we tried to save/restore the content of the "parms" "register", that
would just make matters worse.

You simply need application cooperation for VT-switching.

> A program can tell how easy it will be to recreate the display or
> not. The OS can tell, based on that ease, and the system specifics,
> whether it needs to save or not. The user program cannot be assumed
> to be able to make that determination.

It is never reasonable to save, unless recomputing the drawing is very,
very slow. Anything that shows moving images will never be quicker to
reload than to redraw intelligently. And the application has the best
information about whether and what to save.

95% of the apps can redraw quickly. The other 5% can think about what they
need to save and simply do so. Especially with LibGGI this is trivial:

  ggiCrossblit(vis,0,0,width,height,memvisual,0,0);
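Expanded a bit, a save/restore pair could look like the sketch below. I am
spelling the blit ggiCrossBlit here, and the "display-memory" target string
plus the surrounding calls are assumptions about the LibGGI API - treat it
as an illustration, not a reference (error handling omitted):

  #include <ggi/ggi.h>

  /* Save the visible contents of 'vis' into a memory visual before
     releasing the VT; blit them back on switch-back. 'width'/'height'
     are the dimensions of the current mode. Assumes ggiInit() was done. */

  ggi_visual_t save_display(ggi_visual_t vis, int width, int height)
  {
      ggi_visual_t memvis = ggiOpen("display-memory", NULL);

      ggiSetSimpleMode(memvis, width, height, 1, GT_AUTO);
      ggiCrossBlit(vis, 0, 0, width, height, memvis, 0, 0);   /* fb -> RAM */
      return memvis;
  }

  void restore_display(ggi_visual_t vis, ggi_visual_t memvis,
                       int width, int height)
  {
      ggiCrossBlit(memvis, 0, 0, width, height, vis, 0, 0);   /* RAM -> fb */
      ggiClose(memvis);
  }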

CU, Andy

-- 
= Andreas Beck                    |  Email :  <andreas.beck@ggi-project.org> =

