  From: Marcus Sundberg <mackan@stacken.kth.se>
  To  : ggi-develop@eskimo.com
  Date: Thu, 10 Sep 1998 01:31:54 +0000

Re: SYNC mode should go

mentalg@geocities.com wrote:
> 
> I've been beating my brains out over the past week or so trying to come up
> with a decent thread-friendly locking scheme that would be consistent
> with USE_THREADS and wouldn't break the signal implementation of mansync
> (or vice versa), and while I sort of worked out an API with both a
> signal-friendly intlock/sigmask locking implementation and a pthreads
> implementation, it's still a pretty ugly and inconsistent solution.
> 
> Honestly, we should really drop SYNC mode...
> 
> Why is it there?  To allow programmers who picked up odd habits in DOS to
> feel more at home?
> 
> We take a performance hit/bloat increase for it, although the shared mansync
> stuff has mitigated the bloat a lot, and the performance hit is only if you
> actually USE it...
> 
> It really makes any sort of consistent/correct locking scheme painful
> to design/implement if you use the mansync signal implementation.
> 
> The signal mansync implementation also screws with your SIGALRM handlers.
> Mansync should be handled by the application if it's really that important
> (if you _must_, it would be a good candidate for a separate utility library,
> but just provide a function to get called from the app's own signal handler
> in that case). I really dislike libraries taking over signals, especially to
> implement something I'm not going to use anyway.
> 
> When porting old DOS apps to Linux, it works much better to stick calls to a
> flush() function in appropriate places in the code, rather than take the
> (somewhat considerable on some targets) performance hits of madly flushing
> to give the illusion that the framebuffer is actually attached to the video
> HW directly like it was in DOS.  Doing the little bit of rewriting necessary
> to get the flush()es in there needs to be done anyway, as it also forces you
> to clean up other issues in the process.  It also yields better performance
> from the get-go, especially as you'll have to be emulating a lot of other
> stuff anyway.  I speak from experience here.
> 
> No matter which way you hash it, SYNC mode just encourages/requires bad
> code.  It should go.  LibGGI (and certainly some of the underlying target
> implementations) doesn't seem to really be async-signal-safe anyway, so the
> signal-based mansync is kinda dangerous...

Yes, I've always wanted to do this. Andy, could we PLEASE do this now?

Actually we should REALLY go all the way and introduce framebuffer
lock/unlock calls into libggi. I've suggested this before as it would
eliminate the problem of accessing the accel engine and framebuffer at
the same time.

And there are other reasons for this too:
DirectX depends on a lock/unlock system. If libggi doesn't have this we'd
have to use a mansync hack here, and in that case we can forget about
porting to win32 right now as no one would ever use it.
The glide API also uses a lock/unlock scheme. If libggi had the same
scheme we wouldn't have to use mansync here, we could use direct access
to the framebuffer. This would make a 3DFX card really usable as a
2D card, especially if GGI Console could bring us user-level
consoles. (Hey Teunis, what happened to that 6-consoles-on-a-cube
thingy ;-)
There are probably other APIs that use lock/unlock calls that we
might want to have libggi running on.

//Marcus
-- 
-------------------------------+------------------------------------
        Marcus Sundberg        | http://www.stacken.kth.se/~mackan/
 Royal Institute of Technology |       Phone: +46 707 295404
       Stockholm, Sweden       |   E-Mail: mackan@stacken.kth.se
