This document pertains to the NeXTSTEP operating system, which is no longer a supported product of Apple Computer. This information is provided only as a convenience to our customers who have not yet upgraded their systems, and may not apply to OPENSTEP, WebObjects, or any other product of Apple Enterprise Software. Some questions in this Q&A document may not apply to version 3.3 or to any given specific version of NeXTSTEP.
Q: Why does NeXTSTEP 3.3 show poor graphics performance when I use a PCI video card?
A: In order to work around a hardware problem involving the Intel 824x0 PCI host-bus chip, the Intel824X0 PCI driver supplied with NEXTSTEP version 3.3 disables write-posting--a performance feature--in this controller chip. Without this workaround, users of bus-mastering drivers, including PCI SCSI and Ethernet drivers, would experience intermittent system crashes with possible resulting data corruption. Note that this is a problem with the controller chip, not with NEXTSTEP; version 3.3 simply implements a workaround to this operating-system-independent hardware problem.
The tradeoff for this increased robustness is lower performance than would be possible if write-posting could be safely enabled. Given the potential for data loss which exists with this chip, NeXT strongly recommends that users of PCI bus mastering devices accept the lower performance this workaround causes. Though it is possible to achieve higher performance even for bus-mastering devices by disabling the Intel824X0 driver, this is not a recommended or supported system configuration because of the increased risk of system crashes and data loss.
If the PCI bus is only being used for video, however, and no PCI bus-mastering devices are installed, NeXT knows of no adverse impact from removing the Intel824x0 driver, and substantially improved video performance may result. Note, however, that use of PCI devices with this driver disabled is not supported even for non-bus-mastering devices and, if done, is at your own risk. In addition, if you choose to disable the Intel824x0 driver, you must ensure that, if PCI bus-mastering devices are later installed in the system, the Intel824x0 driver is re-enabled at that time; otherwise an increased risk of data corruption will result.
The PCI chipset bug occurs only in the A0 stepping of the Intel 824x0 series of chipsets. The bug was corrected in the A1 stepping, which went into production around September of '94. Unfortunately, this chip is usually soldered directly to the motherboard; therefore, you should contact your PC vendor if you require servicing.
A new driver has been released for the Intel 824x0 chipset which automatically recognizes the flawed stepping on your system and disables write-posting only if a flawed chipset is detected. See NeXTanswers document 1790_Intel_824X0_PCI_Chipset_Driver_Overview.rtf for more information.
Q: Will either of your color products (NeXTdimension or NeXTstation Color) support color lookup tables? What about NEXTSTEP for Intel Processors?
A: The NeXTdimension supports full 32-bit ``real'' color--approximately 16 million colors can be displayed at any given pixel. The hardware is configured as 8 bits of each color component: red, green, blue and alpha. The alpha component may be used to record relative transparency for any given pixel.
The NeXTstation Color supports 16-bit color--4 bits each: R, G, B and alpha. This allows 4096 different colors to be displayed at any given pixel. In addition, the WindowServer uses dithering to make the images look more realistic. Neither machine supports color-mapped color.
NEXTSTEP for Intel Processors follows the same guidelines as the black hardware does. At this time, the maximum color resolution supported on Intel hardware is 16-bit color. (Release 3.0)
Q: When using the color machines can I turn the color ``off'' (so that it will run in 2-bit mode) to increase performance?
A: No. The i860 processor on the NeXTdimension board speeds up the color operations to give performance equivalent to a monochrome cube with a MegaPixel display. (Release 3.0)
Q: If I read a TIFF file into the NeXT which contains a color palette, does the software understand the palette correctly, or does it ignore that information as in 1.0?
A: Although NEXTSTEP does not, in general, support palette-based imaging, there is limited support in Release 2.0 and later. TIFF files which contain 8-bit palettes of 24-bit color values are read in and converted to 32-bit color images; other palette-based images are not understood. (Release 3.0)
Q: My timed entry does not seem to be occurring when I use DPSPeekEvent().
A: Timed entries are executed only when getNextEvent: or DPSGetEvent() is called; they will not be "triggered" by DPSPeekEvent(). A timed entry does not interrupt the currently executing code; it is executed the next time the application is between events.
Q: How much movement triggers a mouse-moved event?
A: Any change in the screen coordinates of the cursor (even one pixel) creates a mouse-moved or mouse-dragged event. But see the documentation for DPSSetTracking(); event coalescing may be influencing your perception of how this works.
Q: How can I send a PostScript file directly to the WindowServer?
A: You should usually use NXImage. However, if that's not appropriate in your situation, use the following code snippet. Warning: Do not use PSRun(), because your application may not work with -NXHost since the file you specified may not exist on the other machine.
int SendFileToPS(const char *fileName)
{
    NXStream *st = NXMapFile(fileName, NX_READONLY);  /* map the file into memory */
    char *addr;
    int len, maxlen;

    if (st == NULL) return -1;                        /* couldn't open the file */
    NXGetMemoryBuffer(st, &addr, &len, &maxlen);
    DPSWritePostScript(DPSGetCurrentContext(), addr, len);  /* send to the server */
    NXCloseMemory(st, NX_FREEBUFFER);
    return 0;
}
Q: If I fill a circle using PostScript, then stroke it with a different color using the default linewidth of 1, not all of the pixels get covered at the edges. I thought a linewidth of 0 could cause this problem, but a linewidth of 1 shouldn't! Drawing applications such as Draw.app demonstrate the same problem: Create a circle filled with black and outlined with white, using a line thickness of 1. You'll notice stray black pixels around the edges. What's going on here? How can this problem be avoided?
A: PostScript scanning should normally cause any non-zero linewidth to eliminate the stray pixels in the above scenarios. However, PostScript also has a "stroke adjustment" feature which, when turned on, tries to create lines of uniform thickness on low-resolution output devices (such as displays). This causes the above problems to appear--a line of width 1 drawn on a path might fail to fully cover the pixels on the edges of a fill done on the same path. By default, stroke adjustment is enabled for displays and disabled for printers.
Given this, developers of drawing and other applications should look into turning off stroke adjustment when displaying graphic objects created/manipulated by users in documents. This can be accomplished with
false setstrokeadjust
The previous strokeadjust value should be restored once the graphics are drawn; this can be done via gsave/grestore or by explicitly remembering the previous value:
currentstrokeadjust          % remember the current setting on the stack
false setstrokeadjust
% ... draw the graphics ...
setstrokeadjust              % restore the remembered setting
You can read more about the setstrokeadjust and currentstrokeadjust operators in section 6.5 of the PostScript Language Reference Manual 2nd edition (the level 2 red book).
Q: While drawing in a window performing animation, I also want to be able to accept events, such as a key stroke. But the animation must continue if no event occurs. Timed entries are not a solution because the animation must be continuous and smooth and is being drawn too fast for the granularity that timed entries provide.
A: Use DPSPeekEvent() to check for pending events without blocking: if an event is waiting, fetch and handle it; otherwise, draw the next frame of the animation.
Q: There seems to be a problem with compositing. Using the CompositeLab in /NextDeveloper/Examples on a 2-bit screen, set the following parameters and use the SOVER operation:
Source gray=1 opacity=0.3 (white mostly transparent source)
Dest gray=0.8 opacity=1 (light gray opaque destination)
You will see that the result is the same color as the destination. Thus the 30% coverage from the white source is having no effect at all! Now change the source opacity to 0; this causes no change in the result. What's going on here?
A: The behavior is correct. Assume the case where the source is all white and is 33% opaque. Say the destination is 66% white and opaque. (This assures that we are using exact pixel values with no dithering.) The SOVER formula is:
result = (source color * source alpha) + (dest color * (1 - source alpha))
which for our case reduces to
1 * 1/3 + 2/3 * 2/3 = 7/9
which is rounded to 6/9 (that is, 2/3), since a 2-bit screen can represent only the gray values 0, 1/3, 2/3, and 1 at each pixel. Changing the opacity all the way down to 0 gives 0 + 2/3 * 1 = 6/9 exactly, so the resulting pixel is the same and no color change occurs.
Now using the parameters in the question
Source gray=1 opacity=0.3 (white mostly transparent source)
Dest gray=0.8 opacity=1 (light gray opaque destination)
We see that there are some source pixels with opacity of 0, and a few with opacity of 1/3. Source color is 1 in all cases. There are some dest pixels with gray of 1 and others with 2/3; opacity is 1 in both cases.
Thus every resulting pixel is computed from one of four formulas:
1 * 0 + 1 * (1 - 0) = 1
1 * 0 + 2/3 * (1 - 0) = 2/3
1 * 1/3 + 1 * (1 - 1/3) = 1
1 * 1/3 + 2/3 * (1 - 1/3) = 7/9, which rounds to 2/3
Thus the result is equal to the dest color in all cases.
If you wish to composite in a more accurate fashion, you can use 8-bit deep grayscale windows. However, this will use up a lot more memory and is probably not worth it.
Q: What do errors like the following mean?
DPS client library error while writing to connection DPS context c18c data -102
A: The error reporter is basically printing out the elements of data passed in the exception initiated by NX_RAISE(). Usually the header file that defines the exception codes tells what data items are also passed. Since this is a DPS client library error, we look through <dpsclient/dpsclient.h> for a write error exception code:
dps_err_select, dps_err_read and dps_err_write signal communication
errors in the connection. The OS return code is passed in arg1;
arg2 is unused.
In interpreting the dpsclient error message, the first data item is always the DPS context in which the error occurred. The second item corresponds to arg1. Note that any arg2 referred to in the exception code description doesn't get passed from the DPSErrorFunc into the NX_RAISE exception scheme. Luckily, there aren't any DPS errors where arg2 is essential info.
So, for the above error, the c18c is a DPS context and the -102 is a Mach return code. The Mach codes happen to be defined in <sys/message.h>. By the way, negative numbers in about the -100 to -1000 range tend to be Mach error return codes.
Q: I'm using NXImage to display a PostScript or TIFF file. When I display it on a color system, the image doesn't look right--there are large black areas.
A: Probably your image has transparency in it. The image was rendered into an NXImage and then composited onto the screen using NX_COPY. Since NX_COPY produces an exact copy of the bits from the source, transparent areas in the NXImage were copied onto the screen. On the monochrome MegaPixel display, these transparent areas expose to white, to emulate the way a sheet of paper might behave. NeXT's color devices act more like video devices, and they expose to black.
In order to avoid exposing the underlying device's representation of transparent, you should fill in the background and composite the NXImage using NX_SOVER. For example (assuming drawing is focused on the destination view, and that bounds and point describe where the image goes):
PSsetgray(NX_WHITE);                        /* paint the background white */
NXRectFill(&bounds);
[myImage composite:NX_SOVER toPoint:&point];
Q: In my application I am reading in an NXImage. A nil is never returned, even if I read in a bogus file. Is this a bug? Here is my code:
id myNXImage = [NXImage alloc];
if ([myNXImage initFromFile: "dummyName.tiff"] == nil)
{
/* this is never getting called! */
fprintf(stderr,"dummyName.tiff doesn't exist!\n");
}
A: This is not a bug. The initFromFile: method is lazy and does not catch all the errors that might happen when loading an image. Your application should be prepared to check for errors later on, either through delegation or by checking the composite: or lockFocus return values. If you wish, you can force the image to be rendered immediately:
id myNXImage = [[NXImage alloc] initFromFile: filename];
if ([myNXImage lockFocus])
[myNXImage unlockFocus];
else
fprintf(stderr,"%s doesn't exist\n", filename);
Although this behavior might seem confusing, it allows for better performance: the image isn't rendered into the cache until it is needed. Rendering a large or complex file can be slow--particularly a complex EPS file.
Note: Another good approach for determining whether an image can be successfully rendered is the NXImage delegate method imageDidNotDraw:inRect:. If you have assigned a delegate for the image and implemented this method, it gets called when compositing fails for whatever reason. See the documentation on NXImage for more information about this method. Also note that this method of delegation may be the only way to catch a drawing error for an image which is being "handed" to the AppKit--an icon on a button, for example.
There is a known bug in Release 2 where imageDidNotDraw:inRect: fails to be called when encountering an error from within the method composite:toPoint:. This bug can be avoided by using the NXImage method composite:fromRect:toPoint:. This bug has been fixed in Release 3.
Valid for 2.0, 3.0
Q: I'm writing an application which can open either EPS or TIFF images using the NXImage class. How can I determine what kind of file I've opened without hacking the file name?
A: You can use the isKindOf: method from the Object class:
id myNXImage, myImageRep;
myNXImage = [[NXImage alloc] initFromFile: fileName];
myImageRep = [myNXImage lastRepresentation];
if ([myImageRep isKindOf: [NXBitmapImageRep class]])
{
/* then I'm a TIFF file! */
}
else if ([myImageRep isKindOf: [NXEPSImageRep class]])
{
/* then I'm an EPS file! */
}
The key here is that the NXImage instance itself does not understand EPS or TIFF information per se. NXImage manages the representation classes (one NXImage may have multiple representations) which do understand EPS and TIFF information.
Of course, it is reasonable to extract this information from the fileName as well. The following code snippet can be used to do this:
char *fileType = rindex(fileName, '.');
if (!fileType)
{
/* then I'm not an appropriate file! */
}
else if (!strcmp(fileType, ".tiff"))
{
/* then I'm a TIFF file! */;
}
else if (!strcmp(fileType, ".eps"))
{
/* then I'm an EPS file! */
}
Valid for 2.0, 3.0
Q: My application is a simple paint program. The user opens a TIFF image, then scribbles into it, and finally saves the new image as a TIFF file. However, the changes made by the user aren't saved into the TIFF file--it contains the original image. Why?
A: This occurs if you open the TIFF file like this:
image = [[NXImage alloc] initFromFile:fileName];
NXImage will have two representations--the file, and the cache. NXImage will treat the cache as a transitory image, and the file as its "best representation." The cache is the off-screen window to which the user's scribbles are drawn. When asked to write out the image, NXImage writes out its best representation of the image--which is the actual TIFF file residing on disk--thus ignoring completely the changes made to the image. To get around this you must fake out NXImage by forcing the cache to be the best representation of the image.
The following code snippet illustrates what you must do:
/* When the user opens the image */
rep = [[NXBitmapImageRep alloc] initFromFile:fileName];
[rep getSize:&imageSize];
image = [[NXImage alloc] initSize:&imageSize];
[image useRepresentation:rep];
This code sample initializes an NXBitmapImageRep from the file containing the opened image, and the NXImage is initialized from this representation. Now the NXImage does not have a file which can serve as its best representation--it only has the cache. Thus when you tell NXImage to writeTIFF:, the cache with all of the user's scribbles is written out properly.
Valid for 1.0, 2.0, 3.0
Q: I have allocated an instance of NXImage and an instance of NXBitmapImageRep. I then tell the NXImage to use the rep instance, like this:
NXRect originalSize;
id myRep, myImage;
int bitsPerPixel;

myRep = [[NXBitmapImageRep alloc] initFromFile:fileName];
myImage = [[NXImage alloc] init];
[myImage useRepresentation:myRep];
Then, later in my application I query the rep instance (as follows) and the query fails because myRep is nil! Why is this?
bitsPerPixel = [myRep bitsPerPixel]; /* this fails -- myRep is nil ! */
A: This is not a bug. Once you have ``given'' the NXBitmapImageRep instance to NXImage (by calling useRepresentation:), the NXImage "owns" that rep and can do what it wishes with it. (This is also true for any class of rep instance, not just NXBitmapImageRep.) What the NXImage typically does is turn that representation into an NXCachedImageRep and then free the NXBitmapImageRep. To prevent this behavior, send setDataRetained:YES to the NXImage instance (setDataRetained: defaults to NO); the NXImage then does not free the NXBitmapImageRep. For example, to correct the above example, add the following line prior to calling useRepresentation:
[myImage setDataRetained:YES];
Valid for 2.0, 3.0
Q: I want my application to periodically and automatically perform an operation. For instance, I want to save a backup file every 5 minutes, or I want to create a blinking cursor. How do I go about doing that? Should I use timer events?
A: What you actually want is a timed entry. The Display PostScript routines DPSAddTimedEntry() and DPSRemoveTimedEntry() are used to start and stop timed entries; they are described in the DPS section of the NeXT developer documentation. You install a timed entry to run at a specific interval and supply a handler function which is called at each interval. What actually happens is that on each cycle of the event loop, the application checks whether any timed entries are due; if so, the handler function is called, thus "cutting" in line in front of any other events. Timed entries are ``coalesced'' in a sense, because the next occurrence of a timed entry is scheduled only when the current one is processed. There is never more than one occurrence of a timed entry waiting to run, and since timed entries aren't inserted into the event queue, you don't have to worry about them overflowing the queue.
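As a sketch of the shape this takes for the five-minute backup case (the handler signature and the NX_BASETHRESHOLD priority constant are recalled from the dpsclient headers; treat them as assumptions to verify against your own headers):

```
#import <dpsclient/dpsclient.h>

/* Called every five minutes; userData is whatever was passed at install time. */
static void backupHandler(DPSTimedEntry te, double now, void *userData)
{
    /* ... save the backup file ... */
}

/* Install the timed entry: */
DPSTimedEntry te = DPSAddTimedEntry(5.0 * 60.0, backupHandler, NULL,
                                    NX_BASETHRESHOLD);

/* Remove it when no longer needed: */
DPSRemoveTimedEntry(te);
```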
Timer events are not the correct thing to use in this instance, because they are intended to be used in modal loops. Each application has an event queue, where pending events are waiting to be processed. Each application has a main loop which simply polls for events, and then reacts accordingly. Modal loops allow you to create a loop which is secondary to the application loop, and supersedes it for a short time. You use modal loops when you have received one event and are then waiting for another which then terminates the modal loop. The modal loop must ensure that it will continue to get a continuous stream of events, and it does this with timer events. You use timer events in a modal loop in case the user is doing something that is not generating events (such as holding down the mouse button, and not moving it).
There are several programming examples that use timed entries. If you want a simple example of a timed entry, see the BusyBox example under /NextDeveloper/Examples/AppKit. The ClockView class there uses a timed entry to animate the movement of the hands. (Tip: It can be very helpful to drag /NextDeveloper/Examples into your Digital Librarian and index it. Another useful Librarian target is /usr/include.)
Valid for 1.0, 2.0, 3.0
Q: I've installed a timed entry to run at a specific interval in my application. I receive a timed entry and my application goes off to process it. For some reason the processing takes longer than the interval between the timed entries. The result is that a second entry happens before the application finishes processing the previous entry. Will the second entry be queued or does it interrupt?
For Releases 1 and 2: A: Neither. The timed entry interval specifies the time that passes between the time the timed entry function returns and the time it is called again. Thus, if you have an interval of 10 seconds, and your timed entry function takes 5 seconds to execute, your function is called every 15 seconds.
Sometimes this might be a bad thing; it is then the responsibility of your timed entry function to adjust the interval. For instance, in the ClockView class (/NextDeveloper/Examples/Clock under 1.0 and /NextDeveloper/Examples/BusyBox under 2.0), the function that gets called at the top of the minute stops the timed entry and starts it again with a new interval equivalent to the number of seconds left until the next top of the minute. This prevents the clock from ``slowing down'' and missing minutes when the system is slow. (It does not, however, bother with this in the seconds mode; missing a second or two here and there is okay.)
The Animator class shows how to create an ``adjusting'' timed entry, which is a bit more complex. The Animator class is used by /NextDeveloper/Examples/BreakApp under Release 1 and /NextDeveloper/Examples/ToolInspector under Release 2.
For Release 3: A: In Release 3, the system tries to call the function with the requested periodicity, regardless of how long the function takes to execute. However, if the function takes longer than the period to execute, the timed entries do not try to "catch up" to make up for the missed call(s).
In Releases 1 and 2, timed entries were accurate to only 15 ms. In Release 3, they are accurate to ~1 ms.