CA2087501A1 - Computer graphics display and system with shadow generation - Google Patents

Computer graphics display and system with shadow generation

Info

Publication number
CA2087501A1
Authority
CA
Canada
Prior art keywords
pixel
buffer
light source
depth value
pixels
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
CA002087501A
Other languages
French (fr)
Inventor
Frederick James Scheibl
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Publication of CA2087501A1 publication Critical patent/CA2087501A1/en
Abandoned legal-status Critical Current


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00: General purpose image data processing
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/50: Lighting effects
    • G06T 15/60: Shadow generation

Abstract

COMPUTER GRAPHICS DISPLAY METHOD AND SYSTEM WITH SHADOW GENERATION

A computer graphics display system and method are described for rendering a scene formed of at least one geometric primitive as a pixel image having shadows produced by at least one defined light source. Multiple passes are made through the primitive data structure for each light source capable of producing shadows in the scene to be rendered.
In a first pass, the scene is rendered to a frame buffer in the usual way, but using only the ambient component of the light specification, and a first Z-buffer is updated with the viewpoint Z value. For each defined light source (i), two additional passes (PASS 2i & 2i+1) through the data structure are required. In the first of these, a transformation matrix is set up in such a way that the viewpoint is moved to the position of the light source. The scene is then rendered in the usual way except that the frame buffer is not updated, and a second Z-buffer (light source view Z-buffer) is used instead of the first Z-buffer.
In the next pass, the shaded image and shadows are generated in parallel using the content of the first and second Z-buffers. When the frame buffer is updated, it is accomplished in a cumulative manner with each computed intensity value due to a specific light source being added to any value already stored there. In this way, intensities resultant from each of multiple light sources are accumulated on the image.

Description


COMPUTER GRAPHICS DISPLAY METHOD
AND SYSTEM WITH SHADOW GENERATION

Technical Field

The present invention relates generally to computer graphics generation and display, and more particularly, to a method and system for rendering a scene formed of at least one geometric primitive as a pixel image having a shadow(s) cast by another geometric primitive which is blocking the illumination of one or more defined light source(s).

Background Art

The quality of computer graphics display system images and the time required to process these images have been steadily improving with each new generation of graphics display equipment. Continued improvement is clearly beneficial since images which approach photographic quality and project a real world paradigm are much easier to interpret, while high speed processing allows more productive user interaction with displayed images. For example, currently available graphics display systems are able to provide highly interactive images with shading of displayed objects. Screen shading accurately models the surface effects of different types of lighting sources on displayed objects, and the shading effects provide visible cues as to the shape and contours of the objects. Much of the processing required to produce shading effects is done in hardware which interpolates colors between polygon vertices, e.g., using a well known technique such as Gouraud Shading.

Other visual cues such as surface textures, motion blur, object reflections and shadows can also be added to an image to further improve its quality. However, these enhancements traditionally require extensive software processing cycles, which make the graphics system less user interactive. The present invention specifically addresses improving the processing time for one of these enhancements, namely, generating shadows cast when objects block the illumination of one or more defined light sources in a scene to be rendered.

Typical computer graphics systems are capable of determining the surface color of an object at a given pixel and whether a pixel is visible from a defined viewpoint, and therefore whether the pixel belongs in the associated frame buffer. To accomplish this, it is necessary to perform an interpolation of R,G,B and Z in X,Y space (six-axis interpolation). Since any information as to the positional relationship between light sources and objects is discarded by the geometry processing engine of the graphics system after the lighting calculation step, it is impossible for the subsequent raster processing sub-system to determine if an object casts a shadow on another object to appear in the pixel image.

As described by P. Bergeron in an IEEE September 1986 publication entitled "A General Version of Crow's Shadow Volumes" (pp. 17-28), there are presently five known classes of shadow generation algorithms. Namely:

(1) Shadow computation during display (A. Appel, "Some Techniques for Shading Machine Renderings of Solids," Proc. Spring Joint Computer Conference, Thompson Books, Washington, DC, pp. 37-45, 1968);

(2) Polygon shadow generation based on clipping transformation (Atherton and Weiler, "Polygon Shadow Generation," Computer Graphics, (Proc. SIGGRAPH 78) Vol. 12, No. 3, pp. 275-281, July 1978);

(3) Shadow volumes (F. Crow, "Shadow Algorithms for Computer Graphics," Computer Graphics, (Proc. SIGGRAPH 77) Vol. 11, No. 3, pp. 242-248, July 1977);

(4) Z-buffer shadow computation (L. Williams, "Casting Curved Shadows on Curved Surfaces," Computer Graphics, (Proc. SIGGRAPH 78) Vol. 12, No. 3, pp. 270-274, July 1978); and

(5) Ray tracing (see Whitted, "An Improved Illumination Model for Shaded Display," Comm. ACM, Vol. 23, No. 6, pp. 343-349, June 1980).

Of the above techniques, only the Z-buffer algorithm approach described by L. Williams operates in image space and allows a scene of any complexity to be handled. Briefly described, the Williams shadow generation method utilizes two passes through a Z-buffer algorithm, one for a viewer and one for a light source. Determination of whether a surface is to be shadowed is made by using image-precision calculations.

The Williams algorithm begins by calculating and storing just the Z-buffer values for an image from the perspective of the light source, wherein increasing values represent increasing distance. Next, the Z-buffer and the image are calculated from the viewpoint of an observer using a Z-buffer algorithm with the following modification. Whenever a pixel is determined to be visible, its object-precision coordinates in the observer's view (Xo,Yo,Zo) are each computationally transformed into coordinates in the light source's view (X'o,Y'o,Z'o). The transformed coordinates X'o and Y'o are used to select a value ZL in the light source's Z-buffer to be compared with the transformed value Z'o. If ZL is closer to the light source than is Z'o, then there is something blocking the light from that point, and the pixel is shaded as being in shadow; otherwise, the point is visible from the light and it is shaded as being lighted. This computationally intensive approach requires extensive transformation calculations between the light source and viewpoint view coordinates for each pixel. Because of these computations, user interactiveness with the computer graphics system is less than optimal, and degrades significantly with the addition of a second or more light source.
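For illustration only, the per-pixel test just described can be sketched as follows in C; the transform helper, buffer size and all names here are assumptions for this sketch, not taken from the Williams paper:

#include <stdio.h>

/* Sketch of the Williams per-pixel shadow test (all names hypothetical).
 * Convention from the text: in the light's depth buffer, increasing
 * values represent increasing distance from the light. */
typedef struct { float x, y, z; } Point3;

extern Point3 eye_to_light(Point3 p);     /* observer view -> light view */
extern float  shadow_map[512][512];       /* Z-buffer rendered from the light */

int in_shadow(Point3 visible_pt, float bias)
{
    Point3 lp = eye_to_light(visible_pt); /* transform every visible pixel */
    int xs = (int)lp.x, ys = (int)lp.y;
    /* Something nearer the light than this point occludes it. */
    return shadow_map[ys][xs] < lp.z - bias;
}

The per-pixel call to eye_to_light is exactly the transformation cost the invention avoids by interpolating the light source view coordinates during span generation, as described below.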

: ~ " ~, 2087~
Thus, a need continues to exist i~ the computer yraphics processing art for a new shadow generation technigue which solves the shadow generatlon prob]em in image space and im~roves upon the performarlce o prevlous ~hadowing approaches.

Disclosure of Invention

Briefly described, this invention provides a computer graphics display method and system for rendering a scene formed of at least one geometric primitive as a pixel image with shadow(s) resulting from other objects blocking illumination from one or more defined light sources. The graphics display system includes a first Z-buffer, a second Z-buffer and a frame buffer, each of which has storage locations corresponding to the pixels which are to form the image. The image has a defined viewpoint and at least one light source, such that each primitive is represented by one or more pixels (X,Y) relative to the defined viewpoint and one or more pixels (Xs,Ys) relative to the defined light source. In one embodiment, the image rendering method includes the steps of: for each pixel (X,Y), generating a first depth value (Z) representative of the depth from the defined viewpoint to that pixel and saving the first depth value in the corresponding pixel location of the first Z-buffer if the first depth value (Z) is visible to the defined viewpoint in comparison with the depth value stored in the corresponding pixel location of the first Z-buffer; for each pixel (Xs,Ys), generating a second depth value (Zs) representative of the depth of the pixel to the defined light source and saving the second depth value (Zs) in the corresponding pixel location of the second Z-buffer if the second depth value (Zs) is visible to the light source in comparison with a depth value stored in the corresponding pixel location of the second Z-buffer; and for each pixel forming the pixel image, generating, using only the defined light source, a representative color value (Rs,Gs,Bs) and adding to a corresponding (X,Y) pixel location of the frame buffer the representative color value (Rs,Gs,Bs) for the pixel if the corresponding depth value in the first Z-buffer identifies a pixel visible to the viewpoint and if the corresponding depth value in the second Z-buffer identifies a pixel visible to the light source. Further specific method details and enhancements are also described and claimed.

In another aspect, the invention provides a computer graphics display system with shadow generation capability for rendering a scene formed of at least one geometric primitive as a pixel image. The scene to be rendered includes a defined ambient lighting value, viewpoint and light source such that each primitive has one or more pixels (X,Y) associated therewith in a first, viewpoint coordinate space, and one or more pixels (Xs,Ys) associated therewith in a second, light source coordinate space. The graphics display system includes a geometry processing engine for converting the at least one geometric primitive into device X,Y,Z coordinates relative to the viewpoint and device Xs,Ys,Zs coordinates relative to the light source. A frame buffer, first Z-buffer and second Z-buffer are also provided, each of which has storage locations corresponding to the pixels forming the image of the rendered scene. A raster processing engine is coupled to receive the output of the geometry processing engine, and is also coupled to the frame buffer, first Z-buffer and second Z-buffer for controlling the storage of values thereto. The raster processing engine includes: first storage means which directs the storage of pixel depth values in the first Z-buffer whenever a determined pixel depth value (Z) is equal or closer to the viewpoint than a depth value already stored in the corresponding pixel location of the first Z-buffer; second storage means which directs the storage of second pixel depth values (Zs) in the second Z-buffer whenever a determined depth value (Zs) is equal or closer to the light source than a depth value already stored in the corresponding pixel location of the second Z-buffer; generating means for producing a representative color value (Rs,Gs,Bs) for each pixel forming the image using only the defined lighting source; and accumulating means for adding to the corresponding (X,Y) pixel location of the frame buffer the generated representative color value (Rs,Gs,Bs) for the pixel if the corresponding depth value in the first Z-buffer identifies a pixel visible to the viewpoint and the corresponding depth value in the second Z-buffer identifies a pixel visible to the light source.

To summarize, the present invention comprises a shadow generation approach which advantageously enhances processing times in comparison with known shadow rendering techniques, so that user interactiveness with the graphics system is optimized. The shadow generation approach, which is substantially implemented in hardware, maintains a mapping between a viewpoint and a light source by interpolating the respective Z-buffer addresses into dual Z-buffers during the span generation process. Since shadows are useful for visualizing the spatial relationship among objects, use of the invention provides another significant visual cue to interpreting a rendered scene, while still maintaining low computational overhead.

Shadow processing pursuant to the invention has a performance comparable to the non-shadowed rendering of Gouraud shaded polygons with hidden surfaces removed. Novel characteristics include the use of nine-axis interpolation instead of the traditional six-axis, the accommodation of multiple light sources through a simple technique of multiple passes over the scene, and correction of depth buffer registration errors by Z displacement in the direction of a light source view gradient in Xs,Ys space (i.e., light source view coordinates).

Brief Description of Drawings

These and other objects, advantages and features of the present invention will be more readily understood from the following detailed description of certain preferred embodiments of the present invention, when considered in conjunction with the accompanying drawings in which:

FIG. 1 is a general block diagram representation of an interactive computer graphics display system to incorporate the present invention;

FIG. 2 is a block diagram representation of a conventional computer graphics display system;

FIG. 3 is a block diagram representation of a computer graphics display system pursuant to the present invention;

FIGS. 4A, 4B, 4C1 & 4C2 comprise a flowchart representation of one embodiment of pixel image processing pursuant to the computer graphics display system of FIG. 3; and

FIG. 5 graphically illustrates a potential problem arising with use of the computer graphics display system of FIG. 3, which is addressed by the pixel image processing method of FIGS. 4A-4C2.

Best Mode For Carrying Out The Invention

Reference is now made to the drawings in which the same reference numbers are used throughout the different figures to designate the same or similar components.

In FIG. 1, a general block diagram representation of the interaction between a graphics system 10 and a host computer 12 is shown. Computer 12 includes an application program 14 and an application data structure 16. Application program 14 stores into and retrieves data from application data structure 16 and sends graphics commands to graphics system 10. Data structure 16 holds descriptions of real or abstract figures or objects which are to appear on a graphics monitor (not shown) associated with system 10. An object description stored within data structure 16 comprises geometric coordinate data which defines the shape of components of the object, object attributes, and connectivity relationships and positioning data that define how the components fit together. For example, objects are commonly defined by geometric primitives or polygons, such as quadrilaterals or triangles.

Graphics system 10 includes a geometry processing engine 18 and a raster processing engine 20 coupled thereto. Engines 18 & 20 comprise separate sub-systems, each of which typically includes multiple pipelined or parallel connected processors. Engine 18 conventionally implements floating point geometry processing to convert information from modeling or world coordinates and colors to screen coordinates and colors through several well known processing steps, including tessellation, lighting calculation, clipping, transformation, mapping, etc. Certain of these processing steps are discussed further below as they relate to the present invention. Raster processing engine 20 traditionally implements a scan conversion operation wherein individual pixels are integer processed (e.g., through an edge processor and a span processor coupled thereto) for ultimate screen display.

To facilitate a description of the unique structure/function of a shadow generation unit in accordance with the present invention, the operation of a typical shaded solid rendering system (which includes a geometry processing engine 18 and a raster processing engine 20) is initially discussed with reference to the structural/functional blocks of FIG. 2.

Information in an associated application data structure 16, including a 3D object database 30 and a light source database 32, is retrieved by geometry processing engine 18 of graphics system 10. Objects in a three-dimensional scene to be two-dimensionally represented are typically composed of surface polygons or specified in a functional or parametric way such as B-spline surfaces. Information such as X,Y,Z vertices in world coordinates and normals to the provided surfaces are commonly included. Engine 18 tessellates the provided information and implements well known lighting equations (e.g., Gouraud shading) using retrieved, predefined ambient, diffuse and specular lighting components.

During the lighting calculation process 34, the color and intensity at each vertex is determined by lighting equations which consider object and ambient/light source color, direction of any light vector, surface and/or vertex normals, and the direction of the reflection vector if specular highlights are desired. Output in world coordinates from block 34 are polygon data vertices (X,Y,Z) and lighting intensity levels (R,G,B). This information is then clipped, projected and mapped at process block 36 to the desired viewport in screen coordinates.

Input to raster processing engine 20 are the polygon vertices specified as integer X,Y,Z coordinates in screen space, and the R,G,B components of intensity. This information is fed to edge processing 38 which decomposes each triangle, for example, into a series of horizontal spans by interpolating the X,Z,R,G and B values with respect to Y between the vertices. Each span on a given scan line (Y value) consists of X(l) and X(r) (i.e., the horizontal left (l) and horizontal right (r) limits of the span), R,G,B and Z at the left edge, and the partial derivatives of R,G,B and Z with respect to X (dR/dX, dG/dX, dB/dX, dZ/dX) as determined from the plane equation. The Y address (Y-ADDR) is made directly available to a frame buffer 42 and a Z-buffer 44 via line 39. A span processing operation 40 decomposes each horizontal span into individual pixels (and considers storing of intensity values to frame buffer 42) according to the following process:

R(X(l),Y) = R(l)
G(X(l),Y) = G(l)
B(X(l),Y) = B(l)
Z(X(l),Y) = Z(l)
For X = X(l)+1 to X(r)
    R(X,Y) = R(X-1,Y) + dR/dX
    G(X,Y) = G(X-1,Y) + dG/dX
    B(X,Y) = B(X-1,Y) + dB/dX
    Z(X,Y) = Z(X-1,Y) + dZ/dX
    If ( Z(X,Y) > Z-BUFFER(X,Y) ) Then
        Update R(X,Y), G(X,Y), B(X,Y), Z-BUFFER(X,Y)

The derived X address (X-ADDR) is made available to both frame buffer 42 and Z-buffer 44, while the derived intensity values (R,G,B) are available only to buffer 42 and the computed Z depth value (Z VALUE) only to Z-buffer 44. Frame buffer 42 and Z-buffer 44, which can be accessed in parallel, each have a single entry for each (X,Y) pixel address. The comparison (Z(X,Y) > Z-BUFFER(X,Y)) ensures that only the pixels in a triangle that are closest to a viewer are written to frame buffer 42 after the entire scene has been processed. Typically, one pass through the data structure is sufficient to render a scene with light source shading of objects and hidden surface removal. Also, the order of processing is generally irrelevant, so no sorting of data is required. The calculation of R,G,B and Z as a function of X and Y is referred to in the art as 6-axis interpolation, since each vertex represents a point in 6-space. Buffer 42 is used to refresh a display monitor (not shown) via line 45.
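For illustration, the conventional span loop above might be coded as in the following C sketch; buffer names and sizes are assumptions, and the depth convention matches the '>' compare of the pseudocode (larger Z is closer to the viewer):

/* Minimal sketch of conventional 6-axis span processing (names assumed). */
typedef struct {
    int   xl, xr, y;          /* span limits X(l), X(r) on scan line Y     */
    float r, g, b, z;         /* values at the left edge                   */
    float dr, dg, db, dz;     /* partial derivatives with respect to X     */
} Span6;

extern float frame_buf[1024][1024][3];   /* frame buffer 42                */
extern float zbuf[1024][1024];           /* Z-buffer 44; larger Z = closer */

void process_span(Span6 s)
{
    float r = s.r, g = s.g, b = s.b, z = s.z;
    for (int x = s.xl; x <= s.xr; x++) {
        if (z > zbuf[s.y][x]) {          /* closest pixel so far: visible  */
            zbuf[s.y][x] = z;
            frame_buf[s.y][x][0] = r;
            frame_buf[s.y][x][1] = g;
            frame_buf[s.y][x][2] = b;
        }
        r += s.dr; g += s.db * 0 + s.dg; b += s.db; z += s.dz;  /* forward differences */
    }
}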

Of the known classes of shadow generation algorithms (see Background section), most if not all are implemented in the geometry processing phase of the graphics system. For example, see the Z-buffer shadow computation approach described by L. Williams in the above-referenced article entitled "Casting Curved Shadows on Curved Surfaces." Such an approach, however, is similar to having dual pipelines and managing them simultaneously. Obviously, with the addition of multiple light sources the image-precision calculations of the Williams approach quickly become too complex.

In contrast to most previous shadowing approaches, the present inventive shadowing technique is implemented within the raster processing sub-system of the graphics unit. The approach is functionally similar in several ways to that described above with respect to FIG. 2; however, a second Z-buffer is added to track pixel depth of geometric primitives relative to one or more discrete light sources applied to the scene to be represented. Multiple passes are made through the data structure for each light source capable of producing shadows in the scene. An overview of this technique is next discussed with reference to FIG. 3, which depicts a modified structural/functional graphics system pursuant to the present invention.

As shown, information from an associated application data structure 116 (including a 3D object database 130 and a light source database 132) of a host computer 112 is initially fed to a geometry processing engine 118 of graphics system 100. System 100 also includes a raster processing engine 120 coupled to engine 118. Operationally, engine 118 begins processing by tessellating and applying lighting equations 134 to the retrieved data. In addition to forwarding, for example, polygon data vertices X,Y,Z in world coordinates and color intensities R,G,B to the clip, project and map function 136 of engine 118, function block 134 also generates polygon data vertices Xs,Ys,Zs in world coordinates. Vertices Xs,Ys,Zs represent the retrieved data vertices defined in relation to the light source(s) applied to the scene to be rendered, i.e., polygon data vertices Xs,Ys,Zs are generated in each of one or more light source view coordinates Xs,Ys,Zs (wherein s=1,...N light sources).

A second Z-buffer 144 is provided pursuant to the present invention for accommodating Zs values representative of the pixel depth from the point of view of a predefined light source rather than from a viewer. The pixel addresses to the second Z-buffer (i.e., the light source view buffer) are designated Xs,Ys and are not the same X,Y pixel addresses which access a frame buffer 142 and a first Z-buffer 143 (i.e., viewpoint Z-buffer).
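In storage terms this amounts to something like the following (a C sketch; resolutions and names are assumptions for illustration):

/* Assumed organization of the three pixel-parallel buffers (illustrative). */
#define W 1024
#define H 1024

float frame_buf[H][W][3];   /* accumulated R,G,B, addressed by viewpoint X,Y  */
float view_zbuf[H][W];      /* first Z-buffer 143: depth from viewpoint (X,Y) */
float light_zbuf[H][W];     /* second Z-buffer 144: depth from light (Xs,Ys)  */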

As described further below, the shadow generation algorithm set forth herein operates by taking multiple passes over the data, depending upon the number of light sources in the scene. In a first pass, the scene is rendered to the frame buffer in the usual way, updating the first Z-buffer with the viewpoint Z value, but using only the ambient component of the light specification to determine the corresponding color values R,G,B to be stored. Other than using only ambient light in the lighting calculation, first pass processing is essentially as described above with reference to FIG. 2.

Input to raster processing engine 120 are the polygon vertices specified as integer X,Y,Z coordinates and Xs,Ys,Zs coordinates in screen space, and the R,G,B components of intensity due to the defined lighting. This information is fed to polygon edge processing 138 which decomposes each triangle, for example, into a series of horizontal spans by interpolating the X,Z,Xs,Ys,Zs,R,G & B values with respect to Y between the vertices. Each span on a given scan line (Y value) consists of X(l) and X(r) (i.e., the horizontal left (l) and horizontal right (r) limits of the span), R,G,B,X,Z,Xs,Ys & Zs at the left edge, and the partial derivatives of Z,R,G,B,Xs,Ys & Zs with respect to X as determined from the plane equation. The Y address (Y-ADDR) is made directly available to frame buffer 142 and first Z-buffer 143 via line 139. A span processing operation 140 subsequently decomposes each horizontal span into individual pixels as described further below. Buffer 142 is used to refresh a display monitor (not shown) via line 145.

After completing the first pass, the frame buffer contains the scene as it would appear if no specific light sources, besides ambient lighting, were identified. The resultant screen image appears dim. The first Z-buffer (viewpoint Z-buffer) contains the Z values for the pixels which represent the closest points to the viewer in viewpoint coordinate space.

For each defined light source, two additional processing passes through the data structure are required. In the first of these, a transformation matrix is set up in such a way that the viewpoint is moved to the position of a light source. The scene is then rendered in the usual way except that the frame buffer is not updated, and the second, light source view Z-buffer is used instead of the first, viewpoint view Z-buffer.

Span processing in engine 120 is represented as:

Z(X(l),Y) = Z(l)
For X = X(l)+1 to X(r)
    Z(X,Y) = Z(X-1,Y) + dZ/dX
    If ( Z(X,Y) > SECOND_ZBUFFER(X,Y) ) Then
        Update SECOND_ZBUFFER(X,Y)

In the next pass, the shaded image and shadows are generated in parallel. The transform/clip/map process 136 generates vertex data relative to both the light source and the viewpoint, and the lighting calculations generate the intensity based on the diffuse and the specular components of the subject light source with no ambient lighting considered. Input to edge processing then consists of triangles with 9-valued vertices, and interpolation proceeds in 9-space. In particular, each vertex consists of:

X  - screen space X coordinate (from viewpoint)
Y  - screen space Y coordinate (from viewpoint)
Z  - screen space Z coordinate (from viewpoint)
R  - red component of intensity for light
G  - green component of intensity for light
B  - blue component of intensity for light
Xs - screen space X coordinate (from light source)
Ys - screen space Y coordinate (from light source)
Zs - screen space Z coordinate (from light source)

Edge processing is similar to that described above with respect to FIG. 2, with the addition of edge and derivative calculations for Xs,Ys,Zs (i.e., dXs/dX, dYs/dX, dZs/dX). As noted, a span now consists of X(l) and X(r), the left edge values of R,G,B,Z,Xs,Ys,Zs and the partial derivatives of R,G,B,Z,Xs,Ys and Zs with respect to X. These can all be generated from the plane equations in X and Y.
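For illustration, such a 9-valued vertex might be represented as follows (a C sketch; the struct layout and field types are assumptions, not the patent's):

/* Hypothetical 9-axis vertex: each vertex is a point in 9-space. */
typedef struct {
    int   x, y;       /* screen space position from the viewpoint            */
    float z;          /* depth from the viewpoint (first Z-buffer test)      */
    float r, g, b;    /* diffuse + specular intensity for the current light  */
    int   xs, ys;     /* screen space position from the light source         */
    float zs;         /* depth from the light (second Z-buffer test)         */
} Vertex9;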

Span processing proceeds by interpolating R,G,B,Z,Xs,Ys,Zs between X(l) and X(r), using both the first and second Z-buffers 143 & 144, respectively, to decide whether to update the frame buffer 142. Since processing only involves the diffuse and specular components for one light source, a compare in the second Z-buffer (light source view Z-buffer) that indicates a no write condition corresponds to a shadow, with only the ambient background remaining in the frame buffer. When the frame buffer is updated, it is accomplished in a cumulative manner with each computed intensity value being added to the value already there. In this way, multiple light sources are accumulated on the image. As displayed on the monitor (not shown), the visual effect after each light source is processed is of turning a light "on". The algorithm for span processing on the third pass is:

R(X(l),Y) = R(l)
G(X(l),Y) = G(l)
B(X(l),Y) = B(l)
Z(X(l),Y) = Z(l)
Xs(X(l),Y) = Xs(l)
Ys(X(l),Y) = Ys(l)
Zs(X(l),Y) = Zs(l)
For X = X(l)+1 to X(r)
    R(X,Y) = R(X-1,Y) + dR/dX
    G(X,Y) = G(X-1,Y) + dG/dX
    B(X,Y) = B(X-1,Y) + dB/dX
    Z(X,Y) = Z(X-1,Y) + dZ/dX
    Xs(X,Y) = Xs(X-1,Y) + dXs/dX
    Ys(X,Y) = Ys(X-1,Y) + dYs/dX
    Zs(X,Y) = Zs(X-1,Y) + dZs/dX
    If ( Z(X,Y) >= FIRST_ZBUFFER(X,Y) ) and
       ( Zs(X,Y) >= SECOND_ZBUFFER(Xs,Ys) ) Then
        Rframebuffer(X,Y) = R(X,Y) + Rframebuffer(X,Y)
        Gframebuffer(X,Y) = G(X,Y) + Gframebuffer(X,Y)
        Bframebuffer(X,Y) = B(X,Y) + Bframebuffer(X,Y)
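For illustration only, the third-pass span loop might be coded as follows in C, reusing the hypothetical buffers sketched earlier; the dual depth compare is what accumulates light where the pixel sees the source and leaves ambient-only shadow elsewhere:

/* Hypothetical third-pass (PASS 2i+1) span processing for one light.
 * Depth convention as above: larger values are closer. */
typedef struct {
    int   xl, xr, y;                        /* span limits on scan line Y  */
    float r, g, b, z, xs, ys, zs;           /* left-edge values            */
    float dr, dg, db, dz, dxs, dys, dzs;    /* derivatives w.r.t. X        */
} Span9;

extern float frame_buf[1024][1024][3];
extern float view_zbuf[1024][1024];         /* first Z-buffer (viewpoint)  */
extern float light_zbuf[1024][1024];        /* second Z-buffer (light)     */

void shadow_span(Span9 s)
{
    float r = s.r, g = s.g, b = s.b, z = s.z;
    float xs = s.xs, ys = s.ys, zs = s.zs;
    for (int x = s.xl; x <= s.xr; x++) {
        /* Lit only if visible to the viewer AND visible to the light. */
        if (z >= view_zbuf[s.y][x] &&
            zs >= light_zbuf[(int)ys][(int)xs]) {
            frame_buf[s.y][x][0] += r;      /* cumulative update           */
            frame_buf[s.y][x][1] += g;
            frame_buf[s.y][x][2] += b;
        }
        r += s.dr; g += s.dg; b += s.db; z += s.dz;
        xs += s.dxs; ys += s.dys; zs += s.dzs;
    }
}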

A more specific explanation of the processing technique of the present invention is next described with reference to the functional flowcharts of FIGS. 4A, 4B, 4C1 & 4C2.

Referring first to FIG. 4A, upon initiating processing, 200 "Start," input information stored in data memory is retrieved, 202 "Retrieve Given Information Stored In Data Memory." Stored information typically includes a list of 3D geometric primitives (polygons, surfaces, etc.) and their attributes (color, surface properties, etc.) which compose the scene to be rendered, specified in or transformable to world coordinates. In addition, viewing information including position of the viewer in world coordinates, size and orientation of viewing window in world coordinates, screen viewport in device coordinates, clipping planes defined in world coordinates and the type of projection (i.e., orthogonal or perspective) are defined. Also, a list of N light sources and their attributes (color, spread, type, etc.) and position (or light ray direction if infinite) are specified in world coordinates. Typically, an ambient light component is also predefined. The ambient light component simulates background light illuminating a surface totally in shadow, and is constant for every surface in the scene.
Next, the frame buffer, first Z-buffer and second Z-buffer are cleared, 204 "Clear Frame Buffer, First Z-Buffer, & Second Z-Buffer," and a viewpoint transformation matrix Mv is created, 206 "Create Viewpoint Transformation Matrix Mv." Matrix Mv is used to transform the vertices of scene primitives in world coordinates to device coordinates viewed from the viewpoint position using the given window and viewport size and orientation. Thereafter, a light source transformation matrix Mi, wherein i=1...N, is created for each of the N defined light sources, 208 "Create Light Source Transformation Matrix Mi, Wherein i=1...N Light Sources." For each light source, matrix Mi is used to transform the vertices of the scene primitives in world coordinates to device coordinates viewed from each light source position (i) through a suitable selection of a window and viewport mapping so as to place the total scene within the confines of the light source Z-buffer device coordinate range. One skilled in the art will understand how to implement such a selection.
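One plausible way to build the rotational part of such a matrix Mi is a standard look-at construction; the following C sketch is entirely an assumption, since the patent leaves the window and viewport selection to the implementer:

#include <math.h>

/* Hypothetical look-at basis for a light source view (not from the patent). */
typedef struct { float x, y, z; } Vec3;

static Vec3 norm3(Vec3 v) {
    float l = sqrtf(v.x * v.x + v.y * v.y + v.z * v.z);
    return (Vec3){ v.x / l, v.y / l, v.z / l };
}
static Vec3 cross3(Vec3 a, Vec3 b) {
    return (Vec3){ a.y * b.z - a.z * b.y,
                   a.z * b.x - a.x * b.z,
                   a.x * b.y - a.y * b.x };
}

/* Rows of the 3x3 rotation for Mi; a full Mi would append translation by
 * -light_pos plus the window/viewport mapping described in the text. */
void light_basis(Vec3 light_pos, Vec3 scene_center, Vec3 up, Vec3 basis[3])
{
    Vec3 fwd = norm3((Vec3){ scene_center.x - light_pos.x,
                             scene_center.y - light_pos.y,
                             scene_center.z - light_pos.z });
    Vec3 right = norm3(cross3(fwd, up));
    basis[0] = right;
    basis[1] = cross3(right, fwd);   /* orthogonalized up vector */
    basis[2] = fwd;
}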

The first processing pass (PASS 1) through the data structure begins with routine geometry processing and includes transformation of triangle vertices to device coordinates using matrix Mv, 210 "For Each Primitive In The Scene Perform Geometry Processing & Transform Triangle Vertices To Device Coordinates Using Matrix Mv." Typical geometric processing includes fetching the primitives from data memory, transforming vertices and control points to world coordinates (if necessary), tessellating primitives into planar polygons such as triangles (if necessary), performing a clipping test and (if necessary) clipping to a defined view volume. Each of these manipulations is well understood by those skilled in the art. Along with transforming triangle vertices to device coordinates using the matrix Mv, perspective division may be performed, again if required. After converting the vertices to device coordinates, edges are scan converted into a series of horizontal or vertical spans, 212 "Scan Convert Edges Into Series Of Horizontal Spans." This typically involves computation of Bresenham parameters or DDA slopes for scan converting of triangle edges and then scan converting the edges into end points of a series of horizontal (or alternatively vertical) spans.

Each X position in each horizontal span is next considered for computation and storage of Z values. This is accomplished by first selecting a horizontal span for processing, 214 "Select A (Next) Horizontal Span For Processing," after which a pixel X position in the selected span is selected, 216 "Select A (Next) Pixel X Position In Selected Span." A depth value for the selected pixel is computed and compared to the corresponding contents of the first Z-buffer, 218 "Compute Z Value Of Selected Pixel And Compare To Corresponding Contents of First Z-Buffer." Logical inquiry is then conducted into whether the selected pixel is visible to the viewpoint, 220 "Is Selected Pixel Visible?" If "yes" (meaning that the calculated Z value is closer to the eyepoint than the corresponding value in the first Z-buffer), then the Z-buffer value is replaced with the just calculated pixel Z value and the frame buffer pixel value is replaced with the corresponding ambient color value, 222 "Update First Z-Buffer With Pixel Z Value & Update Frame Buffer With Ambient Color Value."

From instruction 222 processing passes through junction 224 to inquire whether all pixels in the subject span have been processed, 226 "All Pixels in Span Selected?" (Alternatively, if the selected pixel is not visible to the viewpoint, processing proceeds directly from inquiry 220 through junction 224 to inquiry 226.) While the answer to inquiry 226 remains "no," processing continues to loop back to instruction 216 for selection of a next pixel X position in the selected span. Once all pixels in a selected span have been processed, inquiry is made into whether all spans have been selected for processing, 228 "All Spans Selected?" If "no," processing returns to instruction 214 for selection of a next horizontal span. If "yes," then first pass processing (PASS 1) through the data structure is complete and processing proceeds to FIG. 4B.

FIGS. 4B, 4C1 & 4C2 together represent the two novel passes through the data structure pursuant to the present invention which are to be accomplished in combination for each defined light source (i.e., PASS 2i and PASS 2i+1, wherein i=1...N, and N equals the number of light sources).

Initially, a defined light source is selected, 230 "Select A (Next) Defined Light Source (i)," and the second Z-buffer is cleared, 232 "Clear Second Z-Buffer." Thereafter, for each primitive in the data structure the second pass (PASS 2i) includes performing geometry processing and transforming triangle vertices to device coordinates (i.e., light source view coordinates) using the corresponding matrix Mi, 234 "For Each Primitive In The Scene, Perform Geometry Processing And Transform Triangle Vertices to Device Coordinates Using Matrix Mi."

Again, geometry processing might include fetching a primitive from data memory, transforming vertices and control points to world coordinates (if necessary), tessellating primitives into planar polygons (if necessary), performing clip testing and clipping to a view volume (if necessary). Thereafter, edges are scan converted into a series of horizontal spans, 236 "Scan Convert Edges Into Series Of Horizontal Spans." Instruction 236 includes computing Bresenham parameters or DDA slopes for scan converting of the triangle edges and actual scan conversion of the triangle edges into end points of a series of horizontal (or alternatively vertical) spans.

Depth buffer processing similar to that conducted in FIG. 4A next occurs. Namely, a horizontal span for processing is initially selected, 238 "Select A (Next) Horizontal Span For Processing," and a pixel Xs position therein is identified, 240 "Select A (Next) Pixel Xs Position In Selected Span." The depth value of the selected Xs pixel is interpolated and compared to the stored value in the corresponding location of the second Z-buffer, 242 "Compute Z Value of Selected Pixel and Compare to Corresponding Contents of Second Z-Buffer." Inquiry is then made into whether the selected pixel is visible to the subject light source, 244 "Is Selected Pixel Visible To Light Source (i)?" If "yes," then the corresponding location in the second Z-buffer is updated with the just calculated pixel depth value Zs, 246 "Update Second Z-Buffer With Pixel Zs Value." Once updated, or if the subject pixel is not visible to the light source (e.g., it is in shadow), processing passes through junction 248 to inquiry 250 "All Pixels In Span Selected?" Processing then continues to loop back to instruction 240 to select a next pixel Xs position in the selected span until all pixels in the span have been processed. Thereafter, inquiry is made whether all spans have been selected, 252 "All Spans Selected?" If "no," processing returns to instruction 238 where the next span is selected. Once all spans have been selected, the second pass is complete and processing passes to FIGS. 4C1 & 4C2 for a third pass (PASS 2i+1) through the data structure.

Again, initial geometry processing is performed, 256 "For Each Primitive In The Scene Perform Geometry Processing." Processing includes fetching a primitive from data memory, transforming vertices and control points to world coordinates (if necessary), tessellating primitives into planar polygons (if necessary) and clipping to a view volume (if necessary). In addition, vertex normals are to be transformed to world coordinates, i.e., if not already therein. Thereafter, vertex colors are calculated, 258 "Calculate Vertex Colors." The colors are calculated from the specular and diffuse lighting equations only, using the primitive, surface and color attributes, the vertex normals, the viewing parameters and light source attributes. Next, triangle vertices are transformed to device coordinates using matrix Mv, 260 "Transform Triangle Vertices to Device Coordinates Using Matrix Mv," and triangle vertices are also transformed to device coordinates using the subject light source matrix Mi, 262 "Transform Triangle Vertices to Device Coordinates Using Matrix Mi." The partial derivatives for the light source device coordinates are then calculated, 264 "Calculate Partial Derivatives dZs/dXs and dZs/dYs."

Preferably, a processing step is inserted at this point whereby vertex values of Zs are adjusted toward the subject light source (i) by a bias value equal to the sum of the calculated derivatives. This is needed since the contents of the first Z-buffer and second Z-buffer represent two different view coordinates, and the effective resolution of X and Y for the polygon edges will be different in certain cases; i.e., a range of pixels in X,Y space will not map to the same addresses in Xs,Ys space in the second Z-buffer. The problem is illustrated in FIG. 5. As shown therein, Z-buffer 1 accommodates Z values from the eyepoint view, while Z-buffer 2 contains Z values from a light source view. If the depicted effect is not considered, there will be certain pixels which will be erroneously processed as in shadow, as if shielded by a neighboring pixel(s). In FIG. 5, for example, pixel E has a Zs value of 8, but maps to Z-buffer 2 with a value of 14 (calculated when the Zs values are interpolated against Xs,Ys instead of X,Y). This pixel would be determined to be in shadow without some compensation. The problem is not unique to the present invention; e.g., see the above-referenced Williams article. The solution presented therein was to add a bias to the calculated Zs values to prevent the effect when each point is transformed to the light source view. Since the present invention interpolates within the polygon rather than transforming each point, the bias is added to the Zs values of, for example, the triangle vertices. Furthermore, greater accuracy is obtained by calculating the biases as a function of the actual surface gradient of the polygon with respect to the light source viewpoint Xs,Ys plane. This yields a more accurate result than the constant bias proposed in the Williams article. In particular, when the edge processing function is calculating the derivatives for span generation, it also calculates the values of dZs/dXs and dZs/dYs, and adjusts the vertex Zs value by the sum of the derivatives. This has the effect of moving the triangle closer to the light source by its surface gradient in Xs and Ys and causes proper registration of the Xs,Ys plane.
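A minimal sketch of that vertex adjustment (C; the gradient helper is an assumed stand-in for the edge processor's plane-equation setup, and larger Zs is taken to be closer to the light, matching the compares above):

/* Hypothetical gradient-based Zs bias, per the correction described above.
 * plane_gradient_zs() is assumed to return dZs/dXs and dZs/dYs from the
 * triangle's plane equation in light source view coordinates. */
typedef struct { float xs, ys, zs; } LightVtx;

extern void plane_gradient_zs(const LightVtx tri[3],
                              float *dzs_dxs, float *dzs_dys);

void bias_triangle(LightVtx tri[3])
{
    float gx, gy;
    plane_gradient_zs(tri, &gx, &gy);
    float bias = gx + gy;        /* "sum of the calculated derivatives" */
    for (int i = 0; i < 3; i++)
        tri[i].zs += bias;       /* displace triangle toward the light  */
}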

Continuing with the third pass processing (PASS 2i+1) of FIG. 4C1, scan conversion of edges into a series of horizontal spans next occurs, 268 "Scan Convert Edges Into Series of Horizontal Spans." This involves computation of the Bresenham parameters or DDA slopes for scan converting the triangle edges to produce X,Y,Z,R,G,B,Xs,Ys & Zs, and then scan converting of the triangle edges into the end points of a series of horizontal (or alternatively vertical) spans.

A pixel X position is identified by first selecting a horizontal span for processing, 270 "Select A (Next) Horizontal Span For Processing," and then selecting a pixel position within that span, 272 "Select A (Next) Pixel X Position In Selected Span." The Z value of the selected pixel is then computed and compared to the contents of the viewpoint depth buffer, 274 "Compute Z Value of Selected Pixel and Compare To Corresponding Contents of First Z-Buffer." Next, the Xs,Ys,Zs values of the selected pixel are computed/interpolated and the calculated Zs value is compared to the contents of the second Z-buffer at location Xs,Ys, 276 "Compute Xs,Ys,Zs Values of Selected Pixel and Compare Zs to Contents of Second Z-Buffer At Location Xs,Ys." Inquiry is then made into whether the selected pixel is visible from both the viewpoint and the subject light source (i), 278 "Is Selected Pixel Visible From Viewpoint and Light Source (i)?" If "yes," then the Rs,Gs,Bs values of the selected pixel are calculated and added to the frame buffer at X,Y, 280 "Calculate Rs,Gs,Bs Values of Selected Pixel and Add to Frame Buffer At X,Y." In other words, if the pixel is visible to both the eyepoint and the selected light source, then intensity values for the pixel are determined and added to the contents of the frame buffer at X,Y.

If either of the conditions in inquiry 278 is not met, or if met then after processing instruction 280, flow is through junction 282 to inquiry 284 "All Pixels In Span Selected?" If "no," then processing returns to instruction 272 where a next pixel X in the selected span is identified for processing. Once all pixels in a span have been processed, then inquiry is made into whether all spans have been processed, 286 "All Spans Selected?" If "no," processing loops back to instruction 270 where a next horizontal span is selected. Once all pixels and spans have been processed, inquiry is made whether all light sources have been selected, 288 "All Light Sources (i) Selected?" If "no," then processing returns to FIG. 4B at instruction 230, i.e., select a next defined light source (i) to accomplish PASSES 2i & 2i+1. Once all light sources have been selected, the frame buffer contains the sum of color intensities due to any ambient light and each of the defined light sources (i), including any shadows defined thereby, and processing is terminated, 290 "End."
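Pulling the walkthrough together, the overall pass structure can be sketched as a top-level driver (C; every function name here is an assumed placeholder for the processing described above, not an interface from the patent):

/* Hypothetical driver for the multi-pass shadow generation algorithm. */
extern void clear_frame_buffer(void);
extern void clear_first_zbuffer(void);
extern void clear_second_zbuffer(void);
extern void render_pass1(void);          /* PASS 1: ambient only          */
extern void render_pass2(int light);     /* PASS 2i: light-view depths    */
extern void render_pass3(int light);     /* PASS 2i+1: accumulate light i */

void render_scene_with_shadows(int num_lights)
{
    clear_frame_buffer();
    clear_first_zbuffer();
    render_pass1();                      /* dim, ambient-lit base image   */

    for (int i = 0; i < num_lights; i++) {
        clear_second_zbuffer();          /* per instruction 232           */
        render_pass2(i);
        render_pass3(i);                 /* visual effect: light i "on"   */
    }
    /* Frame buffer now sums ambient plus each light's contribution,
     * with shadows where the second Z-buffer compare failed. */
}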

It will be observed from the above discussion that a unique shadow generation approach has been provided capable of efficiently handling multiple light sources. Significantly reduced processing times in comparison with prior shadow rendering techniques are attained, thereby improving user interactiveness with the graphics system. Shadow processing pursuant to the invention attains a performance comparable to non-shadowed, shaded polygons with hidden surfaces removed. The approach is implemented primarily in hardware and maintains a mapping between a viewpoint and a light source by interpolating the respective Z-buffer addresses into dual Z-buffers during span generation. Novel characteristics include the use of 9-axis interpolation instead of the traditional 6-axis, accommodation of multiple light sources through a simple technique of multiple passes over the scene, and correction of depth buffer registration errors by Z displacement in the direction of a light source view gradient in Xs,Ys space (light source view coordinates).

While the invention has been described in detail herein in accordance with certain preferred embodiments thereof, many modifications and changes therein may be effected by those skilled in the art. Accordingly, it is intended by the appended claims to cover all such modifications and changes as fall within the true spirit and scope of the invention.


Claims (20)

  1. The embodiments of the invention in which an exclusive property or privilege is claimed are defined as follows: 1. In a computer graphics display system, a method for rendering a scene formed of at least one geometric primitive as a pixel image with a shadow, said computer graphics display system including a first Z-buffer, a second Z-buffer and a frame buffer, each of said buffers having storage locations corresponding to the pixels forming said image, said image having a defined viewpoint and light source, each primitive being represented by one or more pixels (X,Y) relative to said defined viewpoint and one or more pixels (Xs,Ys) relative to said defined light source, said pixel image rendering method comprising the steps of:

    (a) for each of said pixels (X,Y) representative of said at least one geometric primitive:

    (i) generating a first depth value (Z) representative of the depth from said defined viewpoint to said pixel, and (ii) saving said first depth value (Z) in the corresponding pixel location of said first Z-buffer if the first depth value (Z) defines a location equal or closer to said viewpoint than a depth value stored in said corresponding pixel location of said first Z-buffer;

    (b) for each of said pixels (Xs,Ys) representative of said at least one geometric primitive:

    (i) generating a second depth value (Zs) representative of the depth from said defined light source to said pixel, and (ii) saving said second depth value (Zs) in the corresponding pixel location of said second Z-buffer if the second depth value defines a location (Zs) equal or closer to said light source than a depth value stored in said corresponding pixel location of said second Z-buffer; and (c) for each of said pixels (X,Y) forming said image:

    (i) generating, using only said defined light source, a representative color value (Rs,Gs,Bs) for said pixel (X,Y), and (ii) adding to a corresponding (X,Y) location of said frame buffer said representative color value (Rs,Gs,Bs) if the corresponding depth value of said first Z-buffer identifies a pixel visible to said viewpoint and the corresponding depth value of said second Z-buffer identifies a pixel visible to said light source.
  2. The pixel image rendering method of claim 1, wherein said image to be rendered has a defined ambient lighting value and wherein said step (a) includes:

    (iii) generating, using only said ambient lighting value, a representative color value (R,G,B) for said pixel (X,Y) and storing said representative color value (R,G,B) in the corresponding (X,Y) pixel location of said frame buffer.
  3. The pixel image rendering method of claim 2, wherein said at least one geometric primitive is defined by a data structure of one or more polygons and wherein said generating step (a)(iii) includes scan converting said polygons into a series of pixel spans and placing for each span said representative color value (R,G,B) from the end values of said span for each pixel in said span.
  4. The pixel image rendering method of claim 1, wherein a plurality N of light sources are defined and each primitive is represented by one or more pixels (Xs,Ys) (s=1,2...N) relative to each of said N defined light sources, and wherein said method further comprises the steps of repeating said steps (b) & (c) in combination for each of said N defined light sources, said second Z-buffer being cleared prior to each combined repetition of said steps (b) & (c).
  5. The pixel image rendering method of claim 1, wherein each of said at least one geometric primitive is defined in a data structure by one or more polygons and wherein said generating step (a)(i) includes scan converting said polygons into a series of pixel spans and interpolating for each span from the end values of said span said first depth value (Z) for each pixel in said span.
  6. The pixel image rendering method of claim 1, wherein each of said at least one geometric primitive is defined in a data structure by one or more polygons and wherein said generating step (b)(i) includes scan converting said polygons into a series of pixel spans and interpolating for each span from the end values of said span said second depth value (Zs) for each pixel in said span.
  7. The pixel image rendering method of claim 1, wherein said at least one geometric primitive is defined by a data structure of one or more polygons and said generating step (c)(i) includes scan converting said polygons into a series of pixel spans and interpolating for each span said representative color value (Rs,Gs,Bs) from the end values of said span for each pixel in said span.
  8. The pixel image rendering method of claim 1, wherein said at least one geometric primitive is defined by a data structure of one or more polygons, said polygons being defined by their vertices, and wherein said generating step (b)(i) includes adjusting the second depth value (Zs) at each of said polygon vertices relative to said light source to substantially eliminate any misregistration between said one or more pixels (X,Y) relative to said defined viewpoint and said one or more pixels (Xs,Ys) relative to said defined light source.
  9. The pixel image rendering method of claim 1, wherein the order of said (a) processing and said (b) processing is interchangeable.
  10. In a computer graphics display system having a geometry processing engine and a raster processing engine, a method for rendering a scene formed of at least one geometric primitive as a pixel image with a shadow, said at least one geometric primitive being defined by a data structure, said computer graphics display system further including a first Z-buffer, a second Z-buffer and a frame buffer, each of said buffers being coupled to said raster processing engine and each having storage locations corresponding to the pixels forming said image, said image having a defined ambient lighting value, viewpoint and light source such that each primitive has one or more pixels (X,Y) associated therewith in a first, viewpoint coordinate space, and each primitive has one or more pixels (Xs,Ys) associated therewith in a second, light source coordinate space, said pixel image rendering method comprising the steps of:

    (a) processing said data structure through said geometry processing engine and said raster processing engine of said computer graphics display system so as for each of said one or more pixels (X,Y), said engines:

    (i) generate a first depth value (Z) representative of the distance from said defined viewpoint to said pixel, (ii) save said first depth value (Z) in the corresponding pixel location of said first Z-buffer if the first depth value (Z) is equal or closer to said viewpoint than a depth value stored in said corresponding (X,Y) pixel location of said first Z-buffer, and (iii) determine, using only said ambient lighting, a representative color value (R,G,B) for said pixel and store said determined representative color value (R,G,B) at the corresponding (X,Y) pixel location of said frame buffer;

    (b) processing said data structure through said geometry processing engine and said raster processing engine of said computer graphics display system so as for each of said one or more pixels (Xs,Ys), said engines:

    (i) generate a second depth value (Zs) representative of the distance from said defined light source to said pixel, and (ii) save said second depth value (Zs) in the corresponding pixel location of said second Z-buffer if the second depth value (Zs) is equal or closer to said light source than a depth value stored in said corresponding (Xs,Ys) pixel location of said second Z-buffer;
    and (c) processing said data structure through said geometry processing engine and said raster processing engine of said computer graphics display system so as for each pixel (X,Y) of said pixel image, said engines:

    (i) calculate, using only said defined light source, a representative color value (Rs,Gs,Bs) for said pixel, and (ii) add to the corresponding (X,Y) pixel location of said frame buffer said corresponding representative color value (Rs,Gs,Bs) if the corresponding depth value of said first Z-buffer identifies a visible pixel in said viewpoint coordinate space and the corresponding depth value of said second Z-buffer identifies a visible pixel in said light source coordinate space.
  11. The pixel image rendering method of claim 10, wherein said pixel image has multiple defined light sources associated therewith, each of said light sources defining a separate light source view coordinate space, and wherein said method further includes repeating said steps (b) & (c) in combination for each of said multiple light sources such that for each pixel said frame buffer contains the cumulative color intensities due to said ambient lighting and said multiple defined light sources.
  12. The pixel image rendering method of claim 10, wherein each of said at least one geometric primitive is defined in said data structure by one or more polygons and wherein prior to said first data structure processing step (a) a viewpoint transformation matrix Mv is created, said generate step (a)(i) including for each primitive transforming the associated polygons to device coordinates using said created matrix Mv, scan converting the edges thereof into a series of pixel spans, and interpolating from the end values of said spans said first depth value (Z).
  13. The pixel image rendering method of claim 10, wherein each of said at least one geometric primitive is defined in said data structure by one or more polygons and wherein prior to said second data structure processing step (b) a light source transformation matrix Mi is created, said generate step (b)(i) including for each primitive transforming the associated polygons to device coordinates using said matrix Mi, scan converting the edges thereof into a series of pixel spans, and interpolating from the end values of said spans said first depth values (Z).
  14. The pixel image rendering method of claim 10, wherein each of said at least one geometric primitive is defined in said data structure by one or more polygons and wherein said calculate step (c)(i) includes scan converting said polygons into a series of pixel spans and interpolating for each span said representative color value (Rs,Gs,Bs) from the end values of said span for each pixel in said span.
  15. The pixel image rendering method of claim 10, wherein said at least one geometric primitive is defined in said data structure by one or more polygons, said polygons being defined by their vertices, and wherein said generate step (b)(i) includes adjusting the second depth value (Zs) at each of said polygon vertices relative to said light source to substantially eliminate any misregistration between said one or more pixels (X,Y) in said viewpoint coordinate space and said one or more pixels (Xs,Ys) in said light source coordinate space.
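
Claim 15's per-vertex adjustment of Zs counters the self-shadowing that appears when the viewpoint and light-source samplings of the same surface disagree by a quantization step. A common remedy is sketched below: offset each vertex's light-space depth slightly away from the light before it is stored, so the surface's own depth later passes the pass-(c) visibility comparison. The bias constant and vertex type are assumed, tunable illustrations.

    #define ZS_BIAS 0.0015f   /* illustrative constant; tuned to Z precision */

    typedef struct { float xs, ys, zs; } LightSpaceVertex;   /* assumed type */

    /* Applied during step (b)(i), before the vertices are scan converted:
       stored light-view depths move slightly away from the light, so the
       surface does not erroneously shadow itself. */
    void bias_light_depths(LightSpaceVertex *v, int count)
    {
        for (int i = 0; i < count; ++i)
            v[i].zs += ZS_BIAS;
    }
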
  16. A computer graphics display system with shadow generation capability for rendering a scene formed of at least one geometric primitive as a pixel image having a defined ambient lighting value, viewpoint and light source such that each primitive has one or more pixels (X,Y) associated therewith in a first, viewpoint coordinate space, and one or more pixels (Xs,Ys) associated therewith in a second, light source coordinate space, said computer graphics display system comprising:

    a geometry processing engine for converting said at least one geometric primitive into device X,Y,Z coordinates relative to said viewpoint and into device Xs,Ys,Zs coordinates relative to said light source;

    a frame buffer having storage locations corresponding to the pixels forming said image for receiving representative pixel color values;

    a first Z-buffer having storage locations corresponding to the pixels forming said image for storing first depth values (Z) representative of the depth of selected pixels (X,Y) in said image relative to said viewpoint;

    a second Z-buffer having storage locations corresponding to the pixels forming said image for storing second depth values (Zs) representative of the depth of selected pixels (Xs,Ys) in said image relative to said light source;

    a raster processing engine coupled to receive the device coordinates output from said geometry processing engine and coupled to each of said frame buffer, first Z-buffer and second Z-buffer, said raster processing engine including:

    (i) means for generating for each pixel (X,Y) associated with said at least one geometric primitive a pixel depth value (Z) and for storing said value (Z) in a corresponding (X,Y) location of said first Z-buffer when said pixel depth value (Z) is equal or closer to said viewpoint than a depth value stored in said corresponding location of said first Z-buffer, (ii) means for generating for each pixel (Xs,Ys) associated with said at least one geometric primitive a second pixel depth value (Zs) and for storing said value (Zs) in a corresponding (Xs,Ys) location of said second Z-buffer when said second depth value (Zs) is equal or closer to said light source than a depth value stored in said corresponding location of said second Z-buffer, (iii) means for generating a representative color value (Rs,Gs,Bs) for each pixel forming said image using only said defined light source, (iv) means for adding to a corresponding (X,Y) pixel location of said frame buffer each of said generated representative color values (Rs,Gs,Bs) for said pixels if the corresponding depth value in said first Z-buffer identifies a pixel visible to said viewpoint and the corresponding depth value in said second Z-buffer identifies a pixel visible to said light source.
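
As a structural picture of the claim-16 apparatus, the raster processing engine can be modeled as one function pointer per recited "means", bound to per-pixel routines like those sketched under claim 10, alongside the three pixel stores it is coupled to. The types and dimensions are assumptions for illustration, not the actual IBM hardware.

    #define W 640   /* assumed raster dimensions */
    #define H 480

    /* One entry per recited "means" of the raster processing engine. */
    typedef struct {
        void (*gen_view_depth)(int x, int y, float z);          /* means (i)   */
        void (*gen_light_depth)(int xs, int ys, float zs);      /* means (ii)  */
        void (*gen_light_color)(int x, int y, float rgb_s[3]);  /* means (iii) */
        void (*accumulate)(int x, int y, const float rgb_s[3]); /* means (iv)  */
    } RasterEngine;

    /* The three pixel stores the engine is coupled to. */
    typedef struct {
        float frame[H][W][3];    /* frame buffer                        */
        float zbuf_view[H][W];   /* first Z-buffer (viewpoint depths)   */
        float zbuf_light[H][W];  /* second Z-buffer (light-view depths) */
    } PixelStores;
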
  17. The computer graphics display system of claim 16, wherein said raster processing engine further includes:

    means for generating a representative color value (R,G,B) for each pixel forming said image using only said ambient lighting value; and means for storing each of said generated representative color values (R,G,B) at the corresponding (X,Y) pixel location of said frame buffer.
  18. The computer graphics display system of claim 16, wherein said raster processing engine generating means (i) includes:

    means for scan converting said at least one geometric primitive into a series of pixel spans; and means for interpolating said first depth value (Z) for each pixel in said span from the end values of said span.
  19. The computer graphics display system of claim 16, wherein said raster processing engine generating means (ii) includes:

    means for scan converting said at least one geometric primitive into a series of pixel spans; and means for interpolating said second depth value (Zs) for each pixel in said span from the end values of said span.
  20. The computer graphics display system of claim 16, wherein said raster processing engine generating means (iii) includes:

    means for scan converting said at least one geometric primitive into a series of pixel spans; and means for interpolating said representative color value (Rs,Gs,Bs) for each pixel in said span from the end values of said span.
CA002087501A 1992-01-29 1993-01-18 Computer graphics display and system with shadow generation Abandoned CA2087501A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US07/827,232 1992-01-29
US07/827,232 US5377313A (en) 1992-01-29 1992-01-29 Computer graphics display method and system with shadow generation

Publications (1)

Publication Number Publication Date
CA2087501A1 1993-07-30

Family

ID=25248652

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002087501A Abandoned CA2087501A1 (en) 1992-01-29 1993-01-18 Computer graphics display and system with shadow generation

Country Status (5)

Country Link
US (1) US5377313A (en)
EP (1) EP0553973A3 (en)
JP (1) JPH0683979A (en)
KR (1) KR970003325B1 (en)
CA (1) CA2087501A1 (en)

Families Citing this family (93)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1993000650A1 (en) 1991-06-28 1993-01-07 Hong Lip Lim Improvements in visibility calculations for 3d computer graphics
IL109462A0 (en) * 1993-04-30 1994-07-31 Scitex Corp Ltd Method for generating artificial shadow
AU6783594A (en) * 1993-05-10 1994-12-12 Apple Computer, Inc. Computer graphics system having high performance multiple layer z-buffer
US5729672A (en) * 1993-07-30 1998-03-17 Videologic Limited Ray tracing method and apparatus for projecting rays through an object represented by a set of infinite surfaces
GB9315852D0 (en) * 1993-07-30 1993-09-15 Video Logic Ltd Shading three-dimensional images
US5821941A (en) * 1994-08-12 1998-10-13 Dassault Systemes Of America, Corp. Geometric constraints between related elements in different 2-dimensional views
EP0697679A3 (en) * 1994-08-12 1998-07-01 Dassault Systemes of America Computerized drawing method
US5615321A (en) * 1994-08-12 1997-03-25 Dassault Systemes Of America Corp. Automatic identification of geometric relationships between elements of a computer-generated drawing
JP3252623B2 (en) * 1994-11-09 2002-02-04 Matsushita Electric Industrial Co., Ltd. Shape model generator
JPH08249494A (en) * 1995-03-09 1996-09-27 Sharp Corp Z-buffer system hidden-surface removal device
JP3570576B2 (en) * 1995-06-19 2004-09-29 Hitachi, Ltd. 3D image synthesis and display device compatible with multi-modality
US5864342A (en) * 1995-08-04 1999-01-26 Microsoft Corporation Method and system for rendering graphical objects to image chunks
US5870097A (en) 1995-08-04 1999-02-09 Microsoft Corporation Method and system for improving shadowing in a graphics rendering system
US5761400A (en) * 1995-08-28 1998-06-02 Apple Computer, Inc. Method and system for increasing the speed of a Z-buffer process
US5748863A (en) * 1995-10-06 1998-05-05 International Business Machines Corporation Method and system for fast interpolation of depth buffer values in a computer graphics display system
JP3099940B2 (en) * 1995-12-25 2000-10-16 NEC Corporation 3D graphics controller
US5739819A (en) * 1996-02-05 1998-04-14 Scitex Corporation Ltd. Method and apparatus for generating an artificial shadow in a two dimensional color image
US5844566A (en) * 1996-02-12 1998-12-01 Dassault Systemes Method and apparatus for controlling shadow geometry on computer displays
US5774111A (en) * 1996-02-12 1998-06-30 Dassault Systemes Method and apparatus for providing a dynamically oriented compass cursor on computer displays
GB2312141B (en) * 1996-04-11 1998-04-22 Discreet Logic Inc Processing image data
GB9611939D0 (en) * 1996-06-07 1996-08-07 Philips Electronics Nv Stereoscopic image display driver apparatus
US6104842A (en) * 1996-06-10 2000-08-15 Integrated Device Technology, Inc. Geometry processing of digital video models and images
US6046746A (en) * 1996-07-01 2000-04-04 Sun Microsystems, Inc. Method and apparatus implementing high resolution rendition of Z-buffered primitives
US6018350A (en) * 1996-10-29 2000-01-25 Real 3D, Inc. Illumination and shadow simulation in a computer graphics/imaging system
US5936629A (en) * 1996-11-20 1999-08-10 International Business Machines Corporation Accelerated single source 3D lighting mechanism
CA2227531C (en) * 1997-01-20 2003-03-18 Hitachi, Ltd. Graphics processing unit and graphics processing system
US7616198B2 (en) * 1998-02-20 2009-11-10 Mental Images Gmbh System and computer-implemented method for modeling the three-dimensional shape of an object by shading of a two-dimensional image of the object
DE19714915A1 (en) * 1997-04-03 1998-10-08 Gmd Gmbh Image display method and device for carrying out the method
GB9716251D0 (en) 1997-08-01 1997-10-08 Philips Electronics Nv Attribute interpolation in 3d graphics
US5933156A (en) * 1997-12-03 1999-08-03 Margolin; Jed Z-Buffer for row addressable graphics memory with flash fill
US6501481B1 (en) 1998-07-28 2002-12-31 Koninklijke Philips Electronics N.V. Attribute interpolation in 3D graphics
US6369816B1 (en) * 1998-11-12 2002-04-09 Terarecon, Inc. Method for modulating volume samples using gradient magnitudes and complex functions over a range of values
US6356265B1 (en) * 1998-11-12 2002-03-12 Terarecon, Inc. Method and apparatus for modulating lighting with gradient magnitudes of volume data in a rendering pipeline
US6411296B1 (en) * 1998-11-12 2002-06-25 Terarecon, Inc. Method and apparatus for applying modulated lighting to volume data in a rendering pipeline
US6426749B1 (en) * 1998-11-12 2002-07-30 Terarecon, Inc. Method and apparatus for mapping reflectance while illuminating volume data in a rendering pipeline
JP3258286B2 (en) * 1998-12-15 2002-02-18 International Business Machines Corporation Drawing method and drawing apparatus for displaying image data of a plurality of objects in which translucent and opaque objects are mixed on a computer display screen
US20020190984A1 (en) * 1999-10-01 2002-12-19 Larry D. Seiler Voxel and sample pruning in a parallel pipelined volume rendering system
US6717577B1 (en) 1999-10-28 2004-04-06 Nintendo Co., Ltd. Vertex cache for 3D computer graphics
US6618048B1 (en) 1999-10-28 2003-09-09 Nintendo Co., Ltd. 3D graphics rendering system for performing Z value clamping in near-Z range to maximize scene resolution of visually important Z components
US6670958B1 (en) * 2000-05-26 2003-12-30 Ati International, Srl Method and apparatus for routing data to multiple graphics devices
US7119813B1 (en) 2000-06-02 2006-10-10 Nintendo Co., Ltd. Variable bit field encoding
CA2315302A1 (en) * 2000-07-13 2002-01-13 Paul A. Halmshaw Three dimensional imaging system
US6867781B1 (en) 2000-08-23 2005-03-15 Nintendo Co., Ltd. Graphics pipeline token synchronization
US7002591B1 (en) 2000-08-23 2006-02-21 Nintendo Co., Ltd. Method and apparatus for interleaved processing of direct and indirect texture coordinates in a graphics system
US6811489B1 (en) 2000-08-23 2004-11-02 Nintendo Co., Ltd. Controller interface for a graphics system
US6980218B1 (en) 2000-08-23 2005-12-27 Nintendo Co., Ltd. Method and apparatus for efficient generation of texture coordinate displacements for implementing emboss-style bump mapping in a graphics rendering system
US6707458B1 (en) 2000-08-23 2004-03-16 Nintendo Co., Ltd. Method and apparatus for texture tiling in a graphics system
US6937245B1 (en) 2000-08-23 2005-08-30 Nintendo Co., Ltd. Graphics system with embedded frame buffer having reconfigurable pixel formats
US7184059B1 (en) 2000-08-23 2007-02-27 Nintendo Co., Ltd. Graphics system with copy out conversions between embedded frame buffer and main memory
US6700586B1 (en) 2000-08-23 2004-03-02 Nintendo Co., Ltd. Low cost graphics with stitching processing hardware support for skeletal animation
US6664962B1 (en) 2000-08-23 2003-12-16 Nintendo Co., Ltd. Shadow mapping in a low cost graphics system
US6825851B1 (en) 2000-08-23 2004-11-30 Nintendo Co., Ltd. Method and apparatus for environment-mapped bump-mapping in a graphics system
US6636214B1 (en) 2000-08-23 2003-10-21 Nintendo Co., Ltd. Method and apparatus for dynamically reconfiguring the order of hidden surface processing based on rendering mode
US7538772B1 (en) 2000-08-23 2009-05-26 Nintendo Co., Ltd. Graphics processing system with enhanced memory controller
US7034828B1 (en) 2000-08-23 2006-04-25 Nintendo Co., Ltd. Recirculating shade tree blender for a graphics system
US7576748B2 (en) 2000-11-28 2009-08-18 Nintendo Co. Ltd. Graphics system with embedded frame buffer having reconfigurable pixel formats
US7196710B1 (en) 2000-08-23 2007-03-27 Nintendo Co., Ltd. Method and apparatus for buffering graphics data in a graphics system
US7061502B1 (en) 2000-08-23 2006-06-13 Nintendo Co., Ltd. Method and apparatus for providing logical combination of N alpha operations within a graphics system
US7103677B2 (en) * 2000-12-06 2006-09-05 Microsoft Corporation Methods and systems for efficiently processing compressed and uncompressed media content
US7114161B2 (en) 2000-12-06 2006-09-26 Microsoft Corporation System and related methods for reducing memory requirements of a media processing system
US6882891B2 (en) * 2000-12-06 2005-04-19 Microsoft Corporation Methods and systems for mixing digital audio signals
US6983466B2 (en) 2000-12-06 2006-01-03 Microsoft Corporation Multimedia project processing systems and multimedia project processing matrix systems
US6961943B2 (en) 2000-12-06 2005-11-01 Microsoft Corporation Multimedia processing system parsing multimedia content from a single source to minimize instances of source files
US6954581B2 (en) * 2000-12-06 2005-10-11 Microsoft Corporation Methods and systems for managing multiple inputs and methods and systems for processing media content
US6959438B2 (en) 2000-12-06 2005-10-25 Microsoft Corporation Interface and related methods for dynamically generating a filter graph in a development system
US6834390B2 (en) * 2000-12-06 2004-12-21 Microsoft Corporation System and related interfaces supporting the processing of media content
US6912717B2 (en) 2000-12-06 2005-06-28 Microsoft Corporation Methods and systems for implementing dynamic properties on objects that support only static properties
US7114162B2 (en) 2000-12-06 2006-09-26 Microsoft Corporation System and methods for generating and managing filter strings in a filter graph
US7447754B2 (en) 2000-12-06 2008-11-04 Microsoft Corporation Methods and systems for processing multi-media editing projects
US6768499B2 (en) 2000-12-06 2004-07-27 Microsoft Corporation Methods and systems for processing media content
US7287226B2 (en) * 2000-12-06 2007-10-23 Microsoft Corporation Methods and systems for effecting video transitions represented by bitmaps
US6774919B2 (en) 2000-12-06 2004-08-10 Microsoft Corporation Interface and related methods for reducing source accesses in a development system
KR100454070B1 (en) * 2001-01-31 2004-10-20 Chung-Ang University Method for Real-time Toon Rendering with Shadow using computer
US6646640B2 (en) * 2001-02-06 2003-11-11 Sony Computer Entertainment Inc. System and method for creating real-time shadows of complex transparent objects
AUPS028702A0 (en) * 2002-02-01 2002-02-28 Canon Kabushiki Kaisha Efficient display update from changing object graphics
JP4193979B2 (en) * 2003-03-17 2008-12-10 Nintendo Co., Ltd. Shadow volume generation program and game device
US6967663B1 (en) * 2003-09-08 2005-11-22 Nvidia Corporation Antialiasing using hybrid supersampling-multisampling
JP2005100176A (en) * 2003-09-25 2005-04-14 Sony Corp Image processor and its method
KR100520707B1 (en) * 2003-10-20 2005-10-17 LG Electronics Inc. Method for displaying multi-level text data in three dimensional map
US7567248B1 (en) * 2004-04-28 2009-07-28 Mark William R System and method for computing intersections between rays and surfaces
US7525543B2 (en) * 2004-08-09 2009-04-28 Siemens Medical Solutions Usa, Inc. High performance shading of large volumetric data using screen-space partial derivatives
US7626591B2 (en) * 2006-01-24 2009-12-01 D & S Consultants, Inc. System and method for asynchronous continuous-level-of-detail texture mapping for large-scale terrain rendering
GB0613352D0 (en) * 2006-07-05 2006-08-16 Ashbey James A Improvements in stereoscopic imaging systems
US8355022B2 (en) * 2008-11-25 2013-01-15 Sony Computer Entertainment America Llc Method and apparatus for aggregating light sources per-vertex in computer graphics
US20100128038A1 (en) * 2008-11-25 2010-05-27 Sony Computer Entertainment America Inc. Method and apparatus for interpolating color and direction as one entity in computer graphics
EP2234069A1 (en) * 2009-03-27 2010-09-29 Thomson Licensing Method for generating shadows in an image
US8643701B2 (en) 2009-11-18 2014-02-04 University Of Illinois At Urbana-Champaign System for executing 3D propagation for depth image-based rendering
US10786736B2 (en) 2010-05-11 2020-09-29 Sony Interactive Entertainment LLC Placement of user information in a game space
CN102737401A * 2011-05-06 2012-10-17 Xin'aote (Beijing) Video Technology Co., Ltd. Triangular plate filling method in rasterization phase in graphic rendering
US9342817B2 (en) 2011-07-07 2016-05-17 Sony Interactive Entertainment LLC Auto-creating groups for sharing photos
US9300946B2 (en) 2011-07-08 2016-03-29 Personify, Inc. System and method for generating a depth map and fusing images from a camera array
US9449118B2 (en) * 2011-09-29 2016-09-20 Siemens Product Lifecycle Management Software Inc. Hybrid hidden-line processor and method
KR20140142863A (en) * 2013-06-05 2014-12-15 Electronics and Telecommunications Research Institute Apparatus and method for providing graphic editors

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4625289A (en) * 1985-01-09 1986-11-25 Evans & Sutherland Computer Corp. Computer graphics system of general surface rendering by exhaustive sampling
US4737921A (en) * 1985-06-03 1988-04-12 Dynamic Digital Displays, Inc. Three dimensional medical image display system
US4855938A (en) * 1987-10-30 1989-08-08 International Business Machines Corporation Hidden line removal method with modified depth buffer
JP3005981B2 (en) * 1988-07-14 2000-02-07 Daikin Industries, Ltd. Shadow processing method and apparatus
US5083287A (en) * 1988-07-14 1992-01-21 Daikin Industries, Inc. Method and apparatus for applying a shadowing operation to figures to be drawn for displaying on crt-display
JPH0727581B2 (en) * 1988-09-09 1995-03-29 International Business Machines Corporation Graphic processing device
US5222203A (en) * 1989-01-20 1993-06-22 Daikin Industries, Ltd. Method and apparatus for displaying translucent surface
US5027292A (en) * 1989-04-19 1991-06-25 International Business Machines Corporation Multiple depth buffers for graphics and solid modelling
JPH03127287A (en) * 1989-10-13 1991-05-30 Nec Corp Three-dimensional picture generating device
US5265198A (en) * 1989-10-23 1993-11-23 International Business Machines Corporation Method and processor for drawing `polygon with edge`-type primitives in a computer graphics display system
US5123085A (en) * 1990-03-19 1992-06-16 Sun Microsystems, Inc. Method and apparatus for rendering anti-aliased polygons
US5175805A (en) * 1990-10-30 1992-12-29 Sun Microsystems, Inc. Method and apparatus for sequencing composite operations of pixels
US5268996A (en) * 1990-12-20 1993-12-07 General Electric Company Computer image generation method for determination of total pixel illumination due to plural light sources
US5265199A (en) * 1991-05-22 1993-11-23 Silicon Graphics, Inc. Method and apparatus for accomplishing Z-buffering by prediction

Also Published As

Publication number Publication date
JPH0683979A (en) 1994-03-25
KR970003325B1 (en) 1997-03-17
US5377313A (en) 1994-12-27
EP0553973A2 (en) 1993-08-04
KR930016909A (en) 1993-08-30
EP0553973A3 (en) 1994-05-18

Similar Documents

Publication Publication Date Title
CA2087501A1 (en) Computer graphics display and system with shadow generation
US6437782B1 (en) Method for rendering shadows with blended transparency without producing visual artifacts in real time applications
US7362332B2 (en) System and method of simulating motion blur efficiently
Bishop et al. Fast phong shading
US5704024A (en) Method and an apparatus for generating reflection vectors which can be unnormalized and for using these reflection vectors to index locations on an environment map
US5805782A (en) Method and apparatus for projective texture mapping rendered from arbitrarily positioned and oriented light source
US5742749A (en) Method and apparatus for shadow generation through depth mapping
KR0156052B1 (en) Texture mapping method and apparatus
US7009608B2 (en) System and method of using multiple representations per object in computer graphics
US6825840B2 (en) System and method of adjusting ray origins when shading vertices with rays
EP0702333A2 (en) Method of drawing shadow and three-dimensional graphic computer system
JPH06348864A (en) Image display apparatus, computer graphic system and image display method
US5428716A (en) Solid-clip methodology and architecture for clipping solid models and displaying cross-sections using depth-buffers
US7889208B1 (en) Z-texture mapping system, method and computer program product
US6614431B1 (en) Method and system for improved per-pixel shading in a computer graphics system
US7158133B2 (en) System and method for shadow rendering
EP0727764A1 (en) 3D graphics apparatus
US20010048444A1 (en) System and method for fast phong shading
JPH0434159B2 (en)
EP0656609B1 (en) Image processing
US6407744B1 (en) Computer graphics bump mapping method and device
KR101118597B1 (en) Method and System for Rendering Mobile Computer Graphic
US6614446B1 (en) Method and apparatus for computing a computer graphics image of a textured surface
US6429872B1 (en) Method and apparatus for representing computer-modeled objects
US5926183A (en) Efficient rendering utilizing user defined rooms and windows

Legal Events

Date Code Title Description
EEER Examination request
FZDE Dead