Rapidly growing in possibilities and scope during the 1970s and 1980s, computer graphics explored more and more complex problems in modeling, rendering and animation. As a result, computers entered domains where they were previously absent: either traditional image industries, such as publishing, television and movies, or fields demanding visualization, such as medicine and engineering. Each time, this caused profound changes in the ways the work was done, because this was not a mere improvement over previous practice but a complete redesign of the pipeline; so much so that people could clearly distinguish before and after computers entered their professional lives. Now, achievements speak for themselves: from highly detailed images rendered at interactive rates on commodity graphics hardware, fueling a booming computer game industry, to special effects that let movie directors nearly forget the limitations of traditional film making, thanks to virtual sets and digital crowds, computer graphics is as pervasive as computers themselves in many aspects of our lives. Moreover, the historical goal of physical realism has been met in some cases (Alias|wavefront, 2002a). Nevertheless, the fully computer-generated movie Final Fantasy: The Spirits Within (Sakaguchi and Sakakibara, 2001), whose tag line was "fantasy becomes reality" and which can be considered state-of-the-art in physically realistic computer graphics imagery, was still carefully crafted by hand for more than 100 million USD. In fact, the ever increasing complexity of scenes and the raw economic fact that silicon chips are much cheaper than human brains call for innovative solutions to handle the workload. This could explain the recent rise of the capture paradigm: the measurement of a growing number of real-world properties in order to feed an image synthesis process.
Capture technologies have already been used for recording human motion and for recovering objects' surface geometry or light interaction behavior, but we expect that many other possibilities are waiting around the corner. Moreover, the fact that most of these measurements are image-based may foretell the advent of a "super camera" device, where everything could take place at the same time, from the same viewpoint. In the long term, this approach could complement or replace proceduralism as a solution for tackling the problem of creating complex scenes. Thus, one possible answer to the expanding workload is to diminish the amount of human intervention by using ever more automatic systems; another possible answer, which we are interested in, is to offer users better ways of performing their tasks, thereby making their work easier, faster, and more enjoyable. Of course, these answers are not mutually exclusive, and in what follows, we will explore this alternative as one among the other computer graphics classifications.
1.1.2 A Simple Computer Graphics Taxonomy
Building a taxonomy of a problem is an interesting way to focus on the big picture without getting lost in details, but also a powerful tool for discovering fresh ideas (Ivan E. Sutherland, cited by Sun microsystems, 2002). In order to do this, one must define a few axes, each representing a separate dimension of the problem. The combinations
of the axes' values provide the different classes, or taxons, of the taxonomy. A careful inspection of these may lead to unexpected insight. If we are considering the computer graphics input problem, we can define an axis that focuses on the amount of human intervention, and goes from fully automated systems, which do not need user input, to fully controllable systems, which require user input from the beginning to the end of the task. To sum up, this axis could also be defined by the following opposing pairs: with user / without user, controllable / automatic, human-centered / machine-centered. Another interesting axis that can be defined around the input problem concerns how much knowledge of the world is encoded into the system: does the system generate new images because it has been given examples of expected results, or because it has knowledge of the laws of optics? As before, one can clarify this opposition using a few word pairs: with knowledge / without knowledge, procedural / statistical, rule-based / data-based.
From this simple two-axis taxonomy, it seems obvious that a system requiring user input can be obtained from a wide diversity of computer programs because, theoretically, the axes of the taxonomy are orthogonal descriptions of the same problem, and thus all combinations are possible. But this is not what is observed in practice. In fact, some programs lend themselves easily to user interaction while others do not. If our primary concern is the way the user interacts with the system, that is, the system interface, we will always face limitations, since the points of interaction we can offer are constrained by those offered by the program itself and its underlying model. Therefore, in designing a system, we cannot consider the model and the interface separately, because in a sense the model is the interface. This claim has a direct consequence on the problem-solving process in computer graphics: it involves devising the solution to a problem not only in terms of the strict requirements of the end result (e.g., images of trees) but also in terms of the kind of interaction that will be necessary for a user to actually accomplish the task (e.g., model and render trees). This seems trivial but it is not: because the choices made during the model definition phase have already reduced the set of possible interactions, it is too late to change anything if, in the end, the interface proves wrong. In that respect, computer graphics is a tool-making activity, and the tools produced must obey general usability principles, but also specific ones. We will discuss this point of view in the next section.
Figure 2.2: Elasticity definitions. A body initially in reference configuration B at time t = 0 undergoes deformation and is now in deformed configuration B′ at time t. The position x = x_i e_i of material point P in the reference configuration is expressed in a Lagrangian coordinate system; the position x′ = x′_i e′_i of P in the deformed configuration is expressed in an Eulerian coordinate system. The displacement of material point P at time t is u = x′ − x.

Small Deformation Elasticity

The strain tensor describes, given a point of the material, the deformation w.r.t. the rest state, in every possible direction. It is represented by a symmetric matrix with six strain components: three principal strains (diagonal terms of the matrix) and three shear strains. A classical strain metric is the Cauchy strain tensor ε_ij, defined by
ε_ij = (1/2) (∂u_i/∂x_j + ∂u_j/∂x_i),   1 ≤ i, j ≤ 3
where ∂u_i/∂x_j is the derivative of the ith coordinate of the displacement u_i w.r.t. the jth coordinate of the reference configuration. The displacement u_i is defined for every point as the vector between reference and deformed positions. The stress tensor describes, given a point of the material, the force acting on every infinitesimal surface element (stress is thus equivalent to a pressure). It is represented by a 3×3 symmetric matrix with six stress components: three normal stresses (diagonal terms of the matrix) and three shear stresses. A classical stress metric is the Cauchy stress tensor σ_ij, defined by
t_i = σ_ij n_j, where t_i is the traction vector, i.e., the areal density of force, and n_j is the unit normal to the surface. The simplest stress-strain relationship is Hooke's law, a linear constitutive law first stated in the one-dimensional case of springs. In the general case of three-dimensional bodies, it states that stress is proportional to strain, i.e.,
σ_ij = C_ijkl ε_kl
where C_ijkl is the tensor of elasticity, a rank-four symmetric tensor with thirty-six different terms, since σ_ij and ε_kl are rank-two symmetric tensors with six different terms each. This is the constitutive law for a linear, elastic, anisotropic and homogeneous material. When the material is assumed to be isotropic, i.e., the material behavior is the same in every direction, only two elastic constants are necessary to represent the behavior and the constitutive law becomes
σ_ij = λ ε_kk δ_ij + 2μ ε_ij,   with δ_ij = 1 if i = j, 0 if i ≠ j
where λ and μ are the Lamé elastic constants (equivalent to pressures) and δ_ij is the Kronecker delta. In practice, two other elastic constants are determined by rheological experiments: Young's modulus E, which gives a measure of material rigidity, and Poisson's ratio ν, which gives a measure of its incompressibility. The Lamé constants can be expressed using these constants, thus

λ = E ν / ((1 + ν)(1 − 2ν))   and   μ = E / (2(1 + ν))
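The small-deformation relations above can be collected into a short numerical sketch. The following Python/NumPy fragment (an illustration of the formulas, not part of any cited system; the material constants are arbitrary) computes the Cauchy strain from a displacement gradient, converts Young's modulus and Poisson's ratio into Lamé constants, and applies the isotropic Hooke's law:

```python
import numpy as np

def cauchy_strain(grad_u):
    """Small-deformation (Cauchy) strain from the displacement gradient,
    grad_u[i, j] = du_i/dx_j: eps = (grad_u + grad_u^T) / 2."""
    grad_u = np.asarray(grad_u, dtype=float)
    return 0.5 * (grad_u + grad_u.T)

def lame_constants(E, nu):
    """Lame constants from Young's modulus E and Poisson's ratio nu."""
    lam = E * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))
    mu = E / (2.0 * (1.0 + nu))
    return lam, mu

def isotropic_stress(eps, lam, mu):
    """Isotropic Hooke's law: sigma_ij = lam tr(eps) delta_ij + 2 mu eps_ij."""
    eps = np.asarray(eps, dtype=float)
    return lam * np.trace(eps) * np.eye(3) + 2.0 * mu * eps

lam, mu = lame_constants(E=1e5, nu=0.3)   # illustrative values
eps = cauchy_strain(0.01 * np.eye(3))     # uniform 1% dilation
sigma = isotropic_stress(eps, lam, mu)    # resulting hydrostatic stress
```

For the uniform dilation ε = 0.01 I, the stress is hydrostatic, σ = (3λ + 2μ) 0.01 I, which provides a quick consistency check between the two sets of constants.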
is very slow. Debunne et al. (2001) reuse the previous approach to simulate soft materials but speed it up and reach interactivity with multiresolution techniques. However, this requires precomputing a mesh hierarchy, and thus prevents any change in object topology during simulation. Related to explicit FEM, the tensor-mass model of Cotin et al. (1999) simulates dynamic deformations of an isotropic material with a linear constitutive equation, under the small deformation hypothesis. The main idea is to split, for each tetrahedron, the force applied at a vertex into two parts: a force created by the vertex displacement (w.r.t. rest position) and a force produced by the displacement of its neighbors, i.e., vertices that are linked to it by mesh edges. This is possible using local tensors that relate force to displacement. Evaluation of these tensors requires evaluation of the stiffness matrix associated with each tetrahedron adjacent to one of the edges linked to the vertex. New positions of the vertices are obtained from forces using a dynamic explicit integration scheme. Picinbono et al. (2000, 2001) have improved this model to handle anisotropic behavior and nonlinear elasticity under the large deformation hypothesis. For this, strain is measured using the Green-Lagrange strain tensor (called Green-St Venant by the authors) and the elastic energy is rewritten from a quadratic function into a fourth-order polynomial of the displacement gradient (St Venant-Kirchhoff elasticity). Transversally isotropic materials are supported; this is a special case of anisotropy where a material has a different behavior in one given direction (see Fig. 2.3a). To optimize computation time, nonlinear elasticity is used only for nodes whose displacement is larger than a given threshold; otherwise linear elasticity is considered a sufficiently good approximation (see Fig. 2.3b).
Finally, for the sake of completeness, we mention continuous models that simulate only global deformation without using FEM for solving the elasticity equations. Pentland and Williams (1989) simulate global deformation by modal analysis. The Lagrange equation of motion is rewritten by diagonalizing the global mass, damping and stiffness matrices. The system is then composed of independent equations, each describing a different vibration mode of the object. Linear superposition of these modes determines how the object responds to a given force; by neglecting the high-frequency, low-amplitude modes, it allows interactive simulation. Terzopoulos et al. (1987) propose a model based on minimization of deformation energy that is more concerned with differential geometry properties of objects (either curves, surfaces or solids) and uses only the analysis-of-deformations part of the linear elasticity framework. The dynamic differential equations are solved using the finite differences method and implicit integration, thus limiting the model to regular meshes and non-interactive applications.

Discussion

In continuum mechanics, hyperelasticity theory is used for nonlinear elastic materials under the large deformation hypothesis. Constitutive equations are written us-
[Figure legend: point mass, structural spring, shear spring]
rameters from geometric data is based on a cost function which measures the difference in behavior between the reference and the model, and on an evolutionary minimization algorithm. On a 17×17 mesh, convergence is obtained after about 50 to 100 generations. Deussen et al. (1995) use simulated annealing to obtain optimal mass-spring approximations of deformable bodies obeying linear elasticity. It is a two-step process: first, find positions and masses of the points that approximate the mass distribution; second, define the topology of the connections and optimize their spring constants. Four test configurations were used for optimizing elasticity on a 2D deformable body: two with stretching loads and two with shearing loads. The quality criterion used is the standard deviation between actual and reference displacements of all points. This method allows approximation of homogeneous as well as inhomogeneous and anisotropic materials. Two-dimensional mass-spring systems containing up to a few hundred points are optimized successfully, and an extension to 3D with nine basic loads is suggested but not tested, due to the large computational cost. These two optimization methods prove the possibility of approximating mechanical behaviors with mass-spring systems, but van Gelder (1998) demonstrates the impossibility of setting the stiffnesses of a mass-spring system to obtain an exact simulation of the elastic material properties of a continuous model. However, this does not mean that global behavior, i.e., stress-strain relationships, cannot be reproduced with a mass-spring system. In fact, Boux de Casson (2000) simulated linear and nonlinear stress-strain relationships using linear and nonlinear spring laws, proving that the behavior at the spring level is conserved at the object level. Finally, since mass-spring system behavior changes when the topology or geometry of the mesh is modified, dynamic behavior at different resolutions is different.
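The idea that the spring-level law shapes the object-level stress-strain relationship can be illustrated with a minimal spring-force routine. The cubic stiffening term below is an arbitrary illustrative choice, not the specific law used in the cited work:

```python
import numpy as np

def spring_force(p_i, p_j, rest_length, k, law="linear"):
    """Force exerted on point i by a spring linking it to point j.
    law='linear' is Hooke's law; 'nonlinear' adds a cubic stiffening
    term (an illustrative choice). Assumes the points are not coincident."""
    d = np.asarray(p_j, dtype=float) - np.asarray(p_i, dtype=float)
    length = np.linalg.norm(d)
    stretch = length - rest_length
    if law == "nonlinear":
        magnitude = k * stretch + k * stretch**3  # stiffens at large stretch
    else:
        magnitude = k * stretch
    return magnitude * d / length  # directed from point i toward point j

# Stretch a unit-rest-length spring to length 2:
f_lin = spring_force([0.0, 0.0], [2.0, 0.0], rest_length=1.0, k=10.0)
f_nl = spring_force([0.0, 0.0], [2.0, 0.0], 1.0, 10.0, law="nonlinear")
```

Summing such per-spring forces over the mesh gives the net force on each point mass; swapping the spring law changes the aggregate stress-strain curve of the object without touching the mesh itself.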
Thus, very few papers address the issue of physical simulation at multiple levels of detail with mass-spring systems. Hutchinson et al. (1996) propose a scheme for adaptively refining portions of mass-spring systems to a required accuracy, producing visually more realistic results at a reduced computational cost. Detection of inaccuracy is performed using an angle criterion between springs joining a mass from opposite directions. The response is the addition of masses and springs around the area where the discontinuity occurs: point masses keep the same value but spring stiffnesses double at each level of refinement to prevent regions of increased mass from behaving differently. However, this approach is possible only for quadrilateral or hexahedral meshes (here, for simulating a deformable sheet) because they lend themselves naturally to regular subdivision. Debunne has submitted a mass-spring system to the same multiresolution aptitude test that was used for continuous models. Results demonstrated without ambiguity that the motion of an oscillating mass-spring system cannot have the same frequency and amplitude at different mesh resolutions.
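This resolution dependence is easy to reproduce analytically: discretizing the same physical spring at two resolutions yields chains with different vibration frequencies. The sketch below is our own toy construction (arbitrary units, lumped masses, fixed ends), not the test used by Debunne; it builds the stiffness matrix of a chain of point masses and compares fundamental frequencies:

```python
import numpy as np

def chain_frequencies(n, total_mass=1.0, total_stiffness=100.0):
    """Angular vibration frequencies of a fixed-fixed chain of n point
    masses joined by n+1 identical springs, discretizing one physical
    spring of the given total stiffness and total mass."""
    k = total_stiffness * (n + 1)   # subdividing stiffens each segment
    m = total_mass / n              # mass is lumped at the nodes
    # Tridiagonal stiffness matrix of the chain.
    K = k * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
    # Lumped mass matrix is m * I, so K v = w^2 M v reduces to eig(K / m).
    w2 = np.linalg.eigvalsh(K / m)  # ascending eigenvalues
    return np.sqrt(w2)

f_coarse = chain_frequencies(3)[0]  # fundamental frequency, 3 masses
f_fine = chain_frequencies(9)[0]    # fundamental frequency, 9 masses
```

The two fundamental frequencies differ by more than 10%, even though both chains discretize the same physical spring; only in the limit of many masses do they approach the continuum value.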
Overview

The central idea of our approach is to represent strokes in 3D space, thus promoting the stroke to a full-fledged 3D entity. Even in 3D, we think that strokes are an excellent way to indicate the presence of a surface silhouette: several neighboring strokes reinforce the presence of a surface in the viewer's mind, while attenuated strokes may indicate imprecise contours or even hidden parts. Finding surface properties of objects from their silhouette is a classic hard problem in computer vision. The algorithms presented here do not address this issue, since our goal is to develop a drawing system rather than to perform geometric reconstruction. As a consequence, we develop approximate solutions that are appropriate in the context of interactive drawing and sketching. To enable the user to view stroke-based sketches from multiple viewpoints, we interpret 2D silhouette strokes as curves, and use a curvature estimation scheme to infer a local surface around the original stroke. This mechanism permits efficient stroke-based rendering of the silhouette from multiple viewpoints. In addition to stroke deformations, this includes variation of intensity according to the viewing angle, since the precision of the inferred local surface decreases as we move away from the initial viewpoint. It also includes relative stroke occlusion, and additive blending of neighboring strokes in the image. Apart from silhouette strokes, our system also provides line strokes that represent 1D elements. These have the ability to remain at a fixed position in space while still being occluded by surfaces inferred from silhouette strokes. They can be used to add 1D details to the sketches, such as the arrow symbols in the annotation example (see Fig. 3.21). Because strokes have to be positioned in space, we present an interface for 3D stroke input. The user always draws on a 2D plane which is embedded in space.
This plane is most often the screen plane, selected by changing the viewpoint. The depth location of this plane can be controlled either explicitly via the user interface
or implicitly by drawing onto an existing object. The user may also draw strokes that are not in the screen plane, but that join two separate objects. The combination of fast local surface reconstruction and graphics hardware rendering with OpenGL results in truly interactive updates when using our system. Finally, we show that our method can be applied to artistic illustration as well as annotation of existing 3D scenes, e.g., for rough landscaping or educational purposes. An existing 3D object can also be used as a guide to allow the design of more involved objects, e.g., using a model mannequin to create 3D sketches for clothing design. Part of this work has been previously published in the journal Computer Graphics Forum, Eurographics conference issue (Bourguignon et al., 2001).
3.2 Previous Work
Our work is a natural continuation of the 3D drawing and sketching tools which have been developed in computer graphics over the last few years. Before giving an overview of the related papers, we would like to recall the pioneering work of Sutherland in this area, forty years ago (Sun microsystems, 2002). His two systems shaped the two main trends in 3D drawing interfaces we still see today. In 1963, using the high-end TX-2 computer, Sutherland invented the first interactive computer graphics application, which he dubbed Sketchpad (see Fig. 3.2a). The TX-2 computer, at the Lincoln Laboratory of the Massachusetts Institute of Technology (MIT), was one of the few computers of the day that could run on-line instead of only crunching batch jobs. It had huge memory capacity, magnetic tape storage and various input and output devices; among them, two extremely important pieces of equipment: a lightpen and a nine-inch cathode-ray tube (CRT) display. Using this simple but powerful interface and the Sketchpad program, precise engineering drawings could be created and manipulated. Many concepts that are now common in GUIs were defined by this revolutionary software, e.g., rubber-banding of lines, zooming in and out, automatic beautification of lines, corners and joints, etc. A few years later, in 1968, Sutherland presented the first computer head-mounted display (see Fig. 3.2b). This work was inspired by early experiments, such as a remote perception project at Bell Helicopter Company, where HMDs were used to control distant cameras. Replacing the real-world images by computer-generated images let the user enter the first virtual reality (VR) environment, composed of a single wireframe room with one door and three windows in each of the cardinal directions. Nowadays researchers have realized the importance of providing usable tools for the initial phase of design, beyond traditional 3D modeling.
These tools have taken the form of 3D drawing or sketching systems, using direct 3D input, where the user draws in 3D and the computer gives him the necessary visual feedback, or 2D stroke interfaces, where the user draws in 2D and the computer infers 3D strokes or a 3D object.
Figure 3.7: Results from Pugh (1992). From left to right: a three-dimensional object is inferred by the Viking system using geometric constraints, either implicitly derived from the drawing or explicitly specified by the user, e.g., hidden-edge or redundant-edge identification.

Akeo et al. (1994) describe a system that uses cross-section lines on a designer's drawing to automatically generate a three-dimensional model of the object (see Fig. 3.8). The input sketch is analyzed as follows. First, the lines are extracted from the drawing using image processing techniques. Then, three-point perspective information associated with shape cross-sections is used to infer the relative position of the lines in three dimensions. Finally, closed loops are detected to create B-spline surfaces. The authors' description is not very detailed but they stress the problem of system sensitivity to sketch inaccuracies, e.g., inconsistent vanishing points and varying line widths. The user provides missing data when automatic sketch processing fails.
Figure 3.8: Results from Akeo et al. (1994). From left to right: idea sketch, sketch augmented
with shape cross-section lines, 3D model editing interface.
The IDeS (intuitive design) system (Branco et al., 1994) combines sketch input with common features of solid modelers, such as constructive solid geometry (CSG) operators. The user can perform four different tasks with the system. First, sketching a 3D model, as in a classic 2D drawing program: the object must be drawn without hidden lines and in general position, i.e., a position avoiding alignment between edges and vertices. Each time the user draws a line, junctions are analyzed and classified. A junction dictionary stores information on each junction type that will be used by the reconstruction. When the user has finished, the system attempts a reconstruction in two steps: compute the fully and partially visible faces; infer the hidden faces of the model. Second, using a modeling tool: basic shapes such as extruded solids can be constructed directly with the appropriate tool. If information is missing to apply the modeling operation, the system will wait for the user to provide it by drawing. Third, editing a 3D model: various editing operations are available, e.g., direct drawing over the surface of the model (gluing). Fourth, explaining a sketch to the system: the "is a" operator allows the user to distinguish between 2D and 3D models, such as a circle and a sphere, or to identify a regular shape from imprecise input, such as a straight line segment from a freehand line. Eggli et al. (1995, 1997) present a 2D and 3D modeling tool that takes simple pen strokes as input. A graph-based constraint solver is used to establish geometrical relationships and to maintain them when objects are manipulated. Two-dimensional shapes, such as lines, circles, arcs or B-spline curves, and geometrical relationships, such as right angles, tangency, symmetry and parallelism, are interpreted automatically from the strokes. This information is used to beautify the drawing and establish constraints (see Fig. 3.9a, top).
Since inferring a 3D object from arbitrary 2D input is impossible in the general case, specific drawing techniques that have an unambiguous interpretation in 3D are used. Extrusion surfaces are generated by sweeping a 2D profile along a straight line; ruled surfaces are defined between two curves; sweep surfaces are created by sweeping a cross-section along a curve; revolution surfaces are determined using two approximately symmetric silhouette lines (see Fig. 3.9a, bottom). The user can also draw lines on faces of existing objects. Tolerance in stroke interpretation is necessary to cope with inexact input. However, if interpretation does
have described the effect on the viewer of adjusting the degree of precision in the rendering of a scene, to produce images ranging from rough charcoal sketches to detailed pen-and-ink illustrations. The former are more suitable to convey a work-in-progress feeling than the latter, since the information transmitted is less precise. Nonetheless, both are rendered using the same geometric data. Why is it necessary to build a complete model to render a rough sketch? Aren't there weaker forms of knowledge about the geometry that would suffice? We see this as an open problem, involving human cognition issues: how much information about an object is really needed to produce a draft of it? And one of its subproblems concerns the mapping from geometry space to drawing space: can all drawings be generated from geometrical information only?
3.3 Drawing and Rendering 3D Strokes
In order to render a sketch from multiple viewpoints, we consider strokes as three-dimensional entities. Two kinds of strokes are used in our system: line strokes that represent 1D detail, and silhouette strokes that represent the contour of a surface. This is the case for both open and closed strokes. For line strokes, we use a Bézier space curve for compact representation. These strokes are rendered using hardware, and behave consistently with respect to occlusion. Silhouette strokes in 3D are more involved: a silhouette smoothly deforms when the viewpoint changes. Contrary to line strokes, a silhouette stroke is not located at a fixed space position. It may rather be seen as a 3D curve that slides across the surface that generates it. Our system infers the simplest surface, i.e., one with the same local curvature in 3D as that observed in 2D. For this we rely on the differential geometry properties of the user-drawn stroke, generating a local surface around it. But the degree of validity of this surface decreases when the camera moves. Therefore, we decrease the intensity of the silhouette as the point of view gets farther from the initial viewpoint. This allows the user to either correct or reinforce the attenuated stroke by drawing the silhouette again from the current viewpoint.
3.3.1 Local Surface Estimation from 2D Input
Since the inferred local surface will be based on the initial stroke curvature, the first step of our method is to compute the variations of this curvature along each 2D silhouette stroke drawn by the user. We start by fitting each 2D silhouette stroke segment to a piecewise cubic Bézier curve. This representation is more compact than a raw polyline for moderately complex curve shapes. The fitting process is based on the algorithm of Schneider (1990a); we briefly review it next. First, we compute approximate tangents at the endpoints of the digitized curve. Second, we assign an initial parameter value to each point using chord-length parameterization. Third, we compute the position of the second and third control points of a Bézier curve by minimizing the sum of the squared distances from
each digitized point to its corresponding point on the Bézier curve. Fourth, we compute the fit error as the maximum distance between the digitized and fitted curves, and we note the digitized point of maximum error. Fifth, if this error is above threshold, we try to improve the initial parameterization by a nearest-point-on-curve search using a Newton-Raphson method (see below) and search for a new Bézier curve; if this fails, we break the digitized points into two subsets and recursively apply the fit algorithm to the subsets. Then, each control point V_i of the piecewise cubic Bézier curve Q_3 is associated with a given value of the parameter u along the curve. From the definition of a cubic Bézier curve (see Appendix B), we obtain immediately u_V0 = 0 and u_V3 = 1, but u_V1 and u_V2 are not defined because Bézier curves are approximation splines. However, we can determine a parameter value corresponding to the point on the curve nearest to the control point. For this, we apply the method of Schneider (1990b): we look for the values of u that are roots of the equations

[Q_3(u) − V_1] · Q_3′(u) = 0   and   [Q_3(u) − V_2] · Q_3′(u) = 0
since they define the parameter values for the points on the curve nearest to each control point. These roots can be approximated using the Newton-Raphson method, a classic one-dimensional root-finding iterative routine. The initial estimates for the roots are obtained with simple trigonometry (see Fig. 3.13):

u = [(V_1 − V_0) · (V_3 − V_0)] / ‖V_3 − V_0‖²   and   u = 1 − [(V_2 − V_3) · (V_0 − V_3)] / ‖V_0 − V_3‖²
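The Newton-Raphson iteration on f(u) = [Q_3(u) − V] · Q_3′(u) can be sketched as follows; this is a minimal illustration for a single cubic Bézier segment, with hypothetical control points, not the full fitting pipeline:

```python
import numpy as np

def bezier3(ctrl, u):
    """Point on a cubic Bezier curve; ctrl is a 4x2 array of control points."""
    b = np.array([(1-u)**3, 3*u*(1-u)**2, 3*u**2*(1-u), u**3])
    return b @ ctrl

def bezier3_d1(ctrl, u):
    """First derivative Q'(u), a quadratic Bezier curve."""
    d = 3.0 * (ctrl[1:] - ctrl[:-1])
    b = np.array([(1-u)**2, 2*u*(1-u), u**2])
    return b @ d

def bezier3_d2(ctrl, u):
    """Second derivative Q''(u), a linear Bezier curve."""
    d = 3.0 * (ctrl[1:] - ctrl[:-1])
    dd = 2.0 * (d[1:] - d[:-1])
    return (1-u) * dd[0] + u * dd[1]

def nearest_parameter(ctrl, p, u0, iters=10):
    """Newton-Raphson on f(u) = (Q(u) - p) . Q'(u) = 0, clamped to [0, 1]."""
    u = u0
    for _ in range(iters):
        q, q1, q2 = bezier3(ctrl, u), bezier3_d1(ctrl, u), bezier3_d2(ctrl, u)
        f = (q - p) @ q1
        fp = q1 @ q1 + (q - p) @ q2   # f'(u)
        if abs(fp) < 1e-12:
            break
        u = min(1.0, max(0.0, u - f / fp))
    return u

# Hypothetical control points; the initial estimate projects V1 onto the
# chord V0V3, as in the formulas above.
V = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 1.0], [3.0, 0.0]])
u0 = (V[1] - V[0]) @ (V[3] - V[0]) / ((V[3] - V[0]) @ (V[3] - V[0]))
u1 = nearest_parameter(V, V[1], u0)
```

After a few iterations the residual f(u) is essentially zero, i.e., the vector from the control point to the curve is orthogonal to the tangent there.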
Figure 3.13: Solving the nearest-point-on-curve problem (Schneider, 1990b). Parameter values for the points on the cubic Bézier curve nearest to control points V_1 and V_2 are obtained using the Newton-Raphson method. Initial estimates for the parameters are given by the projections of V_1 and V_2 onto the chord V_0V_3.
For each parameter value u associated with a control point V, we find the center of curvature C = (ξ, η) by first computing the derivatives of the position coordinates and then solving the following equations (Bronshtein and Semendyayev, 1998):
ξ = x − y′ (x′² + y′²) / (x′ y″ − x″ y′)
η = y + x′ (x′² + y′²) / (x′ y″ − x″ y′)
where x′ and x″ are the first and second derivatives of x with respect to u. Therefore, we obtain a curvature vector between a point on the curve at parameter u and its associated center of curvature C (see Fig. 3.14a). We will use these curvature vectors to reconstruct local 3D surface properties. However, if the stroke is completely flat, the norm of the curvature vector, i.e., the radius of curvature, becomes infinite; the method we present next solves this problem. In order to infer a plausible surface in all cases, we use a heuristic based on the curve's length to limit the radius of curvature. One way of looking at this process is as an attempt to fit circles along the stroke curve. Thus, if we encounter many inflection points, the circles fitted should be smaller, and the local surface should be narrower; in contrast, if the curve has few inflection points, the local surface generated should be broader. To achieve this, we construct axis-aligned bounding boxes of the control polygon of the curve between each pair of inflection points. Inflection points can be found easily since we are dealing with a well-defined piecewise cubic Bézier curve (see Appendix B). They are either the common control point of two head-to-foot cubic Bézier curves of type I (see Fig. 3.13, left) or are located on a cubic Bézier curve of type II (see Fig. 3.13, right). We discard bounding boxes which are either too small or too close to the curve extremities. If the norm of the curvature vector is larger than a certain fraction of the largest dimension of the bounding box computed previously, it is clamped to this value (see Fig. 3.14b). We use a fraction value at most equal to 1/2, which gives a length equal to the radius of a perfect circle stroke. We also impose a consistent in-out orientation of the curve based on the orientation of the curvature vectors in the first bounding box computed, thus implicitly considering the initial user input as giving the correct orientation (see Fig. 3.14c).
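A direct transcription of the center-of-curvature formulas, with a guard for the flat-stroke case in which the radius becomes infinite (the clamping against the bounding-box size described above is omitted for brevity):

```python
import numpy as np

def curvature_center(q1, q2, point):
    """Center of curvature C = (xi, eta) of a planar parametric curve at
    'point', from the first (q1) and second (q2) derivatives w.r.t. u:
        xi  = x - y'(x'^2 + y'^2) / (x'y'' - x''y')
        eta = y + x'(x'^2 + y'^2) / (x'y'' - x''y')"""
    x1, y1 = q1
    x2, y2 = q2
    denom = x1 * y2 - x2 * y1
    if abs(denom) < 1e-12:
        return None  # flat stroke: infinite radius, handled by clamping
    s = (x1 * x1 + y1 * y1) / denom
    return np.array([point[0] - y1 * s, point[1] + x1 * s])

# Sanity check on a unit circle: at point (1, 0) the tangent is (0, 1)
# and the second derivative is (-1, 0); the center of curvature is the
# circle's center, the origin.
c = curvature_center((0.0, 1.0), (-1.0, 0.0), (1.0, 0.0))
```

The curvature vector of the text is then C minus the curve point, and its norm is the radius of curvature to be clamped.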
This intuitive choice corresponds to the intent of the user most of the time. If not, a button in the GUI can be used to invert all the curvature vectors along the stroke. From these 2D strokes, we infer local surface properties, which are then used to create a 3D stroke representation. Each center of curvature embedded in the drawing plane is considered as the center of a circle lying in a plane perpendicular to the drawing plane and passing through the corresponding control point (see Fig. 3.15a). We consider an arc of 2π/3 radians for each circle, thus defining a piecewise tensor product surface by moving each control point along its circle arc (see Fig. 3.15b). This piecewise Bézier surface is quadratic in one dimension, corresponding to a good approximation of a circle arc, and cubic in the other, which corresponds to the stroke curve. To define the quadratic Bézier curve easily, we express the position of its middle control point as a ratio of the height of the equilateral triangle whose base is defined by the two other control points, of known positions (see Fig. 3.15c). We found the optimal ratio iteratively by measuring the maximum distance between points on the Bézier and on
A matte is an image or signal that represents or carries only transparency information that is intended to overlay or control another image or signal (Hapeman et al., 2001). An acceptable synonym for a binary matte is a mask.
negating the result, and masking it with the original matte. As opposed to low pass filtering, pyramidal filtering adapts the shading to the scale of the silhouette, so that shading is correct for large regions as well as small ones (pyramidal airbrushing). Techniques that infer a 3D model from outlines (silhouette inflation) allow 3D shading algorithms to be applied in 2D animation. Using a classic reference image (see Fig. 4.4a), a possible solution to infer a 3D surface is the shape from shading method from computer vision (see Fig. 4.4b). The author suggests several other methods: automatic segmentation into superquadrics (by correlating the image with a family of superquadric silhouettes), manual segmentation into symmetry-seeking generalized cylinders (user-defined cylinders are then automatically fit to image segments), and automatic inflation by masked pyramidal convolution. This last technique applies a series of Gaussian filters of decreasing radii to each region considered separately from the others. After each pass, the blurred image is masked with the original image to limit inflation to the inside of the silhouette (see Fig. 4.4c and Fig. 4.4d).
Figure 4.4: Results from Williams (1991). Various inflations of Pablo Picasso's Rite of Spring (a), a classic reference image: using the shape from shading algorithm (b); using a masked pyramidal convolution, inflation displayed as an image (c), or as a relief (d), with non-standard video hardware (Williams, 1990).
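The masked pyramidal convolution can be sketched as follows (a NumPy illustration; the helper names and the particular schedule of Gaussian radii are assumptions of this sketch, as the paper does not fix them):

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian filter, zero-padded at the borders."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    kernel /= kernel.sum()
    rows = np.apply_along_axis(np.convolve, 1, img, kernel, mode="same")
    return np.apply_along_axis(np.convolve, 0, rows, kernel, mode="same")

def inflate(mask, sigmas=(8.0, 4.0, 2.0, 1.0)):
    """Masked pyramidal convolution: blur with Gaussian filters of
    decreasing radii, masking after each pass so that the inflation
    stays inside the silhouette (mask is 1 inside the region)."""
    height = mask.astype(float)
    for sigma in sigmas:
        height = gaussian_blur(height, sigma) * mask
        height /= max(height.max(), 1e-12)  # renormalize to [0, 1]
    return height
```

Masking after each pass, rather than once at the end, is what keeps the inflated height field from leaking across the silhouette while still letting the large-radius passes round off the interior.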
GRADED (van Overveld, 1996) is an interactive system for designing freeform surfaces using shading information. Interestingly, the author notes that current computer design tools have direct manipulation techniques at the antipodes of those of non-computer tools. In fact, with CAD systems, techniques for designing surfaces are
more frequently 0D (control points) than 1D (boundary curves of Coons and Gordon patches), and are exceptionally 2D (such as Williams's 3D paint), as opposed to what is observed with traditional design tools. In GRADED, surfaces are represented as height fields and manipulated by editing a depth buffer or a gradient buffer. To compute local illumination, the color at each point is considered as a function of the normal vector, obtained using the gradient value. Editing the depth buffer automatically updates the gradient buffer and thus has a visible effect on surface shading. Classic paint system tools and image processing operations are available: direct color editing (tinting), opaque painting, smoothing, etc. But they must be interpreted in a depth image context, e.g., tinting (adding color) corresponds to raising (adding material). Brushes are fully configurable in shape, orientation, size, and profile. Conversely, editing the gradient buffer allows the surface to be modified by modifying its shaded image. Since there is a one-to-one relation between shade color, normal vector and gradient, a shape from shading algorithm is not necessary. The user simply selects a shade color (corresponding to a unique orientation) on an illuminated sphere, and paints it in the gradient buffer. To enforce that the current gradient distribution corresponds to a consistent depth distribution, the conservation constraint (stating that the accumulated depth variation over any closed loop must be equal to zero) is propagated over the surface by an iterative algorithm (see Fig. 4.5a). Surfaces created with GRADED can be converted into gradient maps, for bump mapping, or into triangular meshes. However, the author notes that this system is more adapted to refining existing polygonal surfaces (previously scan-converted to depth maps) than to sculpting entire shapes from scratch (see Fig. 4.5b and Fig. 4.5c).
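The paper does not spell out the iterative algorithm; as an illustration only, one common way to enforce such a consistency constraint is to relax the depth buffer toward the least-squares solution of the painted gradient field, i.e., a Jacobi iteration on the associated Poisson equation (a sketch, not GRADED's actual algorithm; the function name and the forward-difference convention are assumptions):

```python
import numpy as np

def depth_from_gradient(gx, gy, iterations=4000):
    """Find a depth buffer whose gradient matches (gx, gy) in the
    least-squares sense, by Jacobi relaxation of the Poisson equation
    laplacian(z) = div(gx, gy). Forward differences are assumed for the
    gradients, with the last column of gx and last row of gy set to 0."""
    h, w = gx.shape
    # divergence of the target gradient field (backward differences)
    div = gx + gy
    div[:, 1:] -= gx[:, :-1]
    div[1:, :] -= gy[:-1, :]
    z = np.zeros((h, w))
    for _ in range(iterations):
        zp = np.pad(z, 1, mode="edge")  # homogeneous Neumann boundary
        z = (zp[:-2, 1:-1] + zp[2:, 1:-1]
             + zp[1:-1, :-2] + zp[1:-1, 2:] - div) / 4.0
    return z - z.mean()  # depth is recovered up to an additive constant
```

If the painted gradients are already consistent (zero accumulated variation around every loop), the relaxation reproduces the original depth exactly, up to the free constant; otherwise it converges to the closest consistent depth distribution.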
Figure 4.7: Results from Johnston (2002). Normals approximation: original drawing (a),
region-based or simple blobby normals (b), compound region-based or compound blobby normals (c), line-based or quilted normals (d), and blended normals (e).
operations: push, pull, smooth, and erase. Pushing translates vertices in the direction of the tool reference vector, by an amount dependent on their current displacement w.r.t. the reference surface, and in a way dependent on the brush stamp profile (radius, opacity, shape); pulling does the same thing, but in the opposite direction (see Fig. 4.8a). The surface subdivision density has an influence on the precision of the result (see Fig. 4.8b). The reference vector is defined in the tool settings, and possible values are: surface normal, surface normal at the beginning of the stroke, camera view vector, x, y or z axis (see Fig. 4.8c). The reference surface is defined as the surface at the beginning of the sculpting session. Its vertices cannot be translated any further than the maximum displacement value in the tool settings. However, a larger displacement is possible if the user updates the reference surface during the session. This can be done automatically after each stroke so that stroke displacements become additive. Smoothing reduces bumps in the surface. It can be set to be applied automatically after each stroke. Erasing resets vertex displacements to their values in the erasing surface (see Fig. 4.8d and Fig. 4.8e). This surface is equivalent to the reference surface of push-pull operations. Initially, they are identical; thereafter, the erasing surface is updated independently. Other operations include: creating masks, to prevent areas of the surface from being affected by sculpting; and flooding the surface, to apply the current brush operation to the entire surface. ZBrush (Pixologic, 2002) is a modeling software with a painting metaphor. The polygon sculpting capabilities of ZBrush are very similar to Maya's Artisan Sculpt Polygons Tool (Alias|wavefront, 2002b), presented previously.
Starting with any 3D polyhedron (over a dozen primitives are already proposed, such as spheres, cylinders, cubes, etc., with variable mesh density), the user can sculpt the shape by pushing and pulling vertices, i.e., by painting onto the object with push and pull brushes. Higher-level deformation tools are also provided to twist, bend, or inflate the shape, and even simulate gravity (see examples on Fig. 4.9). The polygonal surface can be smoothed, refined or simplified. These modifications can be constrained using axial or radial symmetries, or restricted to a region by masking. Masks can be defined directly by painting on the object. However powerful these sculpting tools are, the real innovation
a number that is the distance between that pixel and the nearest nonzero pixel of bw. fspecial('gaussian', hsize, sigma) returns a rotationally symmetric Gaussian low pass filter of size hsize with positive standard deviation sigma. If hsize is a scalar, the filter is a square matrix. imfilter(a, h, 'replicate') filters the multidimensional array a with the multidimensional filter h. With the 'replicate' option enabled, input array values outside the bounds of the array are assumed to equal the nearest array border value.
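For readers without the toolbox, the distance transform can be sketched in a few lines of NumPy (a brute-force version, quadratic in the number of nonzero pixels, for illustration only; MATLAB's bwdist uses a much faster algorithm):

```python
import numpy as np

def bwdist(bw):
    """Euclidean distance from each pixel to the nearest nonzero pixel
    of bw (assumes bw contains at least one nonzero pixel)."""
    ys, xs = np.nonzero(bw)
    gy, gx = np.mgrid[0:bw.shape[0], 0:bw.shape[1]]
    # squared distance from every pixel to every nonzero pixel
    d2 = (gy[..., None] - ys) ** 2 + (gx[..., None] - xs) ** 2
    return np.sqrt(d2.min(axis=-1))
```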
Figure 4.12: Inferring a height field. This is done in three steps. First, from the drawing mask (a), one obtains its Euclidean distance transform (b), which is mapped to a unit sphere height field (c), and then adaptively low pass filtered (d). Second, the drawing image (e) is filtered (f) with the same filter that was used in (d). Third, the previous height field (d) is used as a matte of the filtered image (f) to give the final height field (g).
Height Field Polygonal Approximation Finally, using an algorithm inspired from Garland and Heckbert (1995), a polygonal surface approximating the previous height field is computed, minimizing both the error and the number of triangles. Starting from the previous constrained Delaunay triangulation (see Section 4.3.2) as an initial approximation, the algorithm, at each iteration, finds the input point with the highest error in the current approximation and inserts it as a new vertex in the triangulation (see Algorithm 4.3 and results on Fig. 4.13). The error metric used is the L∞ error, also called the maximum error. For two vectors of
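The greedy insertion loop can be illustrated in one dimension (a NumPy sketch, not the thesis implementation: a polyline stands in for the triangulation, but the highest-error insertion rule and the L∞ metric are the same):

```python
import numpy as np

def greedy_insert(heights, n_points):
    """Greedy insertion: start from the two endpoints and repeatedly
    insert the sample where the piecewise-linear approximation has the
    largest L-infinity error, until n_points vertices are used."""
    x = np.arange(len(heights), dtype=float)
    knots = [0, len(heights) - 1]
    while len(knots) < n_points:
        k = np.array(sorted(knots))
        err = np.abs(heights - np.interp(x, x[k], heights[k]))
        if err.max() == 0.0:  # approximation is already exact
            break
        knots.append(int(np.argmax(err)))
    return sorted(knots)
```

On a V-shaped profile, for instance, the first insertion lands exactly on the kink, after which the piecewise-linear approximation is exact; this is why the greedy rule tends to spend vertices where the height field has sharp features.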
% input: two image files (drawing image and drawing mask)
% output: a two-dimensional array of real values (height field)
image = im2double(imread('image.png'));
mask = im2double(imread('mask.png'));
% Euclidean distance transform
dist = bwdist(mask);
dmax = max(dist(:));
% mapping to unit sphere height field
sphere = sqrt(1 - (1 - dist/dmax).^2);
hsize = round(dmax/2);
sigma = hsize/2;
% two-dimensional Gaussian filter
h = fspecial('gaussian', hsize, sigma);
spheref = imfilter(sphere, h, 'replicate');
imagef = imfilter(image, h, 'replicate');
field = imagef.*spheref;
Algorithm 4.2: MATLAB code for computing the height field. Most of the functions come from
Chapter 5. Conclusion and Future Work
ble system for this thesis defense. As a first future work, the two aspects of our thesis, animation and modeling, could be integrated in a system of animated schemas, which would be very helpful in the teaching of dynamical phenomena such as the cardiac beat. This system would close the loop between the other systems we have proposed: a teacher could first draw a 3D illustration, then specify parameters by drawing, and finally animate it, thus creating an animated functional schema that can be observed from several viewpoints. Our second future work is more involved. In order to evaluate pedagogical applications of these techniques in the medical school curriculum, and in particular for anatomy courses, we would like to test our software tools in a real-life setting with a group of volunteer students. This testing would have to be performed according to rigorous ergonomics methodologies, as we advocated in Section 1.2. Finally, we wonder about the possible future of these technologies in the classroom. Before risking this perilous exercise, examining the present situation will give us a good start. Today, the computer has entered the classroom. Long ago, the old blackboard, and dusty, inexpensive chalk, were replaced by the new whiteboard, and the solvent-free, expensive marker. Cleanliness replaced expressiveness (try to obtain with a marker the subtle stroke effects you can achieve with chalk). But the whiteboard triumph was short. Now, the poor whiteboard is often used as a projection screen that reflects digital slides projected from a computer. Thus, in a way, students are now subject to conditions quite equivalent to those of a movie theater: a bright screen displaying nice pictures, a voice commenting what is on the screen, darkness for a good contrast, and, unfortunately, either passivity (watching the show), or discouragement (too much information, too fast, therefore impossible to take notes).
Indirectly, this also has consequences on teachers: the pedagogical drawing know-how slowly disappears with the retiring faculty members. Reintroducing writing and drawing as fundamental learning tools is one solution for reversing the trend. In the classroom of tomorrow, the motto could be drawing as a front-end to everything (Gross and Do, 1996), either as traditional paper and pencil, as computer-assisted two-dimensional input, or even as something we cannot imagine yet. There are so many possibilities. Drawing: not only because it is one of the few thought-supporting activities;1 not only because it is one of the few elementary interfaces;2 but also simply because it is part of our human nature.3