Camera Calibration
Camera calibration lets you work out the geometric relationship between 3D space and the camera observing it.
A reminder that this is the most basic of basics.
Helpful articles:
http://openshareit.tistory.com/entry/Camera-Calibration-with-Homography
http://darkpgmr.tistory.com/32
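The core idea behind those articles can be sketched numerically: calibration gives you an intrinsic matrix K and an extrinsic pose [R|t], which together project a 3D point to a pixel. The values below are made-up illustrative numbers, not real calibration output.

```python
def project_point(K, R, t, X):
    """Project a 3D world point X into pixel coordinates using
    intrinsics K (3x3) and extrinsics [R|t] (pinhole camera model)."""
    # Transform to camera coordinates: Xc = R @ X + t
    Xc = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
    # Apply intrinsics: p = K @ Xc
    p = [sum(K[i][j] * Xc[j] for j in range(3)) for i in range(3)]
    # Perspective divide to get pixel coordinates
    return (p[0] / p[2], p[1] / p[2])

# Hypothetical intrinsics: focal length 800 px, principal point (320, 240)
K = [[800, 0, 320],
     [0, 800, 240],
     [0,   0,   1]]
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # identity rotation
t = [0, 0, 0]                          # no translation

# A point 2 m straight ahead projects onto the principal point
print(project_point(K, R, t, [0.0, 0.0, 2.0]))  # → (320.0, 240.0)
```

Calibration is exactly the inverse problem: given many known 3D-to-pixel correspondences (e.g. chessboard corners), recover K, R and t.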
Accelerometer
Calculating gravitational acceleration on Android [PDF]
http://www.tipssoft.com/bulletin/board.php?bo_table=FAQ&wr_id=1046
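The arithmetic behind that article is language-neutral: a static accelerometer reading is dominated by gravity, so its magnitude should be about 9.81 m/s² and its direction tells you the tilt. A minimal sketch with made-up sample values:

```python
import math

def magnitude(ax, ay, az):
    """Total acceleration; about 9.81 m/s^2 when the device is at rest."""
    return math.sqrt(ax * ax + ay * ay + az * az)

def tilt_from_accel(ax, ay, az):
    """Estimate roll and pitch (radians) from a static reading,
    assuming only gravity is acting on the sensor."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    return roll, pitch

# Device lying flat: gravity entirely on the z-axis
print(magnitude(0.0, 0.0, 9.81))        # ≈ 9.81
print(tilt_from_accel(0.0, 0.0, 9.81))  # roll and pitch both ≈ 0
```

Note the assumption: the moment the device accelerates (shake, turn, vibration), the tilt estimate is corrupted, which is why accelerometer angles are usually fused with a gyroscope.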
Gyroscope
http://tom.pycke.be/mav/70/gyroscope-to-roll-pitch-and-yaw
As you probably remember from your physics class, position, velocity and acceleration are related to each other: differentiating the position gives us velocity:
dx/dt = vx
with x being the position on the x-axis and vx being the velocity along the x-axis.
Maybe less obviously, the same holds for angles. While velocity is the speed at which the position changes, angular rate is nothing more than the speed at which the angle changes. That's right:
dα/dt = angular rate = gyroscope output
with α being the angle. It's starting to look pretty good! Knowing that the inverse of differentiation (d/dt) is integration (∫), we change our formulas into:
∫ angular rate dt = ∫ gyroscope output dt = α
Woohoo, we found a relation between angle (attitude!) and our gyroscope's output: integrating the gyroscope output gives us our attitude angle.
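In discrete time this integration is just a running sum. A minimal sketch, assuming a fixed sample period and an ideal bias-free, noise-free gyroscope — exactly the assumptions that break down on real hardware:

```python
def integrate_gyro(rates, dt):
    """Accumulate angular-rate samples (deg/s) into an angle (deg)
    by rectangular integration: alpha += rate * dt each sample."""
    alpha = 0.0
    for rate in rates:
        alpha += rate * dt
    return alpha

# 100 samples of a constant 10 deg/s rate at 100 Hz
print(integrate_gyro([10.0] * 100, 0.01))  # ≈ 10 degrees after one second
```

In practice the gyro output contains a slowly drifting bias, and integration accumulates that bias into an ever-growing angle error, which motivates the Kalman filtering material below.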
Kalman filter resources
http://blog.naver.com/hangondragon/
About Kalman Filter
http://tom.pycke.be/mav/71/kalman-filtering-of-imu-data
In the tutorial on gyroscopes, we saw that the bias drifts. Well, here comes the Kalman magic: the filter will adjust the bias in each iteration by comparing the result with the accelerometer's output (our second input)! Great!
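That bias-adjusting behaviour can be shown with the common two-state (angle, gyro bias) textbook formulation of the filter. This is a sketch of that standard formulation, not the code from the linked articles, and the noise constants are illustrative guesses, not tuned values:

```python
class TiltKalman:
    """Two-state Kalman filter: estimates the angle and the gyro bias,
    correcting the prediction with accelerometer-derived angles."""

    def __init__(self, q_angle=0.001, q_bias=0.003, r_measure=0.03):
        self.angle = 0.0
        self.bias = 0.0
        self.P = [[0.0, 0.0], [0.0, 0.0]]  # error covariance
        self.q_angle, self.q_bias, self.r = q_angle, q_bias, r_measure

    def update(self, gyro_rate, accel_angle, dt):
        # Predict: integrate the bias-corrected gyro rate
        self.angle += dt * (gyro_rate - self.bias)
        P = self.P
        P[0][0] += dt * (dt * P[1][1] - P[0][1] - P[1][0] + self.q_angle)
        P[0][1] -= dt * P[1][1]
        P[1][0] -= dt * P[1][1]
        P[1][1] += self.q_bias * dt
        # Correct: compare the prediction with the accelerometer angle
        y = accel_angle - self.angle
        s = P[0][0] + self.r
        k0, k1 = P[0][0] / s, P[1][0] / s
        self.angle += k0 * y
        self.bias += k1 * y           # <-- the "Kalman magic": bias is re-estimated
        p00, p01 = P[0][0], P[0][1]
        P[0][0] -= k0 * p00
        P[0][1] -= k0 * p01
        P[1][0] -= k1 * p00
        P[1][1] -= k1 * p01
        return self.angle

# Stationary sensor at 30 deg tilt; the gyro reports a pure 2 deg/s bias.
kf = TiltKalman()
for _ in range(5000):              # 50 s of samples at 100 Hz
    kf.update(gyro_rate=2.0, accel_angle=30.0, dt=0.01)
print(kf.angle, kf.bias)           # angle converges toward 30, bias toward 2
```

The usage loop shows why this beats either sensor alone: the integrated gyro would drift off by 2 °/s forever, while the filter learns the bias from the accelerometer and settles on the true angle.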
Kalman filter source code
https://sites.google.com/site/jordiuavs/Home/kalman_filter_by_Jordi.txt?attredirects=0
http://www.cs.unc.edu/~tracker/media/pdf/SIGGRAPH2001_CoursePack_08.pdf
http://bilgin.esme.org/BitsBytes/KalmanFilterforDummies.aspx
http://realsys.co.kr/data/hobby/6_%EC%B9%BC%EB%A7%8C%ED%95%84%ED%84%B0.pdf
Apple iPad vs. iPad 2

Test | Apple iPad (PowerVR SGX 535) | Apple iPad 2 (PowerVR SGX 543MP2)
---|---|---
Array test - uniform array access | 3412.4 kVertex/s | 3864.0 kVertex/s
Branching test - balanced | 2002.2 kShaders/s | 11412.4 kShaders/s
Branching test - fragment weighted | 5784.3 kFragments/s | 22402.6 kFragments/s
Branching test - vertex weighted | 3905.9 kVertex/s | 3870.6 kVertex/s
Common test - balanced | 1025.3 kShaders/s | 4092.5 kShaders/s
Common test - fragment weighted | 1603.7 kFragments/s | 3708.2 kFragments/s
Common test - vertex weighted | 1516.6 kVertex/s | 3714.0 kVertex/s
Geometric test - balanced | 1276.2 kShaders/s | 6238.4 kShaders/s
Geometric test - fragment weighted | 2000.6 kFragments/s | 6382.0 kFragments/s
Geometric test - vertex weighted | 1921.5 kVertex/s | 3780.9 kVertex/s
Exponential test - balanced | 2013.2 kShaders/s | 11758.0 kShaders/s
Exponential test - fragment weighted | 3632.3 kFragments/s | 11151.8 kFragments/s
Exponential test - vertex weighted | 3118.1 kVertex/s | 3634.1 kVertex/s
Fill test - texture fetch | 179116.2 kTexels/s | 890077.6 kTexels/s
For loop test - balanced | 1295.1 kShaders/s | 3719.1 kShaders/s
For loop test - fragment weighted | 1777.3 kFragments/s | 6182.8 kFragments/s
For loop test - vertex weighted | 1418.3 kVertex/s | 3813.5 kVertex/s
Triangle test - textured | 8691.5 kTriangles/s | 29019.9 kTriangles/s
Triangle test - textured, fragment lit | 4084.9 kTriangles/s | 19695.8 kTriangles/s
Triangle test - textured, vertex lit | 6912.4 kTriangles/s | 20907.1 kTriangles/s
Triangle test - white | 9621.7 kTriangles/s | 29771.1 kTriangles/s
Trigonometric test - balanced | 1292.6 kShaders/s | 3249.9 kShaders/s
Trigonometric test - fragment weighted | 1103.9 kFragments/s | 3502.5 kFragments/s
Trigonometric test - vertex weighted | 1018.8 kVertex/s | 3091.7 kVertex/s
Swapbuffer Speed | 600 | 599
Clash Of The Titans - Release The Kraken (from Pleribus on Vimeo)
A Clash of the Titans promotion, developed by Boffswana.
This topic provides two high-level views of the architecture of Direct3D:
The graphics pipeline provides the horsepower to efficiently process and render Direct3D scenes to a display, taking advantage of available hardware. The following diagram shows the building blocks of the pipeline:
Diagram of the Direct3D graphics pipeline
Pipeline Component | Description | Related Topics
---|---|---
Vertex Data | Untransformed model vertices are stored in vertex memory buffers. | Vertex Buffers (Direct3D 9), IDirect3DVertexBuffer9 |
Primitive Data | Geometric primitives, including points, lines, triangles, and polygons, are referenced in the vertex data with index buffers. | Index Buffers (Direct3D 9), IDirect3DIndexBuffer9, Primitives, Higher-Order Primitives (Direct3D 9) |
Tessellation | The tessellator unit converts higher-order primitives, displacement maps, and mesh patches to vertex locations and stores those locations in vertex buffers. | Tessellation (Direct3D 9) |
Vertex Processing | Direct3D transformations are applied to vertices stored in the vertex buffer. | Vertex Pipeline (Direct3D 9) |
Geometry Processing | Clipping, back face culling, attribute evaluation, and rasterization are applied to the transformed vertices. | Pixel Pipeline (Direct3D 9) |
Textured Surface | Texture coordinates for Direct3D surfaces are supplied to Direct3D through the IDirect3DTexture9 interface. | Direct3D Textures (Direct3D 9), IDirect3DTexture9 |
Texture Sampler | Texture level-of-detail filtering is applied to input texture values. | Direct3D Textures (Direct3D 9) |
Pixel Processing | Pixel shader operations use geometry data to modify input vertex and texture data, yielding output pixel color values. | Pixel Pipeline (Direct3D 9) |
Pixel Rendering | Final rendering processes modify pixel color values with alpha, depth, or stencil testing, or by applying alpha blending or fog. All resulting pixel values are presented to the output display. | Pixel Pipeline (Direct3D 9) |
The following diagram shows the relationships between a Windows application, Direct3D, GDI, and the hardware:
Diagram of the relationship between Direct3D and other system components
Direct3D exposes a device-independent interface to an application. Direct3D applications can exist alongside GDI applications, and both have access to the computer's graphics hardware through the device driver for the graphics card. Unlike GDI, Direct3D can take advantage of hardware features by creating a HAL device.
A HAL device provides hardware acceleration for graphics pipeline functions, based on the feature set supported by the graphics card. Direct3D provides methods to retrieve device display capabilities at run time (see IDirect3D9::GetDeviceCaps and IDirect3DDevice9::GetDeviceCaps). If a capability is not provided by the hardware, the HAL does not report it as a hardware capability.
For more information about HAL and reference devices supported by Direct3D, see Device Types (Direct3D 9).
The OpenGL pipeline is a series of processing stages executed in order. Two kinds of graphical data, vertex-based and pixel-based, are processed through the pipeline, combined, and then written into the frame buffer. Notice that OpenGL can also send processed data back to your application. (See the grey lines.)
A display list is a group of OpenGL commands that has been stored (compiled) for later execution. All data, both geometry (vertex) and pixel data, can be stored in a display list. It may improve performance since commands and data are cached in the display list. When an OpenGL program runs over a network, display lists also reduce data transmission: since display lists are part of the server state and reside on the server machine, the client machine needs to send commands and data to the server's display list only once. (See more details in Display List.)
Each vertex and its normal are transformed by the GL_MODELVIEW matrix (from object coordinates to eye coordinates). If lighting is enabled, a per-vertex lighting calculation is performed using the transformed vertex and normal data, and the result updates the vertex's colour. (See more details in Transformation.)
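The transform step is a plain 4x4 matrix multiply. A minimal numeric sketch (written row-major for readability; GL itself stores matrices column-major, and the translation values are arbitrary):

```python
def transform(m, v):
    """Multiply a 4x4 matrix (row-major here) by a 4-component vertex,
    as the fixed-function pipeline does with the GL_MODELVIEW matrix."""
    return tuple(sum(m[i][j] * v[j] for j in range(4)) for i in range(4))

# A modelview matrix that translates by (1, 2, 3)
modelview = [[1, 0, 0, 1],
             [0, 1, 0, 2],
             [0, 0, 1, 3],
             [0, 0, 0, 1]]

# An object-space vertex at the origin lands at (1, 2, 3) in eye space
print(transform(modelview, (0.0, 0.0, 0.0, 1.0)))  # → (1.0, 2.0, 3.0, 1.0)
```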
After the vertex operation, the primitives (points, lines, and polygons) are transformed once again by the projection matrix and then clipped against the viewing-volume clipping planes; this takes them from eye coordinates to clip coordinates. After that, perspective division by w is performed and the viewport transform is applied in order to map the 3D scene to window-space coordinates. The last thing done in primitive assembly is the culling test, if culling is enabled.
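The last two steps of that mapping, perspective division and the viewport transform, reduce to a few lines of arithmetic. A sketch with an arbitrarily chosen 640x480 viewport and the default [0, 1] depth range:

```python
def clip_to_window(clip, width, height):
    """Apply perspective division (clip -> normalized device coordinates)
    and the viewport transform (NDC -> window coordinates)."""
    x, y, z, w = clip
    # Perspective divide by w
    nx, ny, nz = x / w, y / w, z / w
    # Viewport transform: map [-1, 1] onto [0, width] x [0, height]
    wx = (nx + 1.0) * 0.5 * width
    wy = (ny + 1.0) * 0.5 * height
    depth = (nz + 1.0) * 0.5   # default glDepthRange(0, 1)
    return wx, wy, depth

# A clip-space point at the exact centre of a 640x480 viewport
print(clip_to_window((0.0, 0.0, 0.0, 2.0), 640, 480))  # → (320.0, 240.0, 0.5)
```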
After the pixels are unpacked (read) from the client's memory, scaling, bias, mapping and clamping are applied to the data. These operations are called pixel transfer operations. The transferred data are then either stored in texture memory or rasterized directly into fragments.
Texture images are loaded into texture memory to be applied onto geometric objects.
Rasterization is the conversion of both geometric and pixel data into fragments. Fragments form a rectangular array containing colour, depth, line width, point size and antialiasing calculations (GL_POINT_SMOOTH, GL_LINE_SMOOTH, GL_POLYGON_SMOOTH). If the shading mode is GL_FILL, the interior pixels (area) of each polygon are filled at this stage. Each fragment corresponds to a pixel in the frame buffer.
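What the rasterizer does for a filled triangle can be sketched with edge functions: test each pixel centre against the three edges and emit a fragment when all tests pass. This is a simplified illustration (no fill rules for shared edges, no attribute interpolation), not what real hardware literally runs:

```python
def edge(ax, ay, bx, by, px, py):
    """Signed-area test: positive when P lies to the left of edge A->B."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize_triangle(v0, v1, v2, width, height):
    """Generate the fragments covered by a counter-clockwise triangle by
    testing each pixel centre against the three edge functions."""
    frags = []
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5   # sample at the pixel centre
            w0 = edge(v1[0], v1[1], v2[0], v2[1], px, py)
            w1 = edge(v2[0], v2[1], v0[0], v0[1], px, py)
            w2 = edge(v0[0], v0[1], v1[0], v1[1], px, py)
            if w0 >= 0 and w1 >= 0 and w2 >= 0:
                frags.append((x, y))
    return frags

# A right triangle covering the lower-left half of a 4x4 pixel grid
print(rasterize_triangle((0, 0), (4, 0), (0, 4), 4, 4))  # 10 fragments
```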
This is the last stage, converting fragments into pixels in the frame buffer. The first process in this stage is texel generation: a texture element is fetched from texture memory and applied to each fragment. Then fog calculations are applied. After that, several fragment tests follow in order: scissor test ⇒ alpha test ⇒ stencil test ⇒ depth test.
Finally, blending, dithering, the logical operation and masking by a bitmask are performed, and the resulting pixel data are stored in the frame buffer.
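That test chain behaves like a series of early-out checks: the first failed test discards the fragment before it ever reaches blending. The sketch below uses simplified stand-in structures and fixed comparison functions, not the actual GL state machine:

```python
def fragment_passes(frag, state):
    """Run a fragment through the per-fragment tests in OpenGL order:
    scissor -> alpha -> stencil -> depth. Returns False at the first
    failed test, mimicking how a rejected fragment never reaches blending."""
    x, y = frag["x"], frag["y"]
    sx, sy, sw, sh = state["scissor"]
    if not (sx <= x < sx + sw and sy <= y < sy + sh):
        return False                           # scissor test
    if frag["alpha"] < state["alpha_ref"]:
        return False                           # alpha test (GEQUAL ref)
    if frag["stencil"] != state["stencil_ref"]:
        return False                           # stencil test (EQUAL ref)
    if frag["depth"] >= state["depth_buffer"][(x, y)]:
        return False                           # depth test (LESS)
    return True                                # survives on to blending

state = {"scissor": (0, 0, 640, 480), "alpha_ref": 0.1,
         "stencil_ref": 0, "depth_buffer": {(10, 10): 1.0}}
frag = {"x": 10, "y": 10, "alpha": 1.0, "stencil": 0, "depth": 0.5}
print(fragment_passes(frag, state))  # → True
```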
OpenGL can return most of its current state and information through the glGet*() and glIsEnabled() commands. Furthermore, you can read a rectangular area of pixel data from the frame buffer using glReadPixels(), and get fully transformed vertex data using glRenderMode(GL_FEEDBACK). glCopyPixels() does not return pixel data to system memory, but copies it to another part of the frame buffer, for example from the front buffer to the back buffer.