- 2013.02.25 | Camera Calibration
- 2013.01.12 | IMU
- 2012.01.11 | When the error "Unsupported compiler 'GCC 4.2' selected for architecture 'armv7'" occurs
- 2011.09.28 | The Future of WebGL
- 2011.08.03 | Apple iPad 2, GPU Performance
- 2011.06.05 | iPhone4 AR with Gyroscope
- 2011.05.31 | AR in Web - by Unity Engine
- 2010.07.13 | Direct3D Graphics Pipeline
- 2010.07.13 | OpenGL Rendering Pipeline
- 2010.05.12 | Red-Black Tree Animation
Calculating Gravitational Acceleration on Android
As you probably remember from your physics class, position, velocity and acceleration are related to each other: differentiating the position gives us the velocity:
dx/dt = v_x

with x being the position on the x-axis and v_x being the velocity along the x-axis.
Maybe less obvious, the same holds for angles. While velocity is the speed at which the position is changing, angular rate is nothing more than the speed at which the angle is changing. That's right:
dα/dt = angular rate = gyroscope output
with α being the angle. It's starting to look pretty good! Knowing that the inverse of differentiation (d/dt) is integration (∫), we change our formulas into:
∫ angular rate dt = ∫ gyroscope output dt = α
Woohoo, we found a relation between angle (attitude!) and our gyroscope's output: integrating the gyroscope output gives us our attitude angle.
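The integration above can be sketched numerically. Assuming the gyroscope is sampled at a fixed rate (the 100 Hz rate and the constant-rotation samples below are made-up values for illustration), simple rectangular (Euler) integration accumulates the angle:

```python
def integrate_gyro(rates_dps, dt):
    """Accumulate an angle in degrees from angular-rate samples (Euler integration)."""
    angle = 0.0
    for rate in rates_dps:
        angle += rate * dt  # d(alpha) = angular_rate * dt
    return angle

# One second of a constant 90 deg/s rotation, sampled at 100 Hz:
samples = [90.0] * 100
print(integrate_gyro(samples, dt=0.01))  # ≈ 90 degrees
```

In practice the gyroscope output also contains a drifting bias, so this raw integral slowly walks away from the true angle; that is exactly the problem the Kalman filter below addresses.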
Kalman Filter Resources
About Kalman Filter
In the tutorial on gyroscopes, we saw that the bias drifts. Well, here comes the Kalman magic: the filter adjusts the bias in each iteration by comparing the result with the accelerometer's output (our second input)! Great!
kalman filter source code
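A minimal sketch of the idea, assuming the common two-state filter (angle and gyro bias) used in IMU tutorials; the noise constants are illustrative defaults, not tuned values, and this is one possible implementation rather than the linked source code:

```python
class KalmanAngle:
    """Two-state Kalman filter: estimates an angle and the gyro bias.
    Predicts with the gyro rate, corrects with the accelerometer's angle."""

    def __init__(self, q_angle=0.001, q_bias=0.003, r_measure=0.03):
        self.angle = 0.0  # filtered angle estimate
        self.bias = 0.0   # estimated gyro bias (the drift)
        self.P = [[0.0, 0.0], [0.0, 0.0]]  # error covariance
        self.q_angle, self.q_bias, self.r = q_angle, q_bias, r_measure

    def update(self, gyro_rate, accel_angle, dt):
        # Predict: integrate the bias-corrected gyro rate.
        self.angle += dt * (gyro_rate - self.bias)
        P = self.P
        P[0][0] += dt * (dt * P[1][1] - P[0][1] - P[1][0] + self.q_angle)
        P[0][1] -= dt * P[1][1]
        P[1][0] -= dt * P[1][1]
        P[1][1] += dt * self.q_bias
        # Correct: compare the prediction with the accelerometer angle.
        y = accel_angle - self.angle       # innovation
        s = P[0][0] + self.r               # innovation covariance
        k0, k1 = P[0][0] / s, P[1][0] / s  # Kalman gain
        self.angle += k0 * y
        self.bias += k1 * y                # the bias is adjusted each iteration
        p00, p01 = P[0][0], P[0][1]
        P[0][0] -= k0 * p00
        P[0][1] -= k0 * p01
        P[1][0] -= k1 * p00
        P[1][1] -= k1 * p01
        return self.angle

kf = KalmanAngle()
for _ in range(3000):
    # Device at rest: the gyro reads a pure 5 deg/s bias, the accelerometer reads 0.
    kf.update(gyro_rate=5.0, accel_angle=0.0, dt=0.01)
# kf.bias has converged toward the injected 5 deg/s drift, and kf.angle stays near 0.
```

This is the comparison described above in code: the accelerometer's angle is the measurement, the innovation `y` is the disagreement, and the second gain component `k1` feeds that disagreement back into the bias estimate.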
When the error "Unsupported compiler 'GCC 4.2' selected for architecture 'armv7'" occurs
Unsupported compiler 'GCC 4.2' selected for architecture 'armv7'
This error came up. The fix is as follows:
In the project's Build Settings, under Build Options, change the Compiler for C/C++/Objective-C
to any compiler other than GCC 4.2.
In the graphic that appears partway through, you can drag the mouse left and right.
From a tweet by ＠choijaekyu,
I saw him write that a time seems to be coming when those who cannot do 3D programming in Java will be left behind,
and now I think I understand why.
If I don't start now, it really will be too late.
It won't be long before what Steve Jobs called "Software in the beautiful box" applies to every piece of hardware.
How can I respond wisely?
Apple iPad 2, GPU Performance
Apple iPad vs. iPad 2
Benchmark tests run on the Apple iPad (PowerVR SGX 535) and the Apple iPad 2 (PowerVR SGX 543MP2):
- Array test - uniform array access
- Branching test - balanced / fragment weighted / vertex weighted
- Common test - balanced / fragment weighted / vertex weighted
- Geometric test - balanced / fragment weighted / vertex weighted
- Exponential test - balanced / fragment weighted / vertex weighted
- Fill test - texture fetch
- For loop test - balanced / fragment weighted / vertex weighted
- Triangle test - textured / textured, fragment lit / textured, vertex lit / white
- Trigonometric test - balanced / fragment weighted / vertex weighted
Apple iPad2 Acceleration Sensor, Gyroscope
iPhone4 AR with Gyroscope
This video starkly shows the difference between the same app running on the iPhone 3GS using only the digital compass
and on the iPhone 4 using the gyro.
<Reference: http://brucemoon.net/1198141647>
The biggest difference between the gyro sensor and the accelerometer is that the accelerometer only senses motion relative to gravity,
so it cannot detect rotation parallel to the ground, whereas the gyro can.
<See 128bit's answer to the post "Are the accelerometer and the gyro sensor different?" on the Naver 맥부기 community>
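The point above can be checked with a few lines of vector math: rotating a device about the vertical (gravity) axis leaves the accelerometer's gravity reading unchanged, so that rotation is invisible to the accelerometer, while a gyroscope would report the rotation rate directly. A sketch with idealized values:

```python
import math

def rotate_about_z(v, angle_rad):
    """Rotate a 3-vector about the vertical z-axis."""
    x, y, z = v
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return (c * x - s * y, s * x + c * y, z)

gravity = (0.0, 0.0, -9.81)  # accelerometer reading of a device at rest (m/s^2)

# Yaw the device 90 degrees parallel to the ground:
yawed = rotate_about_z(gravity, math.pi / 2)
print(yawed)  # (0.0, 0.0, -9.81): the reading is unchanged, so the rotation is invisible
```

Any tilt away from the vertical, by contrast, moves gravity into the x/y components, which is why the accelerometer can still measure pitch and roll.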
AR in Web - by Unity Engine
Direct3D Graphics Pipeline
Source: http://msdn.microsoft.com/en-us/library/bb219679(v=VS.85).aspx#Direct3D_Graphics_Pipeline
This topic provides two high-level views of the architecture of Direct3D:
- Direct3D Graphics Pipeline - A view of the internal processing architecture of the Direct3D rendering system.
- Direct3D System Integration - A view of how Direct3D mediates between an application and the graphics hardware.
The graphics pipeline provides the horsepower to efficiently process and render Direct3D scenes to a display, taking advantage of available hardware. The following diagram shows the building blocks of the pipeline:
Diagram of the Direct3D graphics pipeline
| Pipeline Component | Description | Related Topics |
|---|---|---|
| Vertex Data | Untransformed model vertices are stored in vertex memory buffers. | Vertex Buffers (Direct3D 9), IDirect3DVertexBuffer9 |
| Primitive Data | Geometric primitives, including points, lines, triangles, and polygons, are referenced in the vertex data with index buffers. | Index Buffers (Direct3D 9), IDirect3DIndexBuffer9, Primitives, Higher-Order Primitives (Direct3D 9) |
| Tessellation | The tessellator unit converts higher-order primitives, displacement maps, and mesh patches to vertex locations and stores those locations in vertex buffers. | Tessellation (Direct3D 9) |
| Vertex Processing | Direct3D transformations are applied to vertices stored in the vertex buffer. | Vertex Pipeline (Direct3D 9) |
| Geometry Processing | Clipping, back face culling, attribute evaluation, and rasterization are applied to the transformed vertices. | Pixel Pipeline (Direct3D 9) |
| Textured Surface | Texture coordinates for Direct3D surfaces are supplied to Direct3D through the IDirect3DTexture9 interface. | Direct3D Textures (Direct3D 9), IDirect3DTexture9 |
| Texture Sampler | Texture level-of-detail filtering is applied to input texture values. | Direct3D Textures (Direct3D 9) |
| Pixel Processing | Pixel shader operations use geometry data to modify input vertex and texture data, yielding output pixel color values. | Pixel Pipeline (Direct3D 9) |
| Pixel Rendering | Final rendering processes modify pixel color values with alpha, depth, or stencil testing, or by applying alpha blending or fog. All resulting pixel values are presented to the output display. | Pixel Pipeline (Direct3D 9) |
The following diagram shows the relationships between a Windows application, Direct3D, GDI, and the hardware:
Diagram of the relationship between Direct3D and other system components
Direct3D exposes a device-independent interface to an application. Direct3D applications can exist alongside GDI applications, and both have access to the computer's graphics hardware through the device driver for the graphics card. Unlike GDI, Direct3D can take advantage of hardware features by creating a HAL device.
A HAL device provides hardware acceleration to graphics pipeline functions, based upon the feature set supported by the graphics card. Direct3D methods are provided to retrieve device display capabilities at run time (see GetDeviceCaps). If a capability is not provided by the hardware, the HAL does not report it as a hardware capability.
For more information about HAL and reference devices supported by Direct3D, see Device Types (Direct3D 9).
OpenGL Rendering Pipeline
Source: http://www.songho.ca/opengl/gl_pipeline.html
The OpenGL pipeline is a series of processing stages executed in order. Two kinds of graphical data, vertex-based and pixel-based, are processed through the pipeline, combined, and then written into the frame buffer. Notice that OpenGL can also send the processed data back to your application (the grey lines in the original diagram).
Display List
A display list is a group of OpenGL commands that have been stored (compiled) for later execution. All data, both geometry (vertex) and pixel data, can be stored in a display list. Display lists may improve performance since commands and data are cached. When an OpenGL program runs over a network, display lists reduce data transmission: since display lists are part of the server state and reside on the server machine, the client machine needs to send commands and data to the server's display list only once. (See more details in Display List.)
Vertex Operation
Each vertex and its normal coordinates are transformed by the GL_MODELVIEW matrix (from object coordinates to eye coordinates). If lighting is enabled, a lighting calculation is performed per vertex using the transformed vertex and normal data, which updates the vertex's color. (See more details in Transformation.)
Primitive Assembly
After the vertex operation, the primitives (points, lines, and polygons) are transformed once more by the projection matrix and clipped against the viewing-volume clipping planes (from eye coordinates to clip coordinates). After that, perspective division by w is performed and the viewport transform is applied to map the 3D scene to window-space coordinates. The last step in primitive assembly is the culling test, if culling is enabled.
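The chain of transforms just described (modelview → projection → perspective division → viewport) can be traced for a single vertex with plain matrix math. The matrices below are made-up stand-ins for the GL_MODELVIEW and GL_PROJECTION state: an identity modelview and the projection produced by glFrustum(-1, 1, -1, 1, 1, 3):

```python
import numpy as np

def vertex_to_window(v_obj, modelview, projection, viewport):
    """Follow one vertex through the fixed-function pipeline:
    object -> eye -> clip -> NDC -> window coordinates."""
    eye = modelview @ v_obj        # object coordinates -> eye coordinates
    clip = projection @ eye        # eye coordinates -> clip coordinates
    ndc = clip[:3] / clip[3]       # perspective division by w
    x0, y0, w, h = viewport        # viewport transform to window space
    wx = x0 + (ndc[0] + 1.0) * w / 2.0
    wy = y0 + (ndc[1] + 1.0) * h / 2.0
    depth = (ndc[2] + 1.0) / 2.0   # depth mapped to [0, 1]
    return float(wx), float(wy), float(depth)

mv = np.eye(4)  # identity GL_MODELVIEW
# GL_PROJECTION for glFrustum(-1, 1, -1, 1, near=1, far=3):
proj = np.array([[1.0, 0.0,  0.0,  0.0],
                 [0.0, 1.0,  0.0,  0.0],
                 [0.0, 0.0, -2.0, -3.0],
                 [0.0, 0.0, -1.0,  0.0]])
v = np.array([0.0, 0.0, -2.0, 1.0])  # a point 2 units in front of the eye
print(vertex_to_window(v, mv, proj, (0, 0, 640, 480)))  # (320.0, 240.0, 0.75)
```

The point on the view axis lands at the center of the 640x480 window, with a depth of 0.75 since it sits two thirds of the way between the near and far planes in depth-buffer terms.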
Pixel Transfer Operation
After the pixels are unpacked (read) from the client's memory, scaling, biasing, mapping, and clamping are applied to the data. These operations are called pixel transfer operations. The transferred data are either stored in texture memory or rasterized directly into fragments.
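The scale, bias, and clamp steps can be sketched in a few lines, assuming color components normalized to [0, 1] as in glPixelTransfer (the sample values and factors below are arbitrary):

```python
def pixel_transfer(components, scale=1.0, bias=0.0):
    """Apply scale, then bias, then clamp each color component to [0, 1],
    mimicking the GL pixel-transfer stage."""
    return [min(max(c * scale + bias, 0.0), 1.0) for c in components]

# Double the contrast and darken slightly; out-of-range results are clamped.
print(pixel_transfer([0.2, 0.5, 0.9], scale=2.0, bias=-0.1))
```

The third component here exceeds 1.0 after scaling, so the clamp step pins it to 1.0 before the data continues to texture memory or rasterization.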
Texture Memory
Texture images are loaded into texture memory to be applied onto geometric objects.
Rasterization
Rasterization is the conversion of both geometric and pixel data into fragments. The result is a rectangular array of fragments, each carrying color, depth, line width, point size, and antialiasing calculations (GL_POINT_SMOOTH, GL_LINE_SMOOTH, GL_POLYGON_SMOOTH). If the shading mode is GL_FILL, the interior pixels of a polygon are filled at this stage. Each fragment corresponds to a pixel in the frame buffer.
Fragment Operation
This is the last series of processes, converting fragments into pixels in the frame buffer. The first process in this stage is texel generation: a texture element is generated from texture memory and applied to each fragment. Then fog calculations are applied. After that, several fragment tests follow in order: scissor test ⇒ alpha test ⇒ stencil test ⇒ depth test.
Finally, blending, dithering, logical operations, and masking by a bitmask are performed, and the resulting pixel data are stored in the frame buffer.
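The fixed order of the fragment tests can be sketched as a chain in which any failing test discards the fragment before later stages run. The predicate functions and fragment fields here are illustrative stand-ins for the GL test state, not real GL calls:

```python
def process_fragment(frag, tests):
    """Run the per-fragment tests in their fixed order; a fragment that
    fails any test is discarded before later tests (and blending) run."""
    for name, test in tests:
        if not test(frag):
            return f"discarded by {name}"
    return "written to frame buffer"

# Illustrative stand-ins for the GL test state:
tests = [
    ("scissor test", lambda f: 0 <= f["x"] < 640 and 0 <= f["y"] < 480),
    ("alpha test",   lambda f: f["alpha"] > 0.0),
    ("stencil test", lambda f: f["stencil_ok"]),
    ("depth test",   lambda f: f["depth"] < f["stored_depth"]),
]

frag = {"x": 100, "y": 100, "alpha": 0.5, "stencil_ok": True,
        "depth": 0.3, "stored_depth": 0.6}
print(process_fragment(frag, tests))  # written to frame buffer
```

A fragment that is farther away than the stored depth value (e.g. `depth=0.9` against `stored_depth=0.6`) makes it through the first three tests but is rejected by the depth test, so blending never runs for it.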
Feedback
OpenGL can return most of its current state and information through the glGet*() and glIsEnabled() commands. Furthermore, you can read a rectangular area of pixel data from the frame buffer using glReadPixels(), and retrieve fully transformed vertex data using glRenderMode(GL_FEEDBACK). glCopyPixels() does not return pixel data to system memory, but copies it to another part of the frame buffer, for example from the front buffer to the back buffer.