View Matrix in OpenGL: Why Do We Move the World Instead of the Camera

Why?

Because a camera represents a projection view.

In the case of a 3D camera (a virtual camera), however, the camera appears to move instead of the world. I give a detailed explanation of this later in the post.

Understanding Mathematically

The projection view moves around in space and changes its orientation. The first thing to notice is that the desired projection on the screen does not change with the view direction.

For this reason, we transform everything else to get the desired projection.

Understanding From

To give the appearance of moving the camera, your OpenGL application must move the scene with the inverse of the camera transformation. As far as OpenGL is concerned, there is no camera. More specifically, the camera is always located at the eye-space coordinate (0, 0, 0).

Understanding From

I also want to share the following lines from the view-matrix discussion:

To simulate a camera transformation, you actually have to transform the world with the inverse of that transformation. Example: if you want to move the camera up, you have to move the world down instead.

Understanding by perspective

In the real world, we see things in a way that is called “perspective”.

Perspective refers to the concept that objects that are farther away appear to be smaller than those that are closer to you. Perspective also means that if you are sitting in the middle of a straight road, you actually see the borders of the road as two converging lines.

That’s perspective. Perspective is critical in 3D projects. Without perspective, the 3D world doesn’t look real.

While this may seem natural and obvious, it’s important to consider that when you create a 3D rendering on a computer you are attempting to simulate a 3D world on the computer screen, which is a 2D surface.

Imagine that behind the computer screen there is a real 3D scene of sorts, and you are watching it through the “glass” of your computer screen. Using perspective, your goal is to create code that renders what gets “projected” on this “glass” of your screen as if there was this real 3D world behind the screen. The only caveat is that this 3D world is not real…it’s just a mathematical simulation of a 3D world.

So, when using 3D rendering to simulate a scene in 3D and then projecting the 3D scene onto the 2D surface of your screen, the process is called perspective projection.

Begin by envisioning intuitively what you want to achieve. If an object is closer to the viewer, the object must appear to be bigger. If the object is farther away, it must appear to be smaller. Also, if an object is traveling away from the viewer, in a straight line, you want it to converge towards the center of the screen, as it moves farther off into the distance.

Translating perspective into math

As you view the illustration in the following figure, imagine that an object is positioned in your 3D scene. In the 3D world, the position of the object can be described as (xW, yW, zW), referring to a 3D coordinate system with its origin at the eye point. That's where the object is actually positioned, in the 3D scene beyond the screen.

[Figure: a 3D point (xW, yW, zW) projected onto the 2D projection plane as (xP, yP)]

As the viewer watches this object on the screen, the 3D object is “projected” to a 2D position described as xP and yP, which references the 2D coordinate system of the screen (projection plane).

To put these values into a mathematical formula, I’ll use a 3D coordinate system for world coordinates, where the x axis points to the right, y points up, and positive z points inside the screen. The 3D origin refers to the location of the viewer’s eye. So, the glass of the screen is on a plane orthogonal (at right angles) to the z-axis, at some z that I’ll call zProj.

You can calculate the projected positions xP and yP by dividing the world positions xW and yW by zW, like this:

xP = K1 * xW / zW
yP = K2 * yW / zW

K1 and K2 are constants that are derived from geometrical factors such as the aspect ratio of your projection plane (your viewport) and the “field of view” of your eye, which takes into account the degree of wide-angle vision.

You can see how this transform simulates perspective. Points near the sides of the screen get pushed toward the center as the distance from the eye (zW) increases. At the same time, points closer to the center (0,0) are much less affected by the distance from the eye and remain close to the center.

This division by z is the famous “perspective divide.”

Now, consider that an object in the 3D scene is defined as a series of vertices. So, by applying this kind of transform to all vertices of geometry, you effectively ensure that the object will shrink when it’s farther away from the eye point.

In the next section, you'll translate this perspective projection formula into ActionScript that you can use in your Flash 3D projects.

Other Important Cases

  • In case of 3D Camera (Virtual Camera), camera moves instead of world.

To get a better understanding of 3D cameras, imagine you are shooting a movie. You have to set up a scene that you want to shoot and you need a camera. To get the footage, you’ll roam through the scene with your camera, shooting the objects in the scene from different angles and points of view.

The same filming process occurs with a 3D camera. You need a “virtual” camera, which can roam around the “virtual” scene that you have created.

Two popular shooting styles involve watching the world through a character’s eyes (also known as a first person camera) or pointing the camera at a character and keeping them in view (known as a third person camera).

This is the basic premise of a 3D camera: a virtual camera that you can use to roam around a 3D scene, and render the footage from a specific point of view.

Understanding world space and view space

To code this kind of behavior, you’ll render the contents of the 3D world from the camera’s point of view, not just from the world coordinate system point of view, or from some other fixed point of view.

Generally speaking, a 3D scene contains a set of 3D models. The models are defined as a set of vertices and triangles, referenced to their own coordinate system. The space in which the models are defined is called the model (or local) space.

After placing the model objects into a 3D scene, you’ll transform these models’ vertices using a “world transform” matrix. Each object has its own world matrix that defines where the object is in the world and how it is oriented.

This new reference system is called “world space” (or global space). A simple way to manage it is by associating a world transform matrix to each object.

In order to implement the behavior of a 3D camera, you'll need to perform additional steps: you'll reference the world not to the world origin, but to the reference system of the 3D camera itself.

A good strategy is to treat the camera as an actual 3D object in the 3D world. Like any other 3D object, the camera gets a "world transform" matrix that places it at the desired position and orientation. This camera world transform matrix transforms the camera object from its original forward-looking orientation (along the z-axis) to its actual world position (xc, yc, zc) and world rotation.

The following figure shows the relationships between the world (x, y, z) coordinate system and the view (camera) (x’, y’, z’) coordinate system.

[Figure: the world (x, y, z) and view/camera (x’, y’, z’) coordinate systems]

Unit Testing in the Cocos2d-x Game Engine

Several unit-testing frameworks exist for C++-based projects:

  • CxxTest
  • Boost Test
  • UnitTest++
  • googletest
  • MSTest
  • NUnit (a .NET framework, not native C++)

To work with Cocos2d-x, a multi-platform 2D game engine, I chose UnitTest++.

Because UnitTest++ is:

  • A C++ unit-testing framework designed with game development in mind.
  • Lightweight.
  • Easy to integrate, minimal work required to create a new test.
  • Covers the major unit-testing features.
  • No dependency on a monolithic project folder structure.
  • Minimal footprint and minimal reliance on heavy libraries.
  • Good assert and crash handling.
  • No dynamic memory allocations done by the framework, which makes it much easier to track memory leaks and generally more attractive for embedded systems.

The driving forces behind the design of UnitTest++ are:

  • Portability. As game developers, we need to write tests for a variety of platforms, most of which are not supported by normal software packages (all the game consoles). So the ability to easily port the framework to a new platform was very important.
  • Simplicity. The simpler the framework, the easier it is to add new features or adapt it to meet new needs, especially in very limited platforms.
  • Development speed. Writing and running tests should be as fast and straightforward as possible. We’re going to be running many tests hundreds of times per day, so running the tests should be fast and the results well integrated with the workflow.


Download UnitTest++ Framework from here.


How to put all the images from a game into one file?

Game programmers have relied on one of two main methods of data storage:

  • store each data file as a separate file
  • store each data file in a custom archive format

The drawback of the first solution is wasted disk space, as well as slower installation.

The second solution has its own pitfalls. First, you must write all your own image/sound/etc. loading routines, which use a custom API for accessing the archived data. A further drawback is that you have to write your own archive utility to build the archives in the first place.

Unless you will always load all files from the archive, TAR/GZ might not be a very good idea, because you cannot extract specific files as you need them. This is the reason many games use ZIP archives, which do allow you to extract individual files as required (a good example is Quake 3, whose PK3 files are nothing but ZIP files with a different extension).

“Hide” the game folder structure and keep only the executables

Another commonly used solution is to “hide” the game files in the folder structure: keep only your executables and maybe a readme file in the main directory, and move the game files into a subfolder named “data” or similar.

Gamedev Tuts Plus has a nice resource on this.


One potential solution is libarchive, an archiving library that handles extracting files from an archive such as a ZIP file. It even allows you to assign the extracted file to a standard FILE pointer, which makes interfacing with other libraries potentially more straightforward.