If you are building a game that uses large maps, or views that extend beyond the size of the screen, you will run into the same issues I did. These issues are most likely due to a gap in your understanding of world space vs. node space.
Let me give an example. You have a character sprite on a large map, at position (60, 10). Now, suppose you scroll the map by dragging your finger across the screen (think Angry Birds). Say you drag so that the map shifts 10 points to the left; since the character is a child of the map, it now appears at (50, 10) on the screen. If you now try to detect a touch on the sprite, will it work? Well, if it does, then you must already have configured it to work with node space coordinates. Otherwise, the answer is no. It will not work.
Why did it not work? If you touch the screen at (50, 10) and test that point against the sprite (using its bounding box, a virtual rectangle around the sprite), you will not get a hit. This is because the touch is in world space, which here means screen coordinates. On the screen you did touch (50, 10), and that is correct as far as it goes. But where is the sprite on the underlying map? We moved the map, not the character, so on the map the character is still at (60, 10). What you have to do is convert the world space coordinates into the node space coordinates of the map. Once you do, the touch coordinates become (60, 10) and the sprite's touch detection registers a hit.
The convenience methods on CCNode that do this are:

CCPoint convertToNodeSpace(const CCPoint& worldPoint)
CCPoint convertTouchToNodeSpace(CCTouch* touch)