How to improve your shadows – Understanding the light projection matrix

When using shadow mapping, the resolution of your shadow buffer is lower than that of your default color buffer, resulting in low-resolution shadows. But this effect can be somewhat mitigated by programmatically computing your light projection matrix so that it covers the minimal volume possible.

In the above images, the shadow buffer resolution is the same, but the bounds used to compute the light matrix (light_projection_matrix * light_view_matrix) differ. For the high-res shadows on the right, the light projection matrix is fitted to the bounds of the view frustum. For a directional light, this is an orthographic matrix, and the bounds of the light volume are the bounds of the viewing frustum (which can itself be perspective or orthographic). This greatly improves the effective resolution of the shadows, since the shadow map only covers the smaller portion of your scene that is actually visible. So, if you choose a reasonable shadow buffer resolution and use a bounds-aware light projection matrix, you get good effective resolution out of your shadows. The level of detail you achieve this way will vary with the zoom of your camera.
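
As a concrete illustration, here is a minimal sketch of fitting a directional light's orthographic projection to the camera frustum. I'm writing it in Java against the JOML math library purely as an assumption on my part; the class and method names are mine, not from any particular engine:

import org.joml.Matrix4f;
import org.joml.Vector3f;

public final class LightMatrixUtil {

    // Builds light_projection_matrix * light_view_matrix so that the
    // orthographic volume tightly encloses the camera's view frustum.
    public static Matrix4f fitLightToFrustum(Matrix4f cameraViewProj, Vector3f lightDir) {
        // Inverting the camera view-projection lets us recover the 8 frustum
        // corners from the corners of NDC space ([-1, 1] on every axis).
        Matrix4f invCamera = new Matrix4f(cameraViewProj).invert();

        // Orientation-only view matrix for a directional light (pick a
        // different up vector if the light is near-vertical).
        Matrix4f lightView = new Matrix4f().setLookAt(
                new Vector3f(0, 0, 0), new Vector3f(lightDir), new Vector3f(0, 1, 0));

        float minX = Float.MAX_VALUE, minY = Float.MAX_VALUE, minZ = Float.MAX_VALUE;
        float maxX = -Float.MAX_VALUE, maxY = -Float.MAX_VALUE, maxZ = -Float.MAX_VALUE;
        for (int x = -1; x <= 1; x += 2) {
            for (int y = -1; y <= 1; y += 2) {
                for (int z = -1; z <= 1; z += 2) {
                    // NDC corner -> world space (transformProject divides by w),
                    // then world space -> light space.
                    Vector3f corner = new Vector3f(x, y, z);
                    invCamera.transformProject(corner);
                    lightView.transformPosition(corner);
                    minX = Math.min(minX, corner.x); maxX = Math.max(maxX, corner.x);
                    minY = Math.min(minY, corner.y); maxY = Math.max(maxY, corner.y);
                    minZ = Math.min(minZ, corner.z); maxZ = Math.max(maxZ, corner.z);
                }
            }
        }

        // The light looks down -z in its own view space, so near/far are the
        // negated z bounds. This is the bounds-aware orthographic matrix.
        return new Matrix4f().setOrtho(minX, maxX, minY, maxY, -maxZ, -minZ)
                .mul(lightView);
    }
}

One caveat: shadow casters that lie outside the view frustum but between it and the light must still land inside this volume, so in practice you would extend the near plane to include them.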

Binding to framebuffer 0 may cause a blank screen


Today, I encountered an interesting issue that took a few hours of my (not so) precious life to debug and understand. I was implementing some basic shadow mapping, which requires you to create a new render target (meaning you need to render to a separate buffer other than the screen). So, we have to switch back and forth between framebuffers. Most OpenGL tutorials out there will simply ask you to bind back to the default framebuffer 0.

Here’s a code excerpt from the learnopengl.com site (as of 11/16/2018), from their article on shadow mapping (I love this site, and this is in no way a criticism of their content; I’m just using it to point out a probable bug). https://learnopengl.com/Advanced-Lighting/Shadows/Shadow-Mapping

// 1. first render to depth map
glViewport(0, 0, SHADOW_WIDTH, SHADOW_HEIGHT);
glBindFramebuffer(GL_FRAMEBUFFER, depthMapFBO);
glClear(GL_DEPTH_BUFFER_BIT);
ConfigureShaderAndMatrices();
RenderScene();
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// 2. then render scene as normal with shadow mapping (using depth map)
glViewport(0, 0, SCR_WIDTH, SCR_HEIGHT);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
ConfigureShaderAndMatrices();
glBindTexture(GL_TEXTURE_2D, depthMap);
RenderScene();

Binding back to framebuffer 0, as mentioned here, simply output a blank screen for me. I went through a period of commenting out all my rendering code and re-enabling it line by line (costing me half a day or more) to see what was going wrong, and found the culprit in the line glBindFramebuffer(GL_FRAMEBUFFER, 0);. Then I suddenly got an idea and executed these lines to find my actual default framebuffer (after disabling any shadow mapping code and doing a simple single-pass rendering setup):


// Query the IDs of the framebuffers currently bound for drawing and reading
glGetIntegerv(GL_DRAW_FRAMEBUFFER_BINDING, &default_draw_fbo_);
glGetIntegerv(GL_READ_FRAMEBUFFER_BINDING, &default_read_fbo_);

And to my surprise, the answer was 3. After allocating a new framebuffer for shadow mapping, this went up to 4 (weird!). I’m using Qt 5.11 as my application framework, and at first I wasn’t sure whether this was a bug or a feature. In hindsight it makes sense: a QOpenGLWidget renders into an offscreen framebuffer object of its own rather than directly to the screen, and exposes it via QOpenGLWidget::defaultFramebufferObject(). Either way, it seems that the default framebuffer cannot be assumed to be 0.

So, if you’re experiencing a blank screen when rendering anything that requires switching between framebuffers, make sure you find out what exactly your default framebuffer ID is. Then, just switch back to this known number, and all will be well.


// note: GL_FRAMEBUFFER is not deprecated (it binds both the draw and read
// targets at once); since GL 3.0 you can also bind the draw target alone:
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, default_draw_fbo_);
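
If it helps, here is that pattern wrapped up in one place. This is a minimal sketch using the Java LWJGL bindings purely for illustration (my own code is C++, and the class name is mine):

import static org.lwjgl.opengl.GL11.glGetInteger;
import static org.lwjgl.opengl.GL30.GL_DRAW_FRAMEBUFFER;
import static org.lwjgl.opengl.GL30.GL_DRAW_FRAMEBUFFER_BINDING;
import static org.lwjgl.opengl.GL30.GL_FRAMEBUFFER;
import static org.lwjgl.opengl.GL30.glBindFramebuffer;

public final class FramebufferGuard {

    private final int defaultDrawFbo;

    // Construct while the framework's default render target is bound,
    // e.g. right after context setup, before any FBO switching.
    public FramebufferGuard() {
        defaultDrawFbo = glGetInteger(GL_DRAW_FRAMEBUFFER_BINDING);
    }

    // Bind an off-screen target (e.g. the shadow map FBO).
    public void bind(int fbo) {
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    }

    // Restore the queried default instead of assuming it is 0.
    public void restoreDefault() {
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, defaultDrawFbo);
    }
}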


HTH.

The feedback cycle and runtime governance

Introduction

Runtime governance can be defined as the process that allows you to control and manage the parameters of your runtime execution environment. Such an environment can vary from a single web server hosting a simple web page to gigantic deployments spanning 1000+ servers, which means the complexity of implementing runtime governance depends heavily on how complex the runtime environment actually is. A feedback cycle allows you to continuously get feedback from the runtime system so you can govern it more effectively. This article briefly explains why a feedback cycle is important to the runtime governance process.

The feedback cycle

The feedback cycle defines a model that is common to any runtime execution environment. Its four stages apply to any environment, regardless of size.

[Figure: the general feedback cycle]

Each stage is elaborated on below:

1. Gather data

The gathering of data is the starting point of the feedback cycle. Data can be distributed among many points in a runtime environment. Let’s consider a deployment consisting of web servers. If it is a clustered deployment, every web server is a potential data collection point. The other option, if there is a load balancer in front of the cluster, is to use the LB as the data collection point. But this might impact the performance of the load balancer, so depending on how much the requests-per-second rate is affected, the performance cost may need to be compensated for with additional LBs.

[Figure: a load balancer in front of a cluster of web servers]

The second question to ponder is what type of data to collect. Typically, the more data you collect, the better. This might vary from the CPU cycles consumed by the servers to the HTTP headers of all requests. All types of data can be used to generate some sort of useful information.

2. Slice and Dice

After gathering data, the second part of the cycle is to generate useful information by slicing and dicing the data. Real-time analysis may be needed to prevent imminent security threats; for example, a 30-second window may be enough for multiple IPs to send enough requests to overwhelm a medium-sized website, so detection has to happen on a similar timescale (see the sketch below). Batch-based analytics may be needed for trend analysis over longer timespans. A combination of real-time and batch-based analytics seems to be the most viable option for generating useful information both quickly and over long periods. There are various tools in the complex event processing and data analytics landscapes that let you perform such analysis rapidly.
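
As a toy illustration of the real-time side (the class name, method names and threshold below are my own, not from any particular CEP tool), a sliding 30-second window of request counts per IP might look like this:

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

/** Counts requests per IP over a sliding 30-second window and flags floods. */
public class RequestFloodDetector {

    private static final long WINDOW_MILLIS = 30_000;
    private static final int THRESHOLD = 1_000; // requests per window, illustrative

    private final Map<String, Deque<Long>> timestampsByIp = new HashMap<>();

    /** Records a request; returns true if this IP exceeded the window threshold. */
    public synchronized boolean record(String ip, long nowMillis) {
        Deque<Long> timestamps =
                timestampsByIp.computeIfAbsent(ip, k -> new ArrayDeque<>());
        timestamps.addLast(nowMillis);
        // Evict timestamps that have fallen out of the 30-second window.
        while (!timestamps.isEmpty()
                && nowMillis - timestamps.peekFirst() > WINDOW_MILLIS) {
            timestamps.removeFirst();
        }
        return timestamps.size() > THRESHOLD;
    }
}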

3. Evaluate Information

The third step of the cycle deals with the fact that each piece of information can present some vital insight about your runtime environment. Say you notice that website visits are doubling each month. At this point, it should be evaluated whether this is just a temporary trend or the effect of recent improvements. Or you may notice increasing downtime among your servers; maybe this is related to some sort of attack, or to unreliable hardware. Usually, domain knowledge and solution architecture expertise need to be heavily utilized to draw these insights, as they may lead to heavy resource investments.

4. Adjust parameters

The final step of the cycle bridges the feedback cycle to runtime governance. This is where you tune your parameters to govern your environment effectively. The insights from the previous step may mean you need more server capacity, or additional measures to boost security and strengthen the site’s resilience. Policies can be altered, introduced or decommissioned based on the information given by the feedback cycle.

Conclusion

Based on information gathered from the feedback cycle, the effectiveness of the runtime environment can be judged and the environment adjusted accordingly. The cycle even gives you an idea of the missing components needed for more effective runtime governance. Feedback cycles can also be stretched beyond typical runtime governance applications, to areas such as server uptime trends, API analytics and business activity monitoring, to gain more insight into a business and its trends.

Migrate code styles to Intellij Idea 11 on a Mac

If you are migrating between Idea versions, you may want to migrate your code styles as well.

Let’s assume you want to migrate a code style called foo-codestyle.xml. In Idea 9.x versions this is present at ~/.IntelliJIdea90/config/codestyles/foo-codestyle.xml

Now, open up Idea 11 and go to Settings->Code Style, click on Manage, and create a code style named foo-codestyle. Now close Idea, and copy the earlier file over ~/Library/Preferences/IntelliJIdea11/codestyles/foo_codestyle.xml. Notice the underscore (_) in the new file name, instead of the hyphen (-).

Restart Idea and you should see the original code style settings under foo-codestyle.

Why write unit tests?

It was not until after a few years of being a dev that I understood why you need good unit tests. Unit tests are usually a pain, or so I thought. Why do you need to test code that you have already verified as working?

The problem comes at maintenance time, and all code goes through maintenance, either by you or by someone else. Unit tests are a superhero when it comes to making sure a change does not break functionality. I understood this the hard way; I hope you don’t have to.

Here are some more advantages, that I personally like about unit testing.

  • You don’t have to build other components to discover that basic functionality has broken.
  • The code naturally improves, using proper interfaces to accommodate unit tests.
  • If basic functionality is broken, you know immediately.
  • You do not need to finish features or copy jars and dlls around to know whether your code works properly.

If you’re a Java dev, here’s a great 60-second tutorial to start you off with JUnit 4.
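
To give a flavour of how little ceremony is involved, a JUnit 4 test is just an annotated method. A minimal sketch (the Calculator class below is hypothetical):

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class CalculatorTest {

    @Test
    public void addReturnsTheSumOfItsArguments() {
        Calculator calculator = new Calculator(); // hypothetical class under test
        assertEquals(5, calculator.add(2, 3));
    }
}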

Running FindBugs/Code Inspections = Saving countless dev hours

I wanted to write this post after learning so much from running FindBugs and Idea code inspections on my code over a period of time.

There was once a simple piece of code I wrote, after seeing some performance gaps, to introduce some local caching. So I used a simple HashMap implementation, which really couldn’t go wrong. Or so I thought. We put this up in the running system after some quick initial tests, and then it started to give all sorts of problems. The culprit was a piece of code like this:

private Resource getSubscriptionResource(Registry registry, String endpointUUIDPath)
        throws RegistryException {
    Resource subscriptionEndpoint = null;
    if (registryResourceCache.containsKey(endpointUUIDPath)) {
        subscriptionEndpoint = registryResourceCache.get(subscriptionEndpoint);
    } else {
        subscriptionEndpoint = registry.get(endpointUUIDPath);
        registryResourceCache.put(endpointUUIDPath, subscriptionEndpoint);
    }
    return subscriptionEndpoint;
}

After this piece of code was running in the system, it started to give all sorts of errors (obviously), and I had to revert the patch, go through the code to figure out the problem, involve proper QA to verify the functionality, and finally apply the fixed patch to the running system. This cost us maybe 4 to 5 hours of dev/QA time. Of course, the blunder is obvious now, but it was not at the time.

If I had taken 5 minutes to run Idea code inspections over my code, it would have flagged the lookup registryResourceCache.get(subscriptionEndpoint): it passes the (still null) value as the key instead of endpointUUIDPath, so the cache could never return a hit.

This simple inspection would have saved me so much of my time and my colleagues’ time as well.
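
For reference, the fixed lookup simply uses the key:

if (registryResourceCache.containsKey(endpointUUIDPath)) {
    // look up by the key, not by the (still null) value
    subscriptionEndpoint = registryResourceCache.get(endpointUUIDPath);
}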

FindBugs is a tool that can analyze your Java code and show you probable bugs, bad practices, performance bottlenecks and anti-patterns that look perfectly fine to the naked eye. Many of us feel uneasy about fixing these reported issues, as they seem unnecessary and tend to mess up our code. But those very little issues come back later as nasty bugs that bite you right where it hurts. One of the lessons I learnt from running FindBugs on my code is that double-checked locking can introduce some really weird behavior in multi-core, multi-threaded systems.

Here is what I saw: FindBugs flagged my double-checked locking as unsafe, and even provided a great link that expanded my knowledge of the subject (http://www.cs.umd.edu/~pugh/java/memoryModel/DoubleCheckedLocking.html). Now I know how to use the pattern properly, and to avoid it where I can.
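
For completeness, here is the variant that link describes as safe on the Java 5+ memory model. A minimal sketch; the ConnectionManager class is hypothetical:

public class ConnectionManager {

    // volatile is what makes double-checked locking safe under the
    // Java 5+ memory model; without it the pattern is broken.
    private static volatile ConnectionManager instance;

    private ConnectionManager() { }

    public static ConnectionManager getInstance() {
        ConnectionManager result = instance; // one volatile read on the fast path
        if (result == null) {
            synchronized (ConnectionManager.class) {
                result = instance;
                if (result == null) {
                    instance = result = new ConnectionManager();
                }
            }
        }
        return result;
    }
}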

I learnt these lessons the hard way; you don’t have to. Use FindBugs, Idea inspections or any other code analysis tool before you actually commit your code. It will surely save tons of your time, while giving you amazing knowledge of better coding practices.