April 21, 2014

[Week 12-13] Final Blog Post

Hello!
Classes have finished and exams are here. I can honestly say that, as challenging as it was, this course has been one of my favorites over the last two years here. As the final exam for this course is tomorrow, this will be my final blog post, covering the most interesting things I have learned over the semester, past events such as the UOIT GameCon and LevelUp Showcase, as well as my personal exam prep.
Over the last four months I have learned a great deal, and a few things really stood out to me. As simple as it is, normal/displacement mapping really interested me: creating high-quality-looking models from low-poly meshes. The same goes for ambient occlusion, which approximates non-directional lighting throughout the scene, and shadow mapping, which tests whether a pixel is visible from a light source. After learning how these three techniques work, I have noticed them far more often in the video games I play.
Batman Mesh -  Source
A few weeks ago, two game dev events took place: the LevelUp Showcase in downtown Toronto on April 4th, and the GameCon here at UOIT on April 7th. The LevelUp Showcase consisted of student-created projects only, from 16 different colleges/universities. I attended to support my classmates who were showcasing their games, as well as to check out what students from other schools were creating. At the GameCon event, studios from all years of game dev had the chance to present their game for others to play. My studio, Robopocalypse, presented our game Robopocalypse, a 2.5D fast-paced fighting game. We got lots of feedback, and it seemed that most people enjoyed our game. There were many great projects this year, and I am definitely looking forward to what will be created next year.
Robopocalypse
As for the exam tomorrow, I have been studying for the past week, going over concepts that I was having trouble with, and I now feel ready to write it. One topic I had to revisit was convolution kernel filtering, which cost me many marks on the midterm. After going through NVIDIA's GPU Gems section on this topic and discussing it with friends, I now believe I understand it well enough to be tested on it.
Convolution Kernel Filter
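As a sanity check for myself, the core idea can be sketched in a few lines of Python. This is a minimal illustration (the test image, kernel, and function names are my own, not from GPU Gems): slide the kernel over every pixel and sum the weighted neighbours.

```python
def convolve(image, kernel):
    """Apply a square convolution kernel; border pixels are left unchanged."""
    kh = len(kernel) // 2            # kernel half-width (1 for a 3x3 kernel)
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]  # copy so borders keep their values
    for y in range(kh, h - kh):
        for x in range(kh, w - kh):
            total = 0.0
            for j in range(-kh, kh + 1):
                for i in range(-kh, kh + 1):
                    total += image[y + j][x + i] * kernel[j + kh][i + kh]
            out[y][x] = total
    return out

# 3x3 box blur: every neighbour contributes equally, weights sum to 1
box = [[1 / 9] * 3 for _ in range(3)]

image = [
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 9, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
]
blurred = convolve(image, box)
# the single bright pixel gets spread evenly over its 3x3 neighbourhood
```

Swapping the kernel weights is all it takes to get a different effect (edge detection, sharpening, etc.), which is what finally made the topic click for me.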
Thank you for reading my blog!
- Jonathan

March 30, 2014

[Week 9-11] End of Semester Update

Hello!

With the school year coming to a close, the last few weeks have been fairly busy.
I have not been able to blog as much as I would like to, so today's post is a summary of what I have been doing over the past couple of weeks (weeks 9-11).

In recent lecture periods, we have been doing in-class exercises, such as creating a portal effect and water effects, which bring together the different concepts we have been learning about this semester. We also had the chance to demo our GDW game to the class for feedback on what to improve in preparation for GameCon/LevelUp.
In relation to the water exercise, we watched a GDC 2012 presentation from Naughty Dog on the water technology of Uncharted. The way they did their water rendering was really interesting. From what I understood, they layered clipmaps of different geometry sizes on top of each other and blended between the levels.
This worked together with the camera: the important, detailed objects in the scene sat in the smallest mesh, with the camera focused on it, and as the objects/camera moved through the scene, the rings moved with them.

Water Technology of Uncharted
The other topics covered in the past lectures were deferred rendering, depth of field, and motion blur, all three of which were interesting to me.

In the recent tutorial with Dan, we went over the implementation of shadow mapping: an explanation of what is done, and the shader code process. We also went over how to implement projection, i.e. projecting an image from the light source onto the scene. I plan to use what we were shown to complete the homework assignments due in the coming weeks.

Shadow Mapping + Projection
As for homework, there are two weeks remaining for submission. My framework still has a few small issues with UV texture loading, but I can finally start finishing off homework questions.
So far I have started and almost completed the ambient occlusion demo and the shadow mapping demo, as well as an artist question demoing the use of normal/specular/diffuse maps in different environments such as Maya, the OpenGL Shader Designer, and my own framework. I may work on mesh skinning or reflections if I can finish them on time.

Ambient Occlusion In Maya
- Jonathan

March 14, 2014

[Week 8] VR Game Jam

Hello!
Last weekend, I took part in GDSoc's second Game Jam, with the theme of VR, requiring that we use at least one VR tool available to us.
I was on a team of three. We had a slow start deciding on our idea and how we were going to complete it; we finally chose to recreate a simple labyrinth, navigating a ball through the maze with the Leap Motion controller.

Main Menu + InGame Screenshot

Our team is fairly new to the Unity environment and scripting, so it took some time to learn. Unfortunately we did not have enough time to implement the Leap Motion, but we were able to complete the basic game without the controller by the last day.
My team did learn a lot, and I am personally building onto the project as practice in Unity for future game jams.

- Jonathan

March 3, 2014

[Week 6 & 7] Shadow Mapping + Reading Week

Shadow Mapping!
In week 6 we began to cover the topic of shadow mapping.
Shadows play an important role in games, adding realism and character/mood, as well as helping the viewer understand spatial relationships.
There are different techniques for creating shadows, as well as many variations on each technique. The main techniques include ray tracing, shadow maps (commonly used by Pixar), and shadow volumes (used by many current games).

Shadow mapping, which I will be explaining today, is a relatively easy technique to implement. It is an image-space technique, and there are multiple pros and cons to using it. The pros include its speed, the fact that it scales linearly, and that the technique is well suited to the GPU. A shadow map only needs a depth buffer, and it only requires general shapes (triangles, higher-order surfaces). A major con of shadow mapping is the aliasing problem.

When creating shadow maps, there are two major viewpoints to consider: the light's viewpoint and how it interacts with the blocker (which casts shadows) and the receiver (the surface the shadow falls on), as well as the observer's viewpoint, which is what the user can see.

Shadow Map Diagram
Projective texture mapping is another important concept which also applies to shadow maps. Shadow maps are textures, and look-ups need to be performed on them. The projective texture coordinates first need to be computed, then transformed into light space, and finally projected to screen space.
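To make the look-up step concrete for myself, here is a minimal Python sketch of computing those coordinates: transform a world position by the light's view-projection matrix, do the perspective divide, then remap from [-1, 1] clip space into [0, 1] texture space (the "bias" step). The function names and the identity stand-in matrix are my own, for illustration only.

```python
def transform(m, v):
    """4x4 row-major matrix times the position (x, y, z, 1)."""
    x, y, z = v
    return [m[r][0] * x + m[r][1] * y + m[r][2] * z + m[r][3]
            for r in range(4)]  # homogeneous (x, y, z, w)

def projective_uv(light_view_proj, world_pos):
    """World position -> texture coordinates for the shadow-map look-up."""
    x, y, z, w = transform(light_view_proj, world_pos)
    # perspective divide, then scale/offset from [-1, 1] to [0, 1]
    return ((x / w) * 0.5 + 0.5, (y / w) * 0.5 + 0.5)

# identity stands in for a real light view-projection matrix
identity = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
u, v = projective_uv(identity, (0.5, -0.5, 0.0))
# clip-space (0.5, -0.5) lands at texture coordinates (0.75, 0.25)
```

In a real shader this scale/offset is usually baked into a single "bias matrix" multiplied onto the light's view-projection.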

As stated previously, aliasing is currently an issue with shadow mapping; it can show up as jagged shadow edges and as incorrect self-shadowing artifacts. Shadow maps also have a finite resolution, which can result in a resolution mismatch.
One way to address this problem is to increase the resolution of the shadow map, but this only reduces aliasing (it does not remove it) and uses up extra GPU memory.
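Putting the pieces above together, the depth test at the heart of shadow mapping can be sketched in a few lines. This is a minimal illustration with my own names and an illustrative bias value; a real implementation does this per fragment in the shader.

```python
def in_shadow(shadow_map, u, v, fragment_depth, bias=0.005):
    """A fragment is shadowed if something closer to the light already
    occupies its texel in the depth map rendered from the light's view.
    The small bias helps avoid the self-shadowing ("acne") artifacts."""
    stored_depth = shadow_map[v][u]  # depth of the nearest blocker
    return fragment_depth - bias > stored_depth

# 2x2 shadow map: the blocker covering texel (0, 0) is at depth 0.3
shadow_map = [
    [0.3, 1.0],
    [1.0, 1.0],
]

lit      = in_shadow(shadow_map, 0, 0, 0.29)  # receiver in front of blocker
shadowed = in_shadow(shadow_map, 0, 0, 0.60)  # receiver behind blocker
```

The finite shadow-map resolution is exactly where the aliasing comes from: many screen pixels can map to the same texel, giving the jagged edges described above.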

Reading Week + Tutorials!
Over most of the reading week I studied for midterms, slept a lot, and got in some gaming when I had time :)
I didn't have my framework finished by the previous Friday as planned, so I continued to work on it over the reading week. I completed the artist Character Design homework question, since I do enjoy modelling in Maya/Mudbox.
In Dan's tutorials, we went over non-photorealistic rendering (toon shading), which helped my understanding of what I will need to do for the homework questions.

In next week’s blog I plan to cover more image processing.


- Jonathan S.

February 15, 2014

[Week 4&5] Full Screen Effects, Global Illumination and Colour Processing

Hello!

This blog covers the lectures/tutorials from two weeks ago (week 4) as well as last week's (week 5). I will cover this week's (week 6) in my next blog post.
In the past two weeks we were lectured on Full Screen Effects and Global Illumination, and were shown how to implement Lighting and Colour Processing in the tutorial.

Full Screen Effects(Post Processing)!
This lecture was really interesting, learning how the different post-processing effects are created. We were taught the basic image processing effects, such as blurring, HDR/bloom, depth of field, and more.
We first went over the different methods of blurring, such as box filtering and Gaussian blurring. Gaussian blurring stood out to me because I have used it in Photoshop in the past, and it was interesting to learn how the effect is created.
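The Gaussian weights themselves are easy to build; here is a minimal sketch (size and sigma are illustrative parameters of my own). A nice property is that the blur is separable, so the same 1D kernel can be applied horizontally and then vertically instead of doing one expensive 2D pass.

```python
import math

def gaussian_kernel(size, sigma):
    """Sampled, normalised 1D Gaussian weights centred on the middle tap."""
    half = size // 2
    weights = [math.exp(-(i * i) / (2 * sigma * sigma))
               for i in range(-half, half + 1)]
    total = sum(weights)
    return [w / total for w in weights]  # normalise so weights sum to 1

kernel = gaussian_kernel(5, 1.0)
# the centre tap carries the most weight and the kernel is symmetric,
# which is what gives the soft fall-off compared to a box filter
```

A box filter is just this with all weights equal, which is why it looks harsher for the same kernel size.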
Creating the HDR and bloom effects was also interesting. It was shown that this can be done by combining multiple exposures into a single image, blending them, and applying a tone map.
An example we were shown in class was a Batman poster. After applying a down-sampling pass, which gives it a blur, as well as a BrightPass filter, which extracts the highlights, it came out with a really cool-looking effect.
Left: Original      Right: Down Sampling + BrightPass Filter
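The BrightPass step is simple to sketch: keep only pixels whose luminance exceeds a threshold and zero out the rest. The threshold and the luminance weights below are common choices of mine for illustration, not values from the lecture.

```python
def bright_pass(pixels, threshold=0.8):
    """pixels: list of (r, g, b) in [0, 1]; returns the highlights only."""
    out = []
    for r, g, b in pixels:
        luma = 0.2126 * r + 0.7152 * g + 0.0722 * b  # Rec. 709 luminance
        out.append((r, g, b) if luma > threshold else (0.0, 0.0, 0.0))
    return out

pixels = [(1.0, 1.0, 1.0), (0.2, 0.2, 0.2)]  # one bright, one dark pixel
highlights = bright_pass(pixels)
# only the bright pixel survives; blurring this result and adding it back
# over the original image is what produces the bloom glow
```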

Global Illumination!
In class we also covered Global Illumination and how it differs from direct lighting. We were previously shown the math behind direct lighting ...
... as well as how it is used. The equation is used for calculating direct lighting, and equates to the total amount of light of wavelength λ directed outwards along the direction ω at time t from a specific position x.
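The equation from the slides matching that description is, as far as I can reconstruct it, the standard expression for outgoing light (my notation may differ slightly from the lecture's):

```latex
L_o(x,\,\omega,\,\lambda,\,t) \;=\; L_e(x,\,\omega,\,\lambda,\,t)
\;+\; \int_{\Omega} f_r(x,\,\omega',\,\omega,\,\lambda,\,t)\,
L_i(x,\,\omega',\,\lambda,\,t)\,(\omega' \cdot n)\,\mathrm{d}\omega'
```

Here L_o is the outgoing light, L_e is light emitted by the surface itself, f_r is the surface's reflectance (BRDF), and the integral gathers incoming light L_i over the hemisphere Ω around the surface normal n.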
Continuing on to Global Illumination, we learned about radiosity, and how it is both a unit of light and an algorithm. Radiosity is a way to do indirect illumination/diffuse reflections, as opposed to ray tracing, which handles direct lighting and specular reflections. Radiosity can create effects seen in current games, such as Mirror's Edge.
Screenshot from Mirror's Edge

Tutorials!
In the week 4 and 5 tutorials, we covered different types of lighting, such as Lambert and Phong, as well as different colour processing effects, such as RGB/HSL invert, selective colour, greyscale, sepia tone, and Dan's own "psycho pearl".
Two Shader Examples
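Two of the colour-processing effects from the tutorial can be sketched per pixel in a few lines. The sepia matrix below is a commonly used one, not necessarily the tutorial's exact values.

```python
def greyscale(r, g, b):
    """Weighted luminance, written back to all three channels."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return (y, y, y)

def sepia(r, g, b):
    """Classic sepia-tone transform, clamped to [0, 1]."""
    sr = min(1.0, 0.393 * r + 0.769 * g + 0.189 * b)
    sg = min(1.0, 0.349 * r + 0.686 * g + 0.168 * b)
    sb = min(1.0, 0.272 * r + 0.534 * g + 0.131 * b)
    return (sr, sg, sb)

grey = greyscale(1.0, 0.0, 0.0)  # pure red becomes a dark grey
tone = sepia(1.0, 1.0, 1.0)      # white picks up a warm tint (blue drops)
```

In the shader versions the same math runs per fragment, with the colours read from the scene texture.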

Homework!
As for homework, I am still working on my personal standalone framework for the homework questions, and I am planning on having question 2, and possibly another, finished for the next tutorial.
Next week is reading week, so I will have more time to catch up on readings. I will be covering shadow mapping and more on cel shading in my next blog.

- Jonathan S.

January 26, 2014

[Week 3] Lighting and FBO

Hello!

So, week 3: things are starting to get really interesting. In the lectures we started going over the importance of lighting and the major difference it can make in a scene. We were shown examples of how lighting alone can add direction for the player, as well as change the mood of a scene.

We learnt that in OpenGL there are 4 different components of light: emission, ambient, diffuse, and specular.
Example of Specular, Diffuse, and Ambient
Example of Emission
Emission: Light that originates from an object itself, such as a lamp.
Ambient: A non-directional light source; light that doesn't have an origin. All vertices in the scene are affected by this component. When a light ray hits a surface, it is scattered equally in all directions.
Diffuse: Light that comes from a light source and hits different parts of a surface directly; its contribution changes depending on each vertex's position and orientation in relation to the light source.
Specular: The component that creates highlights on surfaces like mirrors and metals, and changes in brightness depending on where it is viewed from. This component is usually used with materials.
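Combining the four components for one vertex can be sketched like this. The vector helpers, values, and shininess exponent are my own for illustration; a real shader would do this in GLSL, usually per fragment.

```python
def dot(a, b):
    """Dot product of two 3D vectors (assumed normalised)."""
    return sum(x * y for x, y in zip(a, b))

def light_intensity(normal, to_light, to_eye, reflect_dir, shininess=32):
    ambient  = 0.1                                  # constant, no direction
    diffuse  = max(0.0, dot(normal, to_light))      # depends on orientation
    specular = max(0.0, dot(to_eye, reflect_dir)) ** shininess  # view-dependent
    emission = 0.0                                  # non-zero only for "lamps"
    return emission + ambient + diffuse + specular

# surface facing straight up, light directly above, viewer on the reflection
n = (0.0, 1.0, 0.0)
intensity = light_intensity(n, to_light=n, to_eye=n, reflect_dir=n)
# diffuse = 1, specular = 1, ambient = 0.1, emission = 0
```

The high shininess exponent is what keeps the specular highlight tight: the term dies off quickly as the viewer moves off the reflection direction.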

Toon/cel shading was also covered in class, which, as previously mentioned, I was really looking forward to. From what I understood, cel shading still makes use of the 4 light components, but unlike normal diffuse and specular lighting, which use a continuous function, it uses a step function for the lighting, giving it a more "blocky" feeling. It was also explained that there are many ways to achieve this effect, some more efficient/less costly than others, but all generally the same concept.

During the tutorials with Dan, we went over FBOs and how they can be used. In previous semesters we had only been using the front and back buffers; with FBOs we get additional off-screen buffers, so we can render more complex things off-screen. We also went over how an FBO can be used with the main buffers, by binding the texture from the FBO to the back buffer. A benefit of using an FBO is that it is not affected by depth, which means you can apply multiple textures to it and it will work. Another use for FBOs is that all post-processing effects can be, and are, done through them. During the lecture, Dan made the analogy that an FBO can be compared to a power bar, in that it can hold multiple textures, of which in our case we can only use 16.
FrameBuffer Object
This week I have begun deciding which homework questions I am planning to do, as well as prepping the framework that I plan to use.

- Jonathan S.




January 18, 2014

[Week 1&2] Stay Awhile and Listen...

Hello!
This is a development blog for Intermediate Computer Graphics, specifically for the development of my studio's game, Robopocalypse. This blog will also cover information taught in lectures/tutorials, which I can refer back to at a later date.

I'm pretty excited for the topics that will be taught, specifically shaders. For our game, as well as future projects, I'm interested in learning and implementing different shaders that can be used to create effects such as bloom, HDR, and cel shading, as well as other effects such as bump/normal/displacement mapping.

One of my favorite examples of shader use is the cel shading in Borderlands 2. I am really looking forward to learning more about how this effect is created.

Borderlands 2

In the past few lectures, we have gone over a review of previous classes, mainly the graphics pipeline, as well as an introduction to shaders. One topic that stood out to me was normal and displacement mapping and its uses.

Base vs. Bump vs. Displacement Mapping

In the past two tutorials with Dan, we have gone over the basic setup of the framework we will be using for future tutorials. So far we have done a quick shader test: a pass-through that made the screen orange.

I am really looking forward to what will be taught this semester, and I hope to implement these techniques in my studio's game, as well as future projects, to give them a more professional look.

- Jonathan S.