Sunday, November 18, 2012

What I know about Computer Graphics

I've been working closely with CG both professionally and as a hobby for the past 5-6 years. I've been making games and developing engine architectures. The latest developments can be tracked on the Claymore Dev blog. I've seen different techniques, tried many others, and even written articles about them in big books (GPU Pro 3, OpenGL Insights). And the funny part is: I still don't know how to build engines... All I know is how a bunch of known techniques may help you or screw you up, based on personal experience.

Uber-shaders
Problems: Difficult to maintain due to poor granularity and a monolithic approach. Unable to extend from the game side.
Alternative: Shader compositing. In OpenGL you can extend the functionality by either linking with different shader objects, swapping the subroutines, or by directly modifying the source code.
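A minimal sketch of the linking approach, assuming a GL 2.0+ context with loaded entry points and purely illustrative shader sources (not code from any actual engine):

// Compose one program from separately compiled fragment shader objects.
// The technique shader declares `vec4 evalMaterial()` and calls it from
// main(); the material shader provides the definition.
#include <GL/gl.h>   // assumes GL 2.0+ entry points are available via a loader

static GLuint compile(GLenum type, const char* source) {
    GLuint shader = glCreateShader(type);
    glShaderSource(shader, 1, &source, 0);
    glCompileShader(shader);              // error checking omitted for brevity
    return shader;
}

GLuint composeProgram(const char* techniqueSrc, const char* materialSrc) {
    GLuint program = glCreateProgram();
    glAttachShader(program, compile(GL_FRAGMENT_SHADER, techniqueSrc));
    glAttachShader(program, compile(GL_FRAGMENT_SHADER, materialSrc));
    glLinkProgram(program);               // the linker resolves evalMaterial()
    return program;
}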

Deferred shading
Problems: Very limited BRDF support. High fill-rate and GPU memory bandwidth load. Difficult to properly support MSAA.
Alternative: Tiled lighting. You can work around the DX11 hardware requirements by separating lights into layers (to be described).
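A rough CPU-side sketch of the binning idea (my own illustration; a DX11-class implementation would do the culling in a compute shader, but the principle is the same):

#include <algorithm>
#include <vector>

struct LightRect { float minX, minY, maxX, maxY; };   // screen-space bounds

std::vector<std::vector<int>> binLights(const std::vector<LightRect>& lights,
                                        int tilesX, int tilesY,
                                        float width, float height) {
    std::vector<std::vector<int>> tiles(tilesX * tilesY);
    const float tileW = width / tilesX, tileH = height / tilesY;
    for (int i = 0; i < (int)lights.size(); ++i) {
        const LightRect& r = lights[i];
        int x0 = std::max(0, (int)(r.minX / tileW));
        int y0 = std::max(0, (int)(r.minY / tileH));
        int x1 = std::min(tilesX - 1, (int)(r.maxX / tileW));
        int y1 = std::min(tilesY - 1, (int)(r.maxY / tileH));
        for (int y = y0; y <= y1; ++y)
            for (int x = x0; x <= x1; ++x)
                tiles[y * tilesX + x].push_back(i);   // light i may touch this tile
    }
    return tiles;   // the lighting pass shades each tile with its own short list
}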

Matrices
Problems: Difficult to decompose into position/rotation/scale. Take at least 3 vectors to pass to the GPU. Oblige you to support non-uniform scale (e.g. you can no longer skip the inverse-transpose of the 3x3 part to get a normal matrix).
Alternative: Quaternions and dual-quaternions. Both take 2 vectors to pass.
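A small sketch of what the alternative representation could look like (types and names are illustrative):

// A node transform stored as a rotation quaternion plus position and uniform
// scale: two vec4-sized values instead of three rows of a 3x4 matrix.
struct Quat { float x, y, z, w; };            // rotation, vec4 #1

struct Transform {
    Quat  rot;
    float posX, posY, posZ, scale;            // position + uniform scale, vec4 #2
};
// Uploading takes two vec4 uniforms per node instead of three, and with only
// uniform scale the normal transformation is just the rotation itself, so no
// 3x3 inverse-transpose is needed.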

Context states
Problems: Bug-hunting is difficult because of poor problem locality. Assumptions about the context are easy to make, but if you end up checking them with assertions, why not just pass the whole state instead?
Alternative: Provide the whole state with each draw call. Let the caching work for you.
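A sketch of the idea, assuming a GL backend and a deliberately tiny state structure:

#include <GL/gl.h>   // assumes GL 2.0+ entry points are available via a loader

struct DrawState {
    GLuint program   = 0;
    bool   depthTest = false;
    bool   blend     = false;
    // ... whatever else the engine considers part of the pipeline state
};

struct StateCache {
    DrawState current;
    // Callers pass the full state with every draw; the cache filters out
    // redundant GL calls, so nobody has to reason about leftover context.
    void apply(const DrawState& s) {
        if (s.program != current.program) glUseProgram(s.program);
        if (s.depthTest != current.depthTest) {
            if (s.depthTest) glEnable(GL_DEPTH_TEST); else glDisable(GL_DEPTH_TEST);
        }
        if (s.blend != current.blend) {
            if (s.blend) glEnable(GL_BLEND); else glDisable(GL_BLEND);
        }
        current = s;
    }
};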

C++
Problems: Memory management and safety. Compiler-generated copy operators/constructors. The pain of dealing with headers and optimizing compile times. Many, many lines of code.
Alternative: Rust. Other "safe" languages (the .Net family, Java, Python) are not as low-level and often trade performance for safety (e.g. a global GC phase causes an unacceptable frame rate hitch).

All I actually know is that there are a thousand and one difficult architectural issues in a graphics engine, and there is no silver bullet for most of them. For the most common solutions I listed possible alternatives above, but they are nowhere near flawless. I hope that one day the amount of experience I've gained will magically transfer into the quality of my decisions, and I will finally know the right answers.

Thursday, November 1, 2012

Rust

Early this morning I woke up with a single thought echoing loudly in my brain: "Dart was a mistake, it was not made for me. I should look for some statically typed practical language instead". Even though my KriWeb project (written in Dart) was not being actively developed, I agreed (with my dreaming counterpart) that the instrument I chose for this project iteration is far from perfect. Suddenly, I felt the urge to look for something ideal, something that seemed so real, as if I had been reading its specification the other day... And I just needed to recall its name...

I started looking for it on the web. There were many interesting suspects among the new languages. Ceylon, for example, features immutability by default (which highly encourages a functional style), which seemed very familiar and close to what I was looking for. It is a very nice language all in all, but it currently runs on the Java VM and was heavily inspired by it, which pushed me away. Go sounded attractive due to the strong support from Google, but disappointed me by its lack of user generics. Zimbu looked too original, while Haxe seemed to try to cover too many use cases. I reached the 5th page of Google search results, and there was still no trace of it. Maybe it was a dream?..

One step away from giving up my search, I stumbled upon this Holy Grail of programming. Its name is Rust, and it is developed by the Mozilla Foundation. Suddenly, I remembered this shining website interface, this clear language specification that I had read a while ago. I found it, at last! Let me explain why I was so happy:
  • Strong static typing with inference, only explicit mutability. This is so right and so rare to see at the same time. Unlike Dart, most of my mistakes will be found at compile time.
  • No segfaults while still compiling to native code. The memory model is protected and guaranteed to work without access violations under normal circumstances. It has the potential for C-like performance, hence being a better tool for various tasks.
  • User generics with constraints, pattern matching (Haskell-style). Yes, it took the best from my beloved purely functional language.
  • Less statements but more expressions and closures. This makes it even more sleek and functional.
  • Syntax extensions. Hello, Boo macros!
  • Structure compatibility with C. Using external APIs (e.g. OpenGL) gets easier.
Overall, the language and its environment seem very nice. It is simple yet powerful, and feels very promising. I'm looking forward to working closely with this gem, and I'm very excited :)

Thursday, October 4, 2012

Mind shield

Travelling helps you get a clear view of the strange world we live in. Now I understand what Hollywood is, along with many other parties, because the idea hit me like a truck. And my attitude towards big names will never be the same again.

The ultimate goal of every corporation, as well as many other organizations, is to get into your wallet. However, selling things and services directly may not be the best way to achieve that. Instead, they aim for your heart, because it opens the door to much more than just money. I'm talking about Hollywood, the church, Apple, Google, governments and charities. They want you to like them, to consider them for your choice, to talk about them with your friends, to think about them at night. They want to be a part of your mind.

Humans are weird creatures. They consider themselves smart, but they barely understand how the environment shapes their minds. An average human does not even try to control the development of their own brain and consciousness. And the brain just absorbs stuff chaotically, whatever happens to get in.

The story of Hollywood is simple. Long ago there was cinema. Actors were servants: they didn't earn much, they were not recognized by random people, they were nothing at that point. And then a smart man came in and decided to change the role of actors in the movie of life. From now on, he said, actors will be respected and well paid. He made people like them, love them and buy them. Clearly, he understood the "get into your heart" business model. As we see now, it turned out to be hugely successful. We have favourite actors, we know a lot of movies, and we pay insane amounts of money to watch them at theatres. The movie business is blossoming, and every child now wants to be an actor in the future...

Let's figure out how we could protect our hearts. First off, don't watch too much TV. Try to avoid any kind of advertising on TV, on the radio and on the Internet. Allocate a small part of your brain to be independent, to interrupt the flow of thoughts periodically with a simple question: "Do you really need to watch/listen/think about that?". Finally, pay more attention to the stuff that really matters to you: science, art, family, philosophy, health, etc.

Wednesday, September 5, 2012

First DirectX impressions


During the last half a year I had a great opportunity to work closely with DirectX on a production scale. From the very beginning, I had a suspicion that the technology is a big joke. It didn't start with little things, no, it started with a full-scale attack on my OpenGL-friendly brain. Let me name the offenders:

Render state. You are not in full control of it. The DX runtime may change it without your consent. In particular, this happens under DX10+ when you bind a texture resource that is also one of the render targets. The debug runtime will notify you in the log, but the regular one will just do it silently. And then you wonder, where did the texture go? Unsurprisingly, PIX, the hammer of DX frame profiling, is not able to handle this behavior correctly, so debugging one of these little bugs may cost you a really long headache. In DX11 there is a new flag that allows binding a depth-stencil texture as read-only, allowing you to sample from it. As a result, you have to split the rendering paths: copy the depth for DX10, and use the flag for DX11.

In contrast, OpenGL gives you undefined behaviour whenever you want to read and write at the same time. While it seems suspicious at first, in practice you don't sample from the texture being rendered to, so your program works as expected, and no state is corrupted. Moreover, your GL program doesn't need a new flag or a duplicate of the depth texture: all you need is to disable depth/stencil writes, and you can read them. In conclusion, while DX creates and then fixes problems of its own with each new version, OpenGL just works as expected.
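In code, the GL path described above boils down to something like this (a sketch; depthTexture is assumed to be the depth attachment of the currently bound FBO, and GL 1.3+ entry points are assumed to be loaded):

glDepthMask(GL_FALSE);                       // no depth writes
glStencilMask(0);                            // no stencil writes
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, depthTexture);  // sample the same depth buffer
// ... issue draw calls that read the depth through a sampler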

Documentation. The official source of knowledge about DX is MSDN. Unfortunately, you are not able to track all the changes that go in there; I don't see any revision history. My co-worker was following the CHM documentation bundled with our DirectX SDK, according to which CopyResource() can not be used if one of the surfaces is multi-sampled. He ended up copying a surface using a full-screen quad with a designated shader... And only then did I discover that the online version is different: for DX 10.1 the function actually copies multi-sampled surfaces too.

Another example is the D3D11_RASTERIZER_DESC structure. There is a MultisampleEnable member, which (surprise!) affects only line rendering on DX10.1 and above, while affecting all MSAA rendering on DX10.0 and below. Yes, I know there is also an AntialiasedLineEnable flag, but how does this make it any less confusing?

The situation gets worse as you dig deeper. As an example, there is the multi-sampled texture object in HLSL. According to MSDN, you have to explicitly specify the number of samples in the template. Though I'm not sure, maybe the page is being fixed while I'm typing this. Anyway, in practice, under DX10.1+ you can skip it. That's when you end up scavenging every little detail from presentations, scanning forums, and trying to guess logically. The DX knowledge is like a secret cave of treasures, which some companies (Epic, Crytek) know better than others.

OpenGL, on the other hand, provides a strictly versioned document. You can download any version of it, see the changes highlighted, and find everything you are looking for. You don't need to scavenge the forums: if something works differently from the specification, it's most likely a bug, and not a feature.

Sloppiness. You can do many things incorrectly, and the DX runtime will still try its best to let your application work. For example, you can sample from a multi-sampled texture bound as a regular one. DX will automatically resolve the pixel before sampling, if possible. Or you can assign a float3 to a float in HLSL, and it will still work (this can probably be fixed by a strict flag on the HLSL compiler). Such robust behaviour is very welcome on the end-user side, but developers need to be sure the code is valid. I would prefer it to crash hard on the first error encountered, or at least return some error code (hello, OpenGL). I understand that, again, one can use the debug runtime, see the error log, and figure this out. But the truth is, most development happens with the regular runtime, because the debug one is damn slow. And even with it you would have to step through the suspicious call to see the new error in the log - that's not how errors should be handled.


All in all, DirectX gives the impression of being made by amateurs who got the power to talk to hardware developers. It's not developer friendly, it's not blazing fast, it's not something to compete with OpenGL. I admit that's a bit of an emotional over-statement, and one could expect something like that from me. I will continue learning DX technology, and I hope to discover some real gems there, if they exist at all.

Tuesday, July 24, 2012

KriWeb project future

Introduction

KriWeb is my hobby 3D engine, the 4th incarnation of KRI technology. I've been working on it in my spare time for the last half a year. Recently, I finished implementing the heart of the concept - the shader composing pipeline.

Technology

In short, the shader compositor was designed to decouple rendering technique code from the material and mesh modifiers. The material provides a set of functions to the pixel shader, which are used by the technique shader code. The technique knows how to apply an arbitrary stack of geometry modifiers (e.g. skeletal animation, morphing, displacement) without knowing anything about the actual modifiers used by the entity. The shader compositor assembles all these parts together into a linked shader program that is associated with the entity.

As an example, we can imagine a material that provides a pure BRDF function. A technique knows about scene lights, and uses this BRDF to evaluate the lights' contribution to a surface point. The underlying mesh gets modified by, say, skeletal animation. These pieces of functionality are glued together automatically to display a shiny animated object for you. While the Demo already shows this working, a better one could be made to harness the full power of shader compositing.
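For illustration only (KriWeb itself is written in Dart and runs on WebGL), the source-assembly step could look roughly like this in C++:

#include <string>
#include <vector>

// Concatenate modifier, material and technique code into one shader source,
// which is then compiled and linked as usual.
std::string composeShaderSource(const std::vector<std::string>& modifiers, // skinning, morphing, ...
                                const std::string& materialFuncs,          // e.g. the BRDF
                                const std::string& techniqueBody)          // contains main()
{
    std::string src = "precision mediump float;\n";
    for (size_t i = 0; i < modifiers.size(); ++i)
        src += modifiers[i] + "\n";
    src += materialFuncs + "\n";
    src += techniqueBody;
    return src;   // ready for compilation and linking
}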

Future

Now it is time for me to evaluate the path I've taken, and to figure out the vector of progression for the near future. With all the KRI incarnations (as with most existing hobby engines), there has always been a big issue chasing me - the lack of an application. I dodged it as long as I could, but in the end an engine dies without an application. I don't want to see KriWeb old and weak after several years of development. If it is to die, let it die young, with its technology remembered shining brighter than the sun.

In other words, I don't want to continue the development of KriWeb until the real application is found. It may be either my own new project, or a cooperation with someone, but it has to be something good. It's not like I have a lot of free time now - working at Rockstar is pretty close to a dream job, and my skills are needed there in full while making the next big thing. Cheers!

Wednesday, April 18, 2012

Revolution in Game Development


Hardcore gamers have struggled to find good games over the past 10 years. With each new release, each new title or demo, I've been looking with hope that it might be something incredible. But, generally, there hasn't been a single great game, just a couple of good ones instead. Today, all big titles are targeted at a soft-core audience, because that is easier to make and sells well. Fortunately, this is going to change in 2013, and the roots of the revolution are visible now.
2013 will be the beginning of the next golden age of gaming. The epicentre of the last one was around 1998, and I'm sure there was at least one more before it in the 80s. The reason games are about to change is a revolution in the relationship between developers and users, with the publisher on its way to extinction. The key concepts of the new era are digital distribution and crowd funding. This revolution is happening today, and the leaders have already shown up:
1. Steam (2002): the flagship of digital game distribution, built by Valve. Steam helps PC developers sell and advertise their games without a publisher. Steam has also shown us that games don't need to be so expensive, and that the price can drop faster after the release, especially if the game turned out to be not as good as advertised.

2. Humble Bundle (2010): demonstrated the effectiveness of the pay-what-you-want business model applied to indie games, plus the fact that copy protection is unwanted: both Humble Bundle and the GOG service provide only DRM-free content. An interesting discovery was that Linux/Unix users are ready to pay more than Windows gamers.

3. Minecraft (2009): an original game that became popular in the open alpha state. Minecraft was not the first, but it was the brightest and an incredibly successful example of a game sponsored by a live user community. People realized that they can not only pay for existing games, but also influence the future by investing in the ideas they like.

4. KickStarter (2008): a portal that connects game developers with gamers, who are ready to invest their money. Millions of dollars are gathered around ambitious projects, exceeding developer expectations by a large factor. It is the final link in a chain that leaves no place for big fat publishers. Well, except for console games... for now.

I'm calling on everyone to sponsor the games you would really enjoy! This revolution will make 2013 a wonderful year for games, one able to compete with the veterans of 1998. For the complete picture, here are the games I'm proud to support:

Tuesday, March 27, 2012

A Perfect Game


Computer games are a substantial part of my life. They bring new ideas and unique experiences, and challenge my tactics and reflexes. I always think about the qualities of games in general, and try to judge existing games by these characteristics. There is a game that reached my heart, and I would like to tell you about it.
I have played all genres, with RPG being my favorite. I adore Fallout 1/2, Arcanum, X-Com 1/2, Unreal 1, Planescape: Torment, Baldur's Gate, Jagged Alliance, MechCommander, and other classics of the 90s. Since that golden age the overall depth of the content has been quickly decreasing, while the appearance has been getting more and more detailed. These titles are well respected by a limited community, but the subject of this post is not among them. It came out shadowed by the titans and, having a unique language, enormous system requirements, and little-to-no advertising, was left with no chance to shine.


The game Vangers was created by pure geniuses from the Russian K-D Lab studio. It features a unique voxel-based terrain engine, a futuristic story of novel quality, and gameplay that mixes RPG, action and simulation. The world you are literally thrown into lives by its own laws (unlike X-Com/JA/MC, where everything is user-centric). You are not special there (unlike Fallout/Arcanum/Planescape/BG); in fact there are a thousand others who are faster, stronger, and even able to reach the story goals before you. The world behaves as a living organism: leave the controls for a second, and you will notice swarms of little creatures flying, swimming and crawling in the terrain; other vangers rushing in a race competition; the global world cycle changing from winter to warm summer... The landscape is dynamically persistent throughout the game: destroy a bridge in a crazy fight, and you have to look for another way to cross the river. The role-playing is based on your actual actions, not on some digits in your stats. On the one hand, you are absolutely free to do anything there; even suicide can be your game ending. On the other hand, there is a strong story line that keeps you fully motivated to explore.


The game didn't get a proper reception. Some people love it, some hate it, others don't understand it, or simply have never heard of this gem. It could make a perfect MMOG, but it already has various multi-player modes, and the games are still hosted. I enjoyed *playing* the story, because this is game play in its perfect sense. From that moment, I started looking at the real world with the eyes of a vanger.

Saturday, February 4, 2012

Report: the end of 3rd development age

Year 2012 started with a new vision of the technologies I want to use in order to achieve the same old goals. The 3rd age of my personal projects lasted for 2.5 years and is over now. I'd like to give a small overview of what is going to be left in the past, and of my new friends for the near future.



Boo, the dinosaur of the old age.
The major fault of Boo for me was its immaturity. Imagine developing a killer feature in your project and then getting an "unknown compiler exception" after you changed tons of code without being able to compile. The next thing you do is spend a day narrowing down the issue and providing a test case for the bug, and then the next day trying to work around it, "temporarily". That's time you could have spent doing something important for you, not for the language creator and community. I tried to patch the imperfections of the generics implementation with smart AST macros, and it was indeed fun.

The other faults come from Boo's main platform - .Net/Mono. I've met some serious inconsistencies and ambiguities there. For example, OOP polymorphism can be achieved in two ways: using virtual methods or via implicit interface implementations. I had to use both, because this is how the platform is set up.

Portability was the last major issue. While you could safely copy the binaries to Linux/MacOS and try to execute them, this didn't work smoothly in practice, not to mention that tablets/phones were completely out of scope. Errors about some library of some version not being found on a target platform drove me crazy.

I still like it and will continue to use it in a Unity-based project. Working with Boo was an important stage in my professional growth, but it's time to move forward now.


Dart + WebGL, the ultimate portability solution.
OpenGL is developing too quickly for me. It's difficult to constantly adapt to new features and redesign the system. WebGL is much more stable. It can potentially work on any platform, without any platform-specific client code. Back in the C times I could give you the sources to build locally and execute. In the .Net/Mono/Java times I could provide binaries that were likely to just work. In the Web times I just give you the link...

The 4th iteration of the KRI engine will be developed from scratch to work on WebGL. Everyone will be able to instantly see the result of my efforts without setting up any development environment. I've got much more experience now, having worked with 3 different mature engines. I have a clear understanding of the goals and principles upon which the new engine structure should be built. Besides, the philosophy of not doing anything heavy on the CPU and generating as much as possible fits WebGL very nicely.

Dart comes as a perfect replacement for JavaScript here. Cleaner code, better OOP and FP integration, and finally the opportunity to be in the first wave of new WebGL applications. I can't imagine doing anything serious with JS, and I like Dart very much so far.


Haskell, the global paradigm shift.
Meeting functional programming changed my mind, and I'll never be the same again. No, I'm not going to stop using imperative languages, but the way I look at code is very different today. The Haskell experience helped me to develop a vision of really clean and error-free code. For each piece of logic I now prefer to know exactly the input data and the result, removing all implicit and hidden flows. For example, I consider any singleton or static piece of data dangerous.

Haskell is now the language of my AI experiments. While I am still learning, I now think on a higher level of abstraction, which simplifies development and allows me to concentrate on ideas more than on tools.


Conclusion.
Learning new principles is very beneficial. It's not just about new abilities, it's also about looking at old things from a very different perspective. I've had a lot of fun with OpenGL 3 and Boo; they were a real step forward compared to my C/C++-only 2nd age. But even more fun waits ahead. The new development principles that I'll carry through the 4th age are very promising, and I'll do my best to realize their potential in full.

Monday, January 9, 2012

Functional thinking

I started reading the book "Learn You a Haskell". It is truly wonderful, explaining complex things in a simple, friendly way. Haskell itself seems to be the functional language. Its purity makes you feel like you are writing math lemmas and theorems in a formal language; it is a very different feeling from the C++/Boo I used to program in.

Surprisingly enough, functional programming makes us care about the goal, or the shape of the result. You answer questions like "What should it look like? What does it consist of?". At the same time, imperative programming involves asking the question "How?" most of the time.

As a first milestone and a practical task in learning Haskell I wrote the Burrows-Wheeler Transform (BWT). I never thought it could be implemented in just 10 lines (not counting qsort), and remain nicely readable after that:

qsort :: (Ord a) => [a] -> [a]
qsort [] = []
qsort (x:xs) = qsort left ++ [x] ++ qsort right
  where left  = filter (<=x) xs
        right = filter (>x)  xs

bwt :: String -> (Int,String)
bwt input =
  let buildMx [] _      = []
      buildMx (x:xs) ch = (x:xs,ch) : buildMx xs x
      mx     = buildMx input (last input)
      sorted = qsort mx
      output = map snd sorted
      base   = head [ i | (i,(s,_)) <- zip [0..] sorted, s == input ]
  in (base,output)

Friday, January 6, 2012

Suffix sorting in linear time with no extra space

Intro
Suffix array construction (or, more generally, suffix sorting) is the important task of finding the lexicographical order of all suffixes of a given string. It is heavily used in data indexing and BWT compression, which I've been doing for a long time to date.
First of all, it was surprising to me to discover that this is possible at all. By linear time I mean O(N) asymptotic execution time, where N is the input array size. By no extra space I mean that the only memory required is for the input array (N bytes), the suffix array (4N bytes) and some constant storage (O(1) bytes).

Induction sort (SA-IS) is the key. The authors did a great job of exploring it. The major breakthrough was to use induction to sort the LMS sub-strings, in addition to using it afterwards to recover the order of all other suffixes. The only thing missing was a proper memory requirement for the algorithm. The authors claim 2N is the worst-case additional memory. Let's see how we can narrow this value down.
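For reference, here is a small sketch (not the A7 code itself) of the S/L classification that LMS positions are derived from: a suffix is S-type if it is lexicographically smaller than the one that follows it, L-type otherwise, and an LMS position is an S-type position preceded by an L-type one.

#include <string>
#include <vector>

std::vector<int> findLmsPositions(const std::string& s) {
    const int n = (int)s.size();
    std::vector<bool> isS(n + 1);
    isS[n] = true;                          // the empty sentinel suffix is S-type
    if (n > 0) isS[n - 1] = false;          // the last real suffix is L-type
    for (int i = n - 2; i >= 0; --i)        // scan right to left
        isS[i] = s[i] < s[i + 1] || (s[i] == s[i + 1] && isS[i + 1]);
    std::vector<int> lms;
    for (int i = 1; i <= n; ++i)
        if (isS[i] && !isS[i - 1]) lms.push_back(i);
    return lms;                             // every LMS sub-string starts here
}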

Proof
For an input string of size N, let's define M to be the total number of LMS-type suffixes. The memory (R(N,M) machine words) required for a recursive call consists of three sections:

  • Suffix storage = M
  • Data storage = M
  • Radix storage = K(M), where K(M) is the number of unique LMS sub-strings

R(N,M) = M + M + K(M) <= 3M

Now, let's split the LMS sub-strings according to their length. There can be no LMS of size 1. There are L2 of size 2, L3 of size 3, and so on. Let's now define M and N in terms of these numbers:
M = Sum(i){ Li }
N = Sum(i){ i * Li }

We know that LMS of different size can not be equal. Therefore, we can safely split K(M):
K(M) = Sum(i){ K(Li) } = K(L2) + K(L3) + ... + K(Li) + ...
K(Li) <= Li

We know that the maximum number of unique sub-strings of size 2 can not exceed 2^16 (for a byte alphabet), which is a rather small number. For convenience, let's name a=L2 and b=L3+L4+... = Sum(i>2){ Li }. It is time to give a better upper bound on R(N,M):
M = a+b
N = 2L2 + 3L3 + 4L4 + ... >= 2L2 + 3(L3+L4+...) = 2a + 3b
R(N,M) = 2M + K(M) = 2a+2b + K(L2) + Sum(i>2){ K(Li) } <= 2a+2b + 2^16 + b = 2a+3b + 2^16 <= N + O(1)

Well, that's it. By carefully arranging the data in memory and allocating just 2^16 additional words we can perform a recursive call to SA-IS, and, therefore, construct a complete suffix array.

Code
My version of SA-IS is called A7. Advantages over Yuta's SA-IS implementation (v2.4.1):
  • 5N is the worst-case memory requirement. It doesn't depend on the input.
  • No memory allocation/free calls inside the sorting procedure.
  • Data squeezing: using smaller data types for the recursion if the range allows.
  • Cleaner C++ code with more comments.