Saturday, December 26, 2009

Take a step back to take a step forward.

One of the original reasons I made this blog was to document work that I did, and to create a record of my process for making the visual art I wanted to create. As such, one of the stipulations I try to put on every post is that I have to include a picture.

That's a fairly difficult requirement, since a lot of what I'm working on is non-visual in nature, or involves a lot of research and abstract coding before finally producing a single image. Even when my end result is visually oriented, it's not always easy to create an image that illustrates the actual work I'm doing.

In college, I used to typeset all of my homework, and towards the end of my undergraduate degree I figured out how to produce diagrams using MetaPost. The amount of work involved was tremendous, even for relatively simple figures. But at the same time it forced me to reflect on how to communicate the material at hand, using both linguistic and visual techniques. Since I was often learning these ideas at the same time I was trying to convey them, I had to think about them more and, in the end, gained a deeper understanding of them.

To that end, I've recently been trying to resurrect that process while I study new topics that bring me closer to creating Jarvis. In particular, this diagram is the very first I've produced while relearning MetaPost. It accompanies exercises from a textbook on computer vision, which I'm hoping to use to implement control systems, and illustrates a simple derivation of the projection equations for a point onto an image plane located in front of a camera's pinhole.
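For reference, the heart of that derivation is just similar triangles. In my notation (the textbook's may differ), a point at camera coordinates (X, Y, Z), projected onto an image plane a focal distance f in front of the pinhole, lands at:

```latex
% Pinhole projection by similar triangles (the standard result; notation
% mine, not necessarily the textbook's).
\[
  x = f \, \frac{X}{Z},
  \qquad
  y = f \, \frac{Y}{Z}
\]
```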

Sunday, August 2, 2009

Hatchety hatch hatch

One of the side projects I've been wanting to work on to help me learn non-photorealistic rendering is a bit of a game. The idea is to choose a particular artist and see if I can generate images that look like they were produced by that artist, using NPR techniques.

The first artist I chose was Edward Gorey, and this is my first attempt to write a shader which recreates his cross-hatching style of pen-and-ink drawings:

Hatch shader, first attempt

This was also my first real attempt to write a shader at all, in the RenderMan Shading Language. There were some technical challenges, and I figured a few things out.

The shader uses a standard Lambertian shading model to calculate an ink intensity value on the surface of the model. This intensity is used as an input to a procedurally generated dither screen, which essentially produces parallel lines of increasing frequency as intensity increases.
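As a rough sketch of the idea, here's a Python stand-in for the shader logic (illustrative names and constants, not the actual RSL code):

```python
def hatch(ink, y, base_spacing=16.0, levels=4, line_width=1.0):
    """Python stand-in for the RSL dither screen (names and constants are
    illustrative). `ink` is the Lambertian ink intensity in [0, 1] and `y`
    is the raster-space row; returns 1.0 for paper, 0.0 for ink."""
    value = 1.0  # start with blank paper
    spacing = base_spacing
    for level in range(levels):
        # Each step up in ink intensity enables another, denser set of
        # parallel lines, doubling the line frequency each time.
        threshold = (level + 1) / float(levels + 1)
        if ink > threshold and (y % spacing) < line_width:
            value = 0.0
        spacing *= 0.5
    return value
```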

Because it uses raster space coordinates to index into the dither screen, it's easy to control the parallelism and uniform spacing of the lines, which makes it easier to produce an appropriate average value over a given area. The downside is that it's difficult to generate hatching that follows contour lines. It's possible to use the parametric coordinates on the surface to easily generate contour lines, but then it becomes difficult to control spacing, and hence value, because the parametric coordinates aren't uniformly distributed over the surface.

I'm not sure of a good way to handle this trade-off, and I suspect it needs global information that isn't available to a shader as it executes. I'm hoping to address this by using Aqsis to generate OpenEXR images that carry other useful data in their channels (e.g., surface normals or tangents), and writing custom renderers to composite these output images however I want.
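A minimal sketch of the compositing side, using the OpenEXR Python bindings (the file name and channel names here are hypothetical; they'll depend on how the extra passes get configured):

```python
import array

import Imath
import OpenEXR

def read_channel(exr, name):
    """Read a single float channel from an open EXR file into a flat array."""
    pt = Imath.PixelType(Imath.PixelType.FLOAT)
    return array.array('f', exr.channel(name, pt))

exr = OpenEXR.InputFile("render.exr")  # hypothetical file name
dw = exr.header()['dataWindow']
width = dw.max.x - dw.min.x + 1
height = dw.max.y - dw.min.y + 1

# Hypothetical channel names; they depend on how the pass is set up.
nx = read_channel(exr, 'N.x')
ny = read_channel(exr, 'N.y')
nz = read_channel(exr, 'N.z')
assert len(nx) == width * height

# A custom compositor can now use per-pixel normals, e.g. to orient hatch
# lines along the surface in a post-process.
```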

Thursday, July 16, 2009

But does it have to be so wangy?

Only five months between posts; I'm on a roll. It's hard to remember everything I've done to actually make these two images.

This is the first round of an actual 3D rendering of the octopus leg that I was designing in January.

Side View of Rendered Leg

3/4 View of Rendered Leg

First off, I finished wrapping all the Renderman functions for NURBS and patch surfaces (and subdivision surfaces too, IIRC). I think pretty much all that's left are the functions that require passing in a pointer to a C function. Those are going to be a little tricky, but I have some ideas.

Next up, I took the two curves I drew in January and converted them to a parametric surface representation. Since one of the curves (the cross section) was actually two curves, one the mirror of the other, I had to do some tricks to convert it into a single curve. I had hoped to figure out a method for concatenating arbitrary NURBS curves, but could only come up with a solution for the case where the endpoints are coincident and the end knots have full multiplicity. I think the general case might be possible, but I didn't want to spend too much time on it unless I really needed it.
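For the record, here's the special case I settled on, sketched in Python rather than the actual C++ module. With full end-knot multiplicity, each curve interpolates its end control points, so when those coincide the join only needs a merge of the control points and a splice of the knot vectors:

```python
def concat_clamped_nurbs(ctrl_a, knots_a, ctrl_b, knots_b, degree):
    """Join two clamped NURBS curves of the same degree end to end.

    Sketch of the special case described above (the real implementation
    lives in the C++ module): both curves must have full end-knot
    multiplicity (degree + 1), and the last control point of A must
    coincide with the first control point of B. Control points are
    (x, y, z, w) tuples; the join is C0 at the shared point."""
    assert ctrl_a[-1] == ctrl_b[0], "curve endpoints must coincide"

    # Shift B's parameter domain so it starts where A's ends.
    shift = knots_a[-1] - knots_b[0]
    shifted_b = [k + shift for k in knots_b]

    # Merge control points, dropping B's duplicated first point.
    ctrl = list(ctrl_a) + list(ctrl_b[1:])

    # Drop one copy of A's trailing knot and all of B's leading clamp, so
    # the join knot ends up with multiplicity == degree (a C0 join).
    knots = list(knots_a[:-1]) + shifted_b[degree + 1:]

    return ctrl, knots
```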

Finally, when generating the test images, I found I needed to be able to position the camera arbitrarily. It's easy enough to come up with a basis matrix for a camera at a given position looking at a certain point, but Renderman expects the inverse of that matrix (the world-to-camera transform). Since the basis is orthonormal, its inverse is easy enough to generate too, but I figured I'd probably need full matrix inversion at some point, so I decided to write a general matrix inverse routine. That turned out to be a lot harder than expected.
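The orthonormal shortcut looks roughly like this (a numpy sketch, not my actual code; row-vector convention, as Renderman uses):

```python
import numpy as np

def look_at_inverse(eye, target, up=(0.0, 1.0, 0.0)):
    """Sketch of the world-to-camera transform (the matrix Renderman
    wants), built directly from the orthonormal camera basis rather than
    via general matrix inversion. Row-vector, +z-forward convention."""
    eye = np.asarray(eye, dtype=float)
    forward = np.asarray(target, dtype=float) - eye
    forward /= np.linalg.norm(forward)

    right = np.cross(np.asarray(up, dtype=float), forward)
    right /= np.linalg.norm(right)
    true_up = np.cross(forward, right)

    # The camera-to-world rotation is orthonormal, so its inverse is just
    # the transpose; the translation is -eye expressed in the camera basis.
    m = np.identity(4)
    m[:3, 0] = right
    m[:3, 1] = true_up
    m[:3, 2] = forward
    m[3, 0] = -np.dot(eye, right)
    m[3, 1] = -np.dot(eye, true_up)
    m[3, 2] = -np.dot(eye, forward)
    return m
```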

End result: I can draw arbitrary curves and surfaces, and position the camera wherever I want. Next up: figuring out shaders so I can draw something other than this lovely shade of matte blue, or figuring out animation so I can actually have something that, y'know... moves.

Saturday, February 21, 2009

Hard Won Star

Star
This simple star represents about a month's worth of work. It was produced using Aqsis, an open source Renderman renderer. The program that makes the star is itself quite simple; almost all of the work went into writing a Python wrapper around the Aqsis library so that I can make Renderman calls directly from Python.

The tricky part in writing the wrapper was that the Renderman C interface makes extensive use of C's variadic (...) calling style to pass extended parameter information. Luckily, the Renderman interface also specifies the Ri*V style of calls, which pass the same parameters as explicit token and value arrays; without those, the wrapper would have been pretty much impossible to write.

All parameters passed this way are represented using void pointers to low-level C types (floats, ints, etc.). In particular, when passing in a list of elements, the underlying type actually sent across the interface is generally an array of floats, or ints, or whatever: elements all of the same type, stored contiguously in memory. Of course, Python makes no such guarantees about how it stores its types, or even that the elements of a list share a type. The problem, then, is converting whatever Python values are passed in into void pointers that can be safely handed across the Renderman C interface.

I solved this by maintaining a map of conversion objects which can be cloned and used to dynamically (and virtually) convert a Python type to the underlying C type, handling any copying, conversion, and memory allocation needed to keep the converted values alive for the lifetime of the Renderman call. A second map, configured by an external XML file, maps Ri function and parameter names to the types they expect. The Python wrapper can then look up the necessary target types at call time, perform any conversions, and safely make the underlying C call.
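In spirit, the mechanism looks something like this simplified Python analogue (the real converters are C++ objects, and the names here are made up):

```python
import ctypes

# Converters keyed by Renderman type name. Each turns a Python sequence
# into contiguous C storage that stays alive for the duration of the call.
def to_float_array(values):
    return (ctypes.c_float * len(values))(*[float(v) for v in values])

def to_int_array(values):
    return (ctypes.c_int * len(values))(*[int(v) for v in values])

CONVERTERS = {'float': to_float_array, 'int': to_int_array}

# Expected parameter types; in the real wrapper this comes from an XML file.
PARAM_TYPES = {('RiPointsV', 'P'): 'float',
               ('RiPointsV', 'constantwidth'): 'float'}

def convert_params(ri_name, params):
    """Convert a dict of Python parameters into parallel token/value lists
    suitable for an Ri*V-style call."""
    tokens, values = [], []
    for name, value in params.items():
        converter = CONVERTERS[PARAM_TYPES[(ri_name, name)]]
        tokens.append(name.encode())
        values.append(converter(value))  # the list keeps the memory alive
    return tokens, values
```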

The end result is that I can iterate a lot faster; the program to generate this star only took a few minutes to write. I'm hoping to get curved surfaces exposed soon, so I can start making higher quality renderings of Jarvis' legs.

Sunday, January 11, 2009

First Images

These are the first images to come out of the program I've been working on. It's not much, but it's a start.

These are two examples of NURBS curves. The first represents a cross section of an octopus leg. The leg is in white; the red circle is a reference I used when manipulating the curve into the shape I wanted.

The second curve is the one the cross section will be "extruded" along. It's not apparent due to scaling, but it's a planar curve about 8 units long and exactly one unit high (the red lines represent the lines y = 0 and y = 1, respectively). The cross section isn't going to be extruded in the true sense of the word; instead, the height of the second curve will determine the scale of the cross section as it "moves" down the curve.

Cross Section for Octopus Leg

Scale Definition for Octopus Leg

The curves were created by manually specifying the control points using a custom-written C++ NURBS module, which I exposed as a Python module using Boost.Python. The image captures were then made using Boost.GIL.

The next step is to actually create a NURBS (or Bézier) control patch for the surface using the cross section and control curve. Hopefully, since I'm using NURBS, I'll eventually be able to produce high quality renderings (via Aqsis) as well as a model of the octopus for realtime rendering purposes.
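The rough plan for that patch, sketched in Python with hypothetical names (the real code will live in the C++ module): place a copy of the cross section at each control point of the control curve, scaled by that point's height:

```python
def sweep_control_patch(cross_section, spine):
    """Sketch of the scaling "extrusion": build a grid of surface control
    points by placing a scaled copy of the cross section at each control
    point of the spine (control) curve. `cross_section` is a list of
    (x, y) points; `spine` is a list of (x, y) points whose y in [0, 1]
    gives the scale. Illustrative only, not the final implementation."""
    patch = []
    for sx, sy in spine:
        scale = sy  # height of the control curve = cross-section scale
        row = [(sx, scale * cx, scale * cy) for cx, cy in cross_section]
        patch.append(row)
    return patch  # rows of 3D control points for something like RiNuPatch
```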