Holy crap, two posts in one day. Anyway, I think I need to write some matrix transformation algorithms so that all my images don't start in the lower left and end in the upper right. Such is the difficulty of writing curves out by hand.
This is a Coons' patch made of simple one-segment Bezier curves. With the new framework it was actually fairly simple to adapt the patch description and shade it semi-interestingly, and it should be possible to generically insert new curves. But the interesting part is that this is the first thing I've really treated as a 2-dimensional surface, and it will probably form more of a basis for the things I want to render. I'm pretty close to being able to actually introduce some new tricks that I've been thinking of for doing some interesting shading techniques, but I really may have to start looking at performance.
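For reference, the bilinearly blended construction works roughly like the sketch below; the Vec2 and Curve types are stand-ins for illustration rather than the actual types in my framework, with each boundary just being a curve evaluated at its parameter, which is what makes it easy to swap in new curves generically.

```cpp
#include <functional>

struct Vec2 { double x, y; };

Vec2 operator+(Vec2 a, Vec2 b) { return {a.x + b.x, a.y + b.y}; }
Vec2 operator-(Vec2 a, Vec2 b) { return {a.x - b.x, a.y - b.y}; }
Vec2 operator*(double s, Vec2 v) { return {s * v.x, s * v.y}; }

using Curve = std::function<Vec2(double)>; // a boundary curve on [0, 1]

// Bilinearly blended Coons patch: c0/c1 run along u (bottom/top edges),
// d0/d1 run along v (left/right edges).  The surface is the sum of the
// two ruled surfaces minus the bilinear interpolant of the four corners.
Vec2 coons(const Curve& c0, const Curve& c1,
           const Curve& d0, const Curve& d1,
           double u, double v) {
    Vec2 ruled_u = (1 - v) * c0(u) + v * c1(u);
    Vec2 ruled_v = (1 - u) * d0(v) + u * d1(v);
    Vec2 corners = (1 - u) * ((1 - v) * c0(0) + v * c1(0))
                 + u       * ((1 - v) * c0(1) + v * c1(1));
    return ruled_u + (ruled_v - corners);
}
```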
Tuesday, December 28, 2010
Seaweed
So, what's not apparent about this image is the amount of work that went into refactoring my image rasterization framework to essentially be able to handle arbitrary C++ types as vertices and to now be able to use arbitrary function objects which can shade meshes composed of said vertices. I also did some refactoring so that I could take arbitrary one-dimensional curves and two-dimensional surfaces and subdivide them to feed them to the rasterization algorithm.
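The shape of the interface is roughly the sketch below; the names (Quad, shade_mesh, and so on) are made up for illustration and aren't the actual classes in the framework.

```cpp
#include <vector>

struct Color { float r, g, b, a; };

// The rasterizer is templated on whatever vertex type the caller
// provides, and the "shader" is any callable that maps a vertex to
// a color -- so a gradient texture is just a lambda over whatever
// fields the vertex happens to carry.
template <class Vertex>
struct Quad { Vertex corners[4]; };  // one cell from a subdivided curve or surface

template <class Vertex, class Shader>
void shade_mesh(const std::vector<Quad<Vertex>>& mesh,
                Shader&& shader,
                std::vector<Color>& out) {
    for (const Quad<Vertex>& q : mesh) {
        // A real rasterizer samples many points inside each quad; this
        // just shades one corner to show how the pieces plug together.
        out.push_back(shader(q.corners[0]));
    }
}
```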
Most of the new work is apparent as the repeated gradient texture on the Beziers that make up this image, which wouldn't be possible without the ability to write an arbitrary shader. It turns out the hardest part of doing this was that I wanted to treat each little quadrilateral that the subdivision algorithm spits out as a tiny rationally bilinearly interpolated patch, which then interpolates the vertex values that get fed into the shader, but working out the mathematics to figure out how to correlate sample indices to actual parameter values on that patch in a numerically stable way was pretty tricky.
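Stripped of the rational weights, the per-quad interpolation is something like the sketch below; the real code also divides through by an interpolated weight, and the vertex type is assumed to support addition and scaling by a double.

```cpp
// Plain (non-rational) bilinear interpolation of four corner values:
// v00 is (u=0, v=0), v10 is (u=1, v=0), v01 is (u=0, v=1), v11 is (u=1, v=1).
template <class V>
V bilerp(const V& v00, const V& v10, const V& v01, const V& v11,
         double u, double v) {
    return (1 - u) * (1 - v) * v00 + u * (1 - v) * v10
         + (1 - u) * v * v01 + u * v * v11;
}

// Map integer sample indices to parameters by dividing at the end rather
// than accumulating a step, so u and v land exactly on 0 and 1 at the
// edges -- one simple way to keep the corner values stable.
template <class V, class Shader>
void sample_quad(const V corners[4], int nu, int nv, Shader&& shade) {
    for (int j = 0; j <= nv; ++j) {
        for (int i = 0; i <= nu; ++i) {
            double u = double(i) / nu;
            double v = double(j) / nv;
            shade(bilerp(corners[0], corners[1], corners[2], corners[3], u, v));
        }
    }
}
```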
My next step is to try and get arbitrary shaders working on Coons' patches. I actually had them worked out but could only shade them with a solid color, which was fairly uninteresting. Performance has already degraded a bit though, so I think that might actually be what I have to take a look at next.
Sunday, October 10, 2010
Beziers!
Over the last couple of weeks I did a complete rewrite of my little rasterizer program to be able to better handle the different types of shapes that I'll be drawing with. The algorithm is now organized around being able to efficiently super-sample just about any type of primitive without using too much memory, and then aggregate the super-sample set into an antialiased lower resolution image. The nature of the algorithm should also make implementing new primitive types (and possibly procedurally generated primitives) simpler. To test it out I added a new primitive type, Bezier curves!
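The aggregation step is essentially a box filter over each block of super-samples, something like the sketch below; the real code streams samples rather than buffering a full high-resolution image, and the names here are illustrative.

```cpp
#include <cstddef>
#include <vector>

struct Color { float r, g, b, a; };

// Collapse each S x S block of super-samples into one output pixel.
// "samples" is row-major at (width * S) x (height * S).
std::vector<Color> downsample(const std::vector<Color>& samples,
                              std::size_t width, std::size_t height,
                              std::size_t S) {
    std::vector<Color> image(width * height);
    float inv = 1.0f / float(S * S);
    for (std::size_t y = 0; y < height; ++y) {
        for (std::size_t x = 0; x < width; ++x) {
            Color acc{0, 0, 0, 0};
            for (std::size_t sy = 0; sy < S; ++sy) {
                for (std::size_t sx = 0; sx < S; ++sx) {
                    const Color& c =
                        samples[(y * S + sy) * (width * S) + (x * S + sx)];
                    acc.r += c.r; acc.g += c.g; acc.b += c.b; acc.a += c.a;
                }
            }
            image[y * width + x] = {acc.r * inv, acc.g * inv,
                                    acc.b * inv, acc.a * inv};
        }
    }
    return image;
}
```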
I also published all my code onto GitHub, for backup purposes, and perhaps sharing purposes one day if anyone actually becomes interested.
http://github.com/bracket/rasterizer
Saturday, September 18, 2010
nyyyyyyyyyyyyyyyeargh
Since my last post I've left my old job, moved to Portland, and started a new job. It feels like it's been way less than 8 months. Anyway, I was kind of stagnating on doing anything interesting image-wise for a while, and was kind of feeling dumb about it.
A friend of mine pointed me at the demoscene the other day. The idea behind it is to create procedurally generated images and videos where the entire compressed executable is less than 64kb, which is tiny in terms of modern computing storage.
The idea intrigued me enough to give it a shot, so this weekend I started a fast and dirty renderer framework to make static images. And here's the first one! With some nice anti-aliased lines and everything. I'm pretty happy 'cause I'm pretty much responsible for every single pixel in this image. Next up: Beziers or NURBS and maybe animation.
Saturday, December 26, 2009
Take a step back to take a step forward.
One of the original reasons I made this blog was to document work that I did, and to create a record of my process for creating visual art that I wanted to make. As such, one of the stipulations I try and put on every post is that I have to include a picture.
That's actually a fairly difficult requirement, since a lot of what I'm working on is in fact non-visual in nature, or involves a lot of research and abstract coding to finally produce a single image. Even if my end result is visually oriented, it's not always easy to create an image that illustrates the actual work that I'm doing.
In college, I used to typeset all of my homework, and towards the end of my undergraduate degree I figured out how to produce diagrams using MetaPost. The amount of work involved was tremendous, even to produce relatively simple figures. But at the same time I think it forced me to reflect on how to communicate the material at hand, using both linguistic and visual techniques. Since I was often learning these ideas at the same time I was trying to convey them, I feel I had to think on them more and in the end gain a deeper understanding of them.
To that end, I've recently been trying to resurrect this process while I study new topics that bring me closer to creating Jarvis. In particular, this diagram is the very first I've produced while relearning MetaPost. It's for the exercises from a textbook on computer vision, which I'm hoping to use to implement control systems. It illustrates a simple derivation of the projection equations for a point onto an image plane located in front of a camera's pinhole.
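The equations themselves fall out of similar triangles. With the pinhole at the origin and the image plane a distance f in front of it (the exact axis and sign conventions depend on the textbook), a point (X, Y, Z) projects to image coordinates (x, y):

```latex
\[
  \frac{x}{f} = \frac{X}{Z}, \qquad \frac{y}{f} = \frac{Y}{Z}
  \quad\Longrightarrow\quad
  x = f\,\frac{X}{Z}, \qquad y = f\,\frac{Y}{Z}
\]
```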
Sunday, August 2, 2009
Hatchety hatch hatch
One of the side projects I've been wanting to work on to help me learn non-photorealistic rendering is a bit of a game. The idea is to choose a particular artist, and see if I can generate images that look like they were produced by that artist using NPR techniques.
The first artist I chose was Edward Gorey, and this is my first attempt to write a shader which recreates his cross-hatching style of pen-and-ink drawings:
This was also my first real attempt to write a shader in the RenderMan shading language. There were some technical challenges, and I figured a few things out.
The shader uses a standard Lambertian shading model to calculate an ink intensity value on the surface of the model. This intensity is used as an input to a procedurally generated dither screen, which essentially produces parallel lines of increasing frequency as intensity increases.
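In rough C++ pseudocode (the actual shader is written in the RenderMan shading language, and the names and constants here are just illustrative), the dither screen works something like this:

```cpp
#include <cmath>

// "ink" is the Lambertian intensity remapped so 0 means white paper and
// 1 means solid ink; screen_y is a raster-space coordinate, which is what
// keeps the stripes parallel and evenly spaced regardless of how the
// surface is parameterized.
bool hatch(float ink, float screen_y) {
    const float base_spacing = 8.0f;  // pixels between lines at the lightest level
    const int levels = 4;             // each level doubles the line frequency
    for (int i = 0; i < levels; ++i) {
        float threshold = float(i + 1) / float(levels + 1);
        if (ink < threshold) break;              // not dark enough for this set of lines
        float spacing = base_spacing / float(1 << i);
        if (std::fmod(screen_y, spacing) < 1.0f)
            return true;                         // this sample lands on a hatch line
    }
    return false;
}
```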
Because it uses raster space coordinates to index into the dither screen, it's easy to control the parallel quality and uniform spacing of the lines, thus making it easier to generate an appropriate average value over a given area. The downside is that it's difficult to generate hatching which follows contour lines. It's possible to use the parameter values on the surface to easily generate contour lines, but then it becomes difficult to control spacing and hence value, due to the non-uniform distribution of the parameter coordinates over the surface.
I'm not sure of a good way to handle this trade-off, and I think I may need to do it using global information that isn't available to a shader as it's executing. I'm hoping to address this by using Aqsis to generate OpenEXR images which contain other useful data in their channels (e.g., surface normals or tangents), and write custom renderers to composite these output images however I want.
Thursday, July 16, 2009
But does it have to be so wangy?
Only 5 months between posts, I'm on a roll. It's hard to remember everything that I've done to actually make these two images.
This is the first round of an actual 3D rendering of the octopus leg that I was designing in January.
First off, I finished wrapping all the RenderMan functions for NURBS and patch surfaces (and subdivision surfaces too, IIRC). I think pretty much all that leaves is the functions which require passing in a pointer to a C function. Those are going to be a little tricky, but I have some ideas.
Next up, I took the two curves I drew in January and converted them to a parametric surface representation. Since one of the curves (the cross section) was actually two curves, one the mirror of the other, I had to do some tricks to convert it into one curve. I had hoped to figure out a method for concatenating arbitrary NURBS, but could only come up with a solution for the case where the endpoints are coincident and the end knots have full multiplicity. I think a general case might be possible, but I didn't want to spend too much time on it unless I really needed it.
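For that special case the bookkeeping is fairly mechanical: drop the shared control point, shift the second knot vector so it starts where the first ends, and leave the joint knot with multiplicity equal to the degree. A sketch, with illustrative types rather than the ones in my code:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct Point { double x, y, w; };  // weighted control point

// A degree-p NURBS with n control points carries n + p + 1 knots.
struct Nurbs {
    int degree;
    std::vector<double> knots;
    std::vector<Point> points;
};

// Concatenate b onto a, assuming: equal degrees, full-multiplicity
// (degree + 1) knots at a's end and b's start, and b's first control
// point coincident with a's last.  The joint knot ends up with
// multiplicity p, which gives a C0 join through the shared point.
Nurbs concatenate(const Nurbs& a, const Nurbs& b) {
    assert(a.degree == b.degree);
    int p = a.degree;

    Nurbs out;
    out.degree = p;
    out.points = a.points;
    out.points.insert(out.points.end(), b.points.begin() + 1, b.points.end());

    double shift = a.knots.back() - b.knots.front();
    out.knots = a.knots;
    out.knots.pop_back();  // drop one copy of the joint knot: p + 1 -> p
    for (std::size_t i = std::size_t(p) + 1; i < b.knots.size(); ++i)
        out.knots.push_back(b.knots[i] + shift);  // skip b's p + 1 joint knots
    return out;
}
```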
Finally, when generating the test images, I found I needed to be able to position the camera arbitrarily. It's easy enough to come up with a basis matrix for a camera at a given position looking at a certain point, but RenderMan expects the inverse of that matrix. I guess it was also easy enough to generate the inverse of that matrix since the basis is orthonormal, but I figured I would probably need full matrix inversion at some point, so I decided to write a full matrix inverse routine. That turned out to be a lot harder than expected.
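For the record, the orthonormal shortcut goes like the sketch below: build a basis from an eye point, a target, and an up vector, then use the fact that the inverse of the rotation part is just its transpose, with the translation re-expressed in the camera's own basis. The conventions here are illustrative; handedness and which axis counts as "forward" need to match whatever the renderer expects.

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
Vec3 normalize(Vec3 v) {
    double len = std::sqrt(dot(v, v));
    return {v.x / len, v.y / len, v.z / len};
}

// World-to-camera matrix for a camera at "eye" looking at "target".
// Row-major, column-vector convention, translation in the last column.
void look_at(Vec3 eye, Vec3 target, Vec3 up, double m[4][4]) {
    Vec3 forward = normalize(sub(target, eye));
    Vec3 right   = normalize(cross(forward, up));
    Vec3 newup   = cross(right, forward);

    Vec3 rows[3] = { right, newup, forward };
    for (int i = 0; i < 3; ++i) {
        m[i][0] = rows[i].x; m[i][1] = rows[i].y; m[i][2] = rows[i].z;
        m[i][3] = -dot(rows[i], eye);  // eye position expressed in the camera's basis, negated
    }
    m[3][0] = m[3][1] = m[3][2] = 0.0; m[3][3] = 1.0;
}
```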
End result: I can draw arbitrary curves and surfaces, and position the camera wherever I want. Next up: figuring out shaders so I can draw something other than this lovely shade of matte blue, or figuring out animation so I can actually have something that, y'know... moves.