Saturday, August 15, 2015

How to create a Magnetic Vector Field in Houdini - Tutorial

A few days ago I was chatting with a colleague (Jon Parker, great Houdini artist) about magnetic fields, so I decided to create a little OTL that generates a vector field from a cloud of poles, and to share the workflow, since it's really simple to implement and the results are great. I am not sure (actually, I highly doubt) that the following implementation is physically correct in any way; I just wanted a way to quickly create a magnetic-looking vector field.

I generally use Wrangle nodes for pretty much everything, since they make it very quick to create and initialize attributes and to modify their values with just a few lines of code. Feel free to use VOP nodes if you prefer a node-based approach.

As far as volumes are concerned, I generally prefer VDBs (when I am not forced to use native Houdini volumes). VDBs have an optimized memory structure that allows very fast access to voxel data; plus, they are sparse, so they have a lower memory footprint than Houdini volumes. They are slightly trickier to use, but the reward is huge in terms of speed and memory.

Let's start simple: two points with an f@pole attribute that can be either +1 or -1.

For mere visualization purposes, I created a little setup where a sphere is copied (Copy SOP) onto each point ...

... the sphere is RED if the pole is negative and GREEN if the pole is positive (using a Copy SOP and the stamp expression function).

Using a Wrangle node and the stamp function in one of its parameters, I can select the color of each sphere based on the f@pole attribute.

EDIT: I was completely unaware of the chv( ... ) VEX command, which allows reading a vector parameter directly from the UI (thank you, anonymous poster!!). This means the lines above can be written, much more elegantly, this way:
vector negcol = chv("negativecol");
vector poscol = chv("positivecol");

... your setup should now look more or less like this.

It's now time to create the Volume that will contain our vector field.
Let's create a VDB node, make sure to give the field a name (I used "magneticfield"), and set it to be a vector field.

This will create an empty VDB volume. By default, a VDB volume is dimensionless. Why? Because VDB is a sparse volume, meaning it exists only where we want it to. Consequently, we need to "activate" the VDB in the regions of space where we need it, before actually "filling" it.

To do that, we use the VDB Activate SOP, which lets us use geometry to activate the region.

In this case I am using a bounding box, with some padding, generated directly by my point cloud.
This is roughly what the node graph should look like.
NOTE: don't forget to enable "Pack Geometry before Copying" in the Copy SOP, under the Stamp section (later, when we use thousands of points, this option will make a huge difference).

And the viewport (note the empty VDB defined by the bounding box surrounding the poles):

There's one more step before we start writing our Volume Wrangle. At the moment we wouldn't be able to visualize the content of the vector VDB, given the nature of a vector field. For this purpose we can use the Volume Trail SOP: it samples the vector field at the points fed into its first input, and draws curves following the vector volume connected to its second input.
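Conceptually, what Volume Trail does is simple: starting from each sample point, repeatedly sample the vector field and take a small step along the sampled direction. Here is a minimal Python sketch of that idea (the `field` function is a stand-in for the VDB voxel lookup; the step size and step count are made up):

```python
# Sketch of what a Volume Trail conceptually does: from each seed point,
# repeatedly sample the vector field and step along the sampled direction.
# "field" is a stand-in for the actual VDB voxel lookup.

def trail(seed, field, step=0.1, nsteps=20):
    """Trace a polyline through a vector field, starting at seed."""
    pts = [seed]
    p = seed
    for _ in range(nsteps):
        vx, vy, vz = field(p)
        p = (p[0] + vx * step, p[1] + vy * step, p[2] + vz * step)
        pts.append(p)
    return pts

# With a constant field like {1, 0, 0}, every trail is a straight
# horizontal line along +X.
constant_field = lambda p: (1.0, 0.0, 0.0)
curve = trail((0.0, 0.0, 0.0), constant_field)
```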

I used the previously created bounding box to generate a Fog Volume, and scattered points inside it. These will be the sample points used by the Volume Trail SOP.
And this is what the graph should look like. Note that the sample points go into input 1 of the Volume Trail SOP.
Right now we can't see any trails, because the vector field is empty. Let's test that everything works by filling the volume with a constant vector, for example v@magneticfield = {1.0, 0.0, 0.0}; in a Volume Wrangle.

Your viewport should now look like this.
Horizontal lines, matching the horizontal vector {1.0, 0.0, 0.0}.

Now we have a good environment that will allow us to visualize and debug our code.

Let's now discuss how to calculate, for each voxel of the VDB grid, the magnetic field resulting from the 2 poles.

Let's calculate the magnetic field generated by the poles p1 and p2 at the position v@P (the dark grey voxel visible in the picture above).
That voxel will feel the negative influence of p1 (meaning it will be repelled from p1) and the positive influence of p2 (meaning it will be attracted towards p2). If we take the vector between the voxel and each pole position, we of course obtain a vector that is very small when the voxel is close to the pole and very big when it's far away. We want the opposite: the closer we are to the pole, the stronger the attraction or repulsion should be. That's why, in our calculation, we use the inverse of the distance from each pole as a multiplier of the (normalized) direction vector, as you can see from the function plot below.

d is red (no good, because it gets bigger and bigger with distance);
1/d is green (good, because it is big close to the pole, and smaller and smaller far from it).

This was the trickiest part; the rest is pretty simple.
All we need to do is store in each voxel the sum, over all poles, of the normalized direction to each pole, multiplied by the pole attribute (so that the contribution repels or attracts depending on the sign of the pole) and multiplied again by the inverse of the distance, as explained above.

The pseudo code is the following:
  • For each Voxel position v@P
    • initialize a vector called VectorField = { 0 , 0 , 0 }
    • iterate through all the points (pi) within a certain radius of v@P
      • import the pole attribute of the point, pole
      • find the vector d from v@P to the pole position pi
      • find the normalized direction (nd) and the magnitude (md) of vector d
      • add to VectorField the vector nd, multiplied by pole and by the inverse of the distance md
    • the magnetic field v@magneticfield in the voxel v@P is set to VectorField
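Outside Houdini, the pseudo code above can be sketched in plain Python as a sanity check (in the actual setup this logic runs as VEX in a Volume Wrangle; the pole positions below are made-up example data):

```python
# Plain-Python sketch of the pseudo code: for one voxel position, accumulate
# each pole's contribution: normalize(d) * pole * (1 / |d|),
# where d points from the voxel towards the pole.

def magnetic_field(voxel_p, poles):
    """poles: list of ((x, y, z), pole) tuples, with pole = +1 or -1."""
    field = (0.0, 0.0, 0.0)
    for pos, pole in poles:
        d = tuple(p - v for p, v in zip(pos, voxel_p))  # voxel -> pole
        md = sum(c * c for c in d) ** 0.5               # magnitude
        if md == 0.0:
            continue                                    # voxel sits on the pole
        nd = tuple(c / md for c in d)                   # normalized direction
        # pole = +1 attracts the voxel, pole = -1 repels it,
        # with strength 1/md (stronger near the pole)
        field = tuple(f + n * pole / md for f, n in zip(field, nd))
    return field

# A voxel at the origin, repelled from a negative pole on its left and
# attracted towards a positive pole on its right: both push it along +X.
poles = [((-1.0, 0.0, 0.0), -1), ((1.0, 0.0, 0.0), +1)]
v = magnetic_field((0.0, 0.0, 0.0), poles)
```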
Let's create a Volume Wrangle, connect input 1 to the VDB volume, and input 2 to the merge node containing the 2 points with the "pole" attribute.

Converting the pseudo code in VEX is probably easier than writing the pseudo code itself.

Now this is what the view port should look like:

Now we can replace the boring 2-pole setup with something more attractive.
How about... simulating the magnetic field on the surface of the sun?

This picture I downloaded from Google Image is a good reference.

To recreate that look, all we need to do is scatter points on a sphere (about 2k of them), randomly assign f@pole to -1 or +1, and feed them into the simple setup we just created.

I find this pretty cool ! :)
Ok, I guess that's it.
If you like this tutorial, or found a better way to achieve the same result, please don't hesitate to comment.
Thank you for reading !

Since you read the whole article you definitely deserve the hip file.

It's better to use the inverse of the square of the distance, instead of the inverse of the distance. This definitely gives better results, with less interference in the voxels far from the poles.
Thanks to Jon Parker for this suggestion.
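To see the difference numerically, compare the two weights at a few distances: the inverse square is stronger very close to a pole and dies off much faster far from it, so distant poles pollute a voxel far less. A quick Python check:

```python
# Compare the two falloffs at a few distances: close to the pole (d < 1)
# the inverse square is stronger, far from it (d > 1) it is much weaker,
# which means less interference from distant poles.
distances = (0.5, 1.0, 2.0, 10.0)
inv_d  = [1.0 / d for d in distances]        # [2.0, 1.0, 0.5, 0.1]
inv_d2 = [1.0 / d ** 2 for d in distances]   # [4.0, 1.0, 0.25, 0.01]
```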


Monday, June 15, 2015

H14 - Point Wrangle vs Creep SOP

The following VEX code, used in a Point Wrangle, reproduces the behavior of the Creep SOP, and it's about 5 times faster.

I've used it to stick and move points on a NURBS surface.

Points To Stick --> Point Wrangle Input 1
Nurbs Surface --> Point Wrangle Input 2

vector newP = primuv(@OpInput2, "P", 0, set(v@P.x, v@P.y, 0));
vector newN = primuv(@OpInput2, "N", 0, set(v@P.x, v@P.y, 0));
v@P = newP; // write the sampled surface position and normal back
v@N = newN;
The VEX command primuv is super fast.

This system of course also works with a polygonal object. Just make sure to create UV coordinates and N first (using a Facet SOP).

Wrangle nodes win again !

Thursday, July 17, 2014

Ink in Water

One way to achieve this effect is to use a looot of particles advected by a velocity field generated by some gas simulation. Why not use the gas sim itself? Because, in order to achieve the same detail offered by particles, the resolution of the gas grid would have to be so high that even HAL 9000 wouldn't be able to handle it, or Skynet would suddenly become self-unaware.

So I generated a simple smoke sim using the H13 Smoke Solver. I didn't use Pyro because it was probably overkill for such a simple sim. Then I wedged and cached to disk 10 versions of the smoke sim, varying parameters like the turbulence amplitude and twirl radius (finding out which parameters to wedge is an art of its own).

Once I had 10 different smoke sims, I had 10 different velocity field sequences I could use to advect my points.
Now, two ways to advect points in Houdini are:

  • create a POP system in a DOP Network and use the POP Advect by Volumes node, making sure to point it to the velocity fields cached on disk and re-imported at SOP level.
    PROs: POP land provides a large range of nodes to control particle motion.
    CONs: slow.
  • convert the velocity fields to VDB, merge them into one single VDB vector field (with VDB Vector Merge; don't worry if Houdini complains that the grid components are different), then use VDB Advect Points in a SOP Solver to advect your points.
    PROs: very fast.
    CONs: you have to do everything yourself.
    • PRO of this CON: you CAN do whatever you want :) !

When I said that POPs are slow, I still mean they can process several hundred thousand particles per second; the VDB solution can handle millions of points per second. Because of this, you can avoid rendering different versions of the particle sim and compositing them together to smooth out the "particulate" look (which is the main challenge of an ink-in-water effect).

So I decided to adopt the VDB solution.
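Whichever route you take, the advection step itself is just forward Euler: sample the velocity field at each point and move the point along it. A minimal Python sketch of what happens every timestep (`sample_vel` stands in for the VDB lookup; the constant field and dt are made up):

```python
# Forward-Euler advection, the core of what VDB Advect Points does each
# timestep: sample the velocity at every point and step the point along it.
# "sample_vel" is a stand-in for the actual VDB grid lookup.

def advect(points, sample_vel, dt):
    new_pts = []
    for x, y, z in points:
        vx, vy, vz = sample_vel((x, y, z))
        new_pts.append((x + vx * dt, y + vy * dt, z + vz * dt))
    return new_pts

# Example: a constant upward velocity lifts every point by dt per step.
up = lambda p: (0.0, 1.0, 0.0)
pts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
for _ in range(10):
    pts = advect(pts, up, dt=1.0 / 24)
```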

To add extra points in the areas where the particles were more sparse, I added an Attribute Wrangle node at the end of the chain and called it Fill Gaps. Since the Attribute Wrangle runs in the CVEX context, it can add (or destroy) geometry.
The purpose of the script is basically to add points where there aren't any.

This is the content of the script (make sure to create the proper UI parameters before sourcing the script, and feed the same point geometry into both input 1 and input 2 of the Attribute Wrangle node).

Note: this fill-gap algorithm is FAR from perfect. It just fills the spaces between points that are larger than "mindist" within "searchrad", so the results might not be what you expect. The good thing is that the added points remain surprisingly consistent during the animation, so at least you don't see crazy artifacts or points suddenly popping into the render.

float searchrad = ch("searchrad");
float mindist = ch("mindist");
int maxpoints = chi("maxpoints");
int fillpoints = chi("fillpts");

vector clpos;
int handle = pcopen(@OpInput2, "P", @P, searchrad, maxpoints + 1);
int i = 0;
while (pciterate(handle))
{
    pcimport(handle, "P", clpos);
    if (i++ == 0) continue; // the first point found should be the closest: itself. We want to skip it.
    if (length(@P - clpos) > mindist)
    {
        vector pointstep = (clpos - @P) / (fillpoints * 2 + 1); // this ensures there are no duplicate points,
                                                                // at the cost of doubling the fill points number
        for (int t = 1; t <= fillpoints; t++)
            addpoint(geoself(), @P + pointstep * t);
    }
}
pcclose(handle);

After rendering 10 EXR sequences, I imported them into Nuke and used them as sprites on a particle distribution scattered over a large plane, rendered with DOF. Kinda slow actually, and DOF in comp is always a PITA (has this improved in the last 15 years? Nope!).

This is the result.
Oh, there is a card disappearing in the last 10 frames, probably one of the Nuke particles ran out of fuel.

Ink in Water - test from Alessandro Pepe on Vimeo.

Thank you for reading !

Thursday, May 29, 2014

Houdini - Explicit Cache

Classical Scenario:
Your deadline is today at 7pm. Which means your deadline really is tomorrow around 9pm! You've time to recalculate your huge FLIP sim that takes about 13 hours to complete (note: not 12 hours, not 14... 13, the evil number). Each bgeo file is about 666 MB. You hope you'll have enough HD space, but hey... HD space is never enough! Even Confucius knew this.
So you cross your fingers and hit "Render" on your rop_geometry network to start your 999-frame FLIP sim.

Now, when you've only one chance to do it right, and being fired is the price of missing it, this is what usually happens, in order:
1 - you run out of HD space on the frame 998.
2 - Linux crashes for the first time in 12 years
3 - Houdini crashes for the 13th time in 2 hours
4 - power outage in the whole <city where you are in that moment>

You better remember that one of these 4 things (if not all of them) will happen. It's important to be positive.

How will our fellow FX Artist save his job, and manage to pay the rent of his mansion with pool in the Hollywood Hills ?

The answer is this gorgeous little gem of pure love called "explicit cache", and its little friend "explicit frames to cache", on the DOP Network node.

This option is off by default, because the generated files can fill your 12 TB HD very quickly.
But this is where the second option comes really in handy !
You don't have to save ALL the .sim files for each of your 999 frames of simulation.
You can save just the last, say, 5 (or maybe fewer), and restart the sim from the smallest frame number in your cache! This way you won't clog your HD with unnecessary GIGANTIC .sim files, and you'll still be able to resume your sim.

N.4 happened, and I managed to simulate only 362 frames of my 999-frame sim. But I was wise enough to enable the explicit cache option and specify a frame history of 5.
So, if I go into my ...../simcache directory, I'll see these 5 files:

What I usually do is delete the last one. Why? Well... I love Houdini, but I will never trust that it managed to write out the last sim file while my computer was losing power. And you get a pretty good clue by checking the file sizes: they are all ~153 MB, apart from the last one! Mm... suspicious. Delete!

Perfect. Now you have 4 cache files and you're sure they are fully functional.

Now you can restart your sim from frame 358, and Houdini will seamlessly continue simming as if you had started from frame 1.

This saved my <censored> several times already.

Friday, July 19, 2013

(just another) Houdini FLIP water sim

Houdini fountain water sim - FLIP solver test from Alessandro Pepe on Vimeo.

A few days ago I realized the world couldn't survive without my contribution to the countless water simulation tests out there. Honestly, this is no better or worse than many others, but I had so much fun working on it.
I spent about 5 days overall between setup and render.

The workflow is pretty much the same as explained in the SideFX Waterfall tutorials, with some minor changes in the water shader and the creation of a wet map. Furthermore, I paid extra attention to the bubbles underwater.

For the wet map, I used an Attribute Transfer SOP to transfer the "wetness" attribute from the particles generated by the FLIP simulation (after caching, only the ones close to the fountain walls) to a dense point cloud scattered on the fountain. The attribute transfer was performed in a DOP Network (via a SOP Solver), in order to preserve the previous state.

Initially I created 2 different wet maps:
  • WETNESS - a wet map that dries very quickly and reveals only the water specular on the fountain shader.
  • DARKNESS - identical in shape to the previous one, but it dries way, way slower and is in charge of keeping the fountain shader slightly darker.
Eventually I ended up using only the "DARKNESS" wet map, because with my light position the water unfortunately didn't reveal any wet part of the fountain, so... useless! But I thought it was a good idea to illustrate both wetness (in cyan) and darkness (in blue) in the pass below.
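The "preserve the previous state" part is the heart of both maps: every solver frame, points near a water particle get re-wetted, and everything else dries a little instead of being reset. A toy Python version of that accumulate-and-dry loop (the point positions, radius and dry rate are made up for the example):

```python
# Toy version of the wet-map solver logic: each frame, surface points close
# to a water particle become fully wet; the rest dry out a little.
# The SOP Solver is what preserves "wet" from the previous frame.

def wetmap_step(surface_pts, water_pts, wet, radius=0.5, dry_rate=0.05):
    new_wet = []
    for p, w in zip(surface_pts, wet):
        near = any(
            sum((a - b) ** 2 for a, b in zip(p, q)) <= radius ** 2
            for q in water_pts
        )
        # re-wet if water is close, otherwise dry out slowly
        new_wet.append(1.0 if near else max(0.0, w - dry_rate))
    return new_wet

surface = [(0.0, 0.0, 0.0), (5.0, 0.0, 0.0)]
wet = [0.0, 1.0]
wet = wetmap_step(surface, water_pts=[(0.1, 0.0, 0.0)], wet=wet)
```

A slow dry rate gives the "DARKNESS" behavior, a fast one the "WETNESS" behavior.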

Houdini Wet map from Alessandro Pepe on Vimeo.

Monday, June 24, 2013

stuff growing and crawling - Venations - HDK version

I know, it's time to move on; enough with this growing and crawling stuff, but... I just can't! It's too much fun to see those tentacles growing and reaching out, drawing crazy shapes in virtual space. The problem with the previous version was that the algorithm is quite heavy, since it relies on two nested loops (for each seed, search for the closest root): it was too slow.
So I decided it was about time to learn a bit of HDK and implement a prototype in C++.
This version is about 30 times faster and allows real-time feedback for structures far more complex than the OTL version could handle.
In this video I show how to use this SOP node to create a simple tree and, in the second part of the video, how to paint venations on a polygonal head.

Venation System - HDK version - demo from Alessandro Pepe on Vimeo.

The venation algorithm is based on this paper by Adam Runions.

Wednesday, June 12, 2013

HDK Learning Notes - quick start on Linux

(tested on Linux - CentOS 6.4 and Ubuntu 13.04)

Install compiler and libraries (as mentioned here)

$ sudo yum install tcsh gcc-c++ mesa-libGL-devel mesa-libGLU-devel Xi-devel
(if yum cannot find Xi-devel, try libXi-devel)
$ sudo apt-get install libgl-dev libglu-dev libxi-dev

To initialize houdini environment variables and commands:

$ cd /opt/hfs12.5.371/
$ source houdini_setup
The Houdini 12.5.371 environment has been initialized.

Now you'll be able to use the command houdini, houdinifx, hcustom, etc.
The command hcustom is a little wrapper around g++ that takes care of finding the Houdini SDK headers and linking against the correct libraries:

$ hcustom SOP_mynode.C

... if we need OpenVDB (see below for installing the OpenVDB headers):

$ hcustom -I /tmp/OpenVDB/include/ SOP_mynode.C

When you try to compile the first time you might get this error:

/usr/bin/ld: cannot find -lXi

which might be related to the fact that yum wasn't able to install the Xi-devel library in the first step. If that happens, try this:

sudo yum install libXi-devel


If you plan to use OpenVDB, you have to install the OpenVDB headers, because they are not shipped with Houdini (so far).
Download the library and unzip it somewhere (for instance in /home/$USER/Download).

$ cd /home/alex/Download/openvdb
$ make clean
$ make install

... this will install the libraries into /tmp/OpenVDB

You might get the following errors:


io/ error: zlib.h: No such file or directory
  • edit the file under /openvdb/io/ named in the error
  • change the line
    #include <zlib.h>
    to
    #include </opt/hfs12.5.371/toolkit/include/zlib/zlib.h>

cmd/openvdb_view/Viewer.h:50:21: error: GL/glfw.h: No such file or directory
  • edit the file /openvdb/Makefile
  •  search for the line starting with "install :"
    and remove vdb_view from it (it's on the same line)

/bin/bash: doxygen: command not found
  • $ sudo yum install doxygen