
About / Context:

Envy is a distributed rendering and caching tool I created for my peers at Gnomon. It handles:

  • rendering with V-Ray, Arnold, Redshift, Karma, and RenderMan.

  • rendering USD scenes with any Hydra delegate through SideFX's Husk utility.

  • caching Houdini simulations.

At Gnomon we don't have a dedicated render farm or automatic rendering solution. Gnomon also doesn't allow students to remote into or control computers over the network (this rule is pretty nebulous, but I think the intention is to keep network traffic low). And because Gnomon is an environment where students can sit anywhere and work on any computer, there is no guarantee that a computer you logged into at 9 AM to render won't be taken by someone else at noon.

Note: In the time since I created Envy, I have rewritten it and put it on GitHub. Check it out here!

Goals:

When I began work on Envy I needed to take into account the environment and limitations we have at Gnomon, so I made this list of criteria:

  1. Be as unobtrusive as possible

  2. Keep network usage to a minimum

  3. Prioritize stability. I would rather a render crash than lose the ability to communicate with an Envy instance.

Process:

All of these criteria molded the final product, but not being able to directly communicate with computers was by far the largest hurdle to overcome. My solution was to leverage the personal network drive partition each student is assigned (called the Z:/ drive). While Envy is running, it periodically checks an "Instruction File" located on the student's Z:/ drive and executes whatever commands it finds. Because the user may not always want all computers to execute a command, a quick regex check is performed to verify that "I" should run this command. So telling Lab3 computer 3 to render looks like: lab3 03 render(), and telling all computers to say "hello world" looks like: * print('hello world'). I created a command line interface to act as an intermediary between the user and the instruction file; it performs some basic syntax checks and can help format more complicated commands. I call this command line interface Envy IO.
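Here's a rough sketch of that polling loop. The file path, poll interval, and function names are my illustration here, not Envy's actual code:

```python
import re
import socket
import time

# Illustrative names and paths -- not Envy's actual code.
INSTRUCTION_FILE = "Z:/envy/instructions.txt"
POLL_INTERVAL = 10  # seconds between checks, to keep network traffic low


def parse_line(line):
    """Split an instruction line into (target, command).
    '* print("hello world")'  -> ('*', 'print("hello world")')
    'lab3 03 render()'        -> ('lab3 03', 'render()')
    """
    if line.startswith("*"):
        return "*", line[1:].strip()
    lab, computer, command = line.split(maxsplit=2)
    return f"{lab} {computer}", command


def targets_me(target):
    """'*' targets every computer; otherwise match the target tokens
    against this machine's hostname with a quick regex."""
    if target == "*":
        return True
    pattern = ".*".join(re.escape(token) for token in target.split())
    return re.search(pattern, socket.gethostname(), re.IGNORECASE) is not None


def poll_forever(execute):
    """Main loop: read the instruction file, run whatever targets us."""
    while True:
        try:
            with open(INSTRUCTION_FILE) as f:
                for raw in f:
                    line = raw.strip()
                    if not line:
                        continue
                    target, command = parse_line(line)
                    if targets_me(target):
                        execute(command)
        except (FileNotFoundError, ValueError):
            pass  # no instruction file yet, or a malformed line
        time.sleep(POLL_INTERVAL)
```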

There were some pros and cons to working this way. Because each Envy instance is its own self-contained thing, I didn't have to worry about one sign-out affecting some sort of larger network. A major con, though, was that no computer knew about the existence of any other computer.

To solve that problem, each Envy instance creates a ping log which it periodically updates. Other Envy instances can then check the modified time of that file to determine whether a computer has been signed out. A bit janky, but it works! The logs look kind of like this:

Envy Logs
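The mechanism boils down to something like this (path and staleness threshold are illustrative):

```python
import os
import time

# Illustrative path and threshold -- the real layout differs.
PING_DIR = "Z:/envy/pings"
STALE_AFTER = 60  # seconds without an update before a machine is presumed gone


def touch_ping(hostname):
    """Update this machine's ping log so peers can see it is still alive."""
    path = os.path.join(PING_DIR, f"{hostname}.log")
    with open(path, "a"):
        pass  # create the file if it doesn't exist yet
    os.utime(path)  # bump the modified time to "now"


def is_alive(hostname):
    """A peer counts as signed in if its ping log was touched recently."""
    path = os.path.join(PING_DIR, f"{hostname}.log")
    try:
        return (time.time() - os.path.getmtime(path)) < STALE_AFTER
    except FileNotFoundError:
        return False
```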

Envy also has a method of managing its child processes, which I call the Process Handler. It leverages the psutil and subprocess Python modules and allows Envy to run other programs or processes asynchronously while keeping Envy responsive to commands. Each child process Envy launches generates its own set of logs to allow for easier remote debugging.
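A minimal sketch of the idea behind the Process Handler (class and method names are illustrative, not Envy's actual API):

```python
import subprocess

import psutil


class ProcessHandler:
    """Sketch of the Process Handler idea; names are illustrative."""

    def __init__(self):
        self.children = []

    def launch(self, args, log_path):
        """Start a child process without blocking, sending its output
        to its own log file for easier remote debugging."""
        log = open(log_path, "w")
        child = subprocess.Popen(args, stdout=log, stderr=subprocess.STDOUT)
        self.children.append(child)
        return child

    def reap(self):
        """Called from Envy's main loop so it stays responsive: drop
        any children that have already exited."""
        self.children = [c for c in self.children if c.poll() is None]

    def terminate_all(self):
        """Kill each child and its descendants -- renderers often spawn
        their own subprocesses, which psutil can walk."""
        for child in self.children:
            try:
                parent = psutil.Process(child.pid)
                for descendant in parent.children(recursive=True):
                    descendant.terminate()
                parent.terminate()
            except psutil.NoSuchProcess:
                pass  # already gone
        self.children = []
```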

Rendering:

Rendering with Envy is a matter of exporting the render engine's own file type (.rs for Redshift, .vrscene for V-Ray, etc.) and pointing Envy at whatever directory those files are in. Envy will then launch the appropriate command line renderer and mark each file as in progress so duplicate frames are not created. Those files, which I will refer to as IFDs (for Instantaneous Frame Description), are marked completed instead of deleted. That way, if the user needs to re-render for any reason, they don't need to go through the process of exporting new IFD files. I have tools that allow the user to easily reset completed files. It's a bit hard to show in action, but in the GIF below you can see 5 computers rendering a series of .rs and .vrscene files.

RenderingFiles.gif
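Under the hood, the claim-and-render logic looks roughly like this. The renderer commands and the marking scheme here are simplified stand-ins; the real CLI invocations take more flags than shown:

```python
import os
import subprocess

# Simplified stand-ins: each engine has its own command line renderer,
# and the real invocations take more flags than shown here.
RENDER_COMMANDS = {
    ".rs": ["redshiftCmdLine"],
    ".vrscene": ["vray", "-sceneFile"],
}


def claim_and_render(directory):
    """Scan a directory of IFDs, claim each un-rendered one by renaming
    it, render it, then mark it completed (never deleted) so the user
    can re-render without re-exporting."""
    for name in sorted(os.listdir(directory)):
        root, ext = os.path.splitext(name)
        if ext not in RENDER_COMMANDS or root.endswith(("_inprogress", "_completed")):
            continue
        path = os.path.join(directory, name)
        claimed = os.path.join(directory, f"{root}_inprogress{ext}")
        try:
            os.rename(path, claimed)  # the claim: only one machine wins the rename
        except OSError:
            continue  # another Envy instance got this frame first
        subprocess.run(RENDER_COMMANDS[ext] + [claimed], check=True)
        os.rename(claimed, os.path.join(directory, f"{root}_completed{ext}"))
```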

I also implemented a progress monitor which allows you to view the progress of each of your renders from a central location. It refreshes in scanlines to reduce flickering!

USD Integration:

For rendering USD scenes, Envy leverages the SideFX Husk utility. This was the simplest way I could support every Hydra render delegate simultaneously. The process of preparing your USD stage to be rendered with Envy is straightforward:

  1. Set up your stage as normal.

  2. Ensure your render settings prim is at the root of your hierarchy and is named rendersettings (a Husk requirement).

  3. ONLY for your top-level .usd file (the one specified in your USD ROP if you are using Solaris): ensure that it writes out a separate .usd file for each time sample you want to render. This is how Envy knows how many frames to render.

Following these steps, you should have a complete USD stage plus a series of files which act as pointers to different time samples. Envy will then take those files and create a series of json files with all of your Husk render arguments encoded.
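The json encoding step amounts to something like this (the directory layout and key names are my illustration, not Envy's actual schema):

```python
import glob
import json
import os


def write_husk_jobs(frames_dir, job_dir, renderer="Karma"):
    """For every per-frame .usd file, write a small json job file
    encoding the arguments Envy will later hand to husk."""
    for usd_file in sorted(glob.glob(os.path.join(frames_dir, "*.usd"))):
        job = {
            "usd_file": usd_file,
            "renderer": renderer,  # any Hydra delegate husk can load
            "settings_prim": "/rendersettings",  # must sit at the stage root
        }
        job_name = os.path.splitext(os.path.basename(usd_file))[0] + ".json"
        with open(os.path.join(job_dir, job_name), "w") as f:
            json.dump(job, f, indent=4)
```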

Caching:

Envy supports distributing Houdini cache jobs, one per computer. The process is pretty simple:

  1. Use my custom HDA to encode which button you want to press into a json file.

  2. Optionally, encode any parameter changes you want made for that job into the same json file.

  3. Envy will then take that json file and communicate with a Hython instance to do whatever you specified (sketched below).
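A job file and the hython side of it might look roughly like this. The key names and paths are my illustration; the real schema is defined by the HDA:

```python
import json

import hou  # available inside hython, Houdini's command line Python

# Illustrative job schema -- the real keys are defined by the HDA.
job = {
    "hip_file": "Z:/projects/fx/sim_v003.hip",
    "node_path": "/obj/sim/filecache1",
    "button": "execute",                   # which button to press
    "parameter_changes": {"substeps": 2},  # optional per-job overrides
}

with open("Z:/envy/jobs/job_001.json", "w") as f:
    json.dump(job, f, indent=4)

# ...and inside the hython instance Envy spins up:
with open("Z:/envy/jobs/job_001.json") as f:
    job = json.load(f)

hou.hipFile.load(job["hip_file"])
node = hou.node(job["node_path"])
for parm_name, value in job["parameter_changes"].items():
    node.parm(parm_name).set(value)
node.parm(job["button"]).pressButton()  # kick off the cache
```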

My custom HDA (NV job submitter) gives users an easy drag-and-drop interface to specify what to cache and any desired parameter changes. After some feedback, I added two features:

  1. A duplicate job button to make it easier to... duplicate jobs.

    • This was actually surprisingly annoying to make. From the research I did, I couldn't find a way to create a new instance of a multiparm and get a handle to it at the same time. So this button creates the necessary new parameters and then iterates through all of the parameters just to get the handles of the ones it created. It can then set those parameters to their intended values. (Dealing with multiparms kind of sucks.)

  2. A custom token ($NVJ) to make it easier to increment parameters for each job. This token starts counting at one and corresponds to the current job number.

    • This was pretty straightforward to implement. I pretty much just replace the $NVJ substring with the current iteration of the job, evaluate the parameter and store the result, then set the parameter back to what it was before substituting the token (see the sketch below).
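In hython terms, the token substitution boils down to something like this (assuming a string parameter; the real handling covers more cases):

```python
import hou


def eval_with_nvj(parm, job_number):
    """Substitute the $NVJ token with the current job number, evaluate
    the parameter, then restore the original raw value."""
    raw = parm.rawValue()
    parm.set(raw.replace("$NVJ", str(job_number)))
    value = parm.eval()
    parm.set(raw)  # put the token back for the next job
    return value
```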

This isn't the most feature-rich wedging system. However, because it's able to press any arbitrary Houdini button and set any arbitrary parameter to any value, the system is really powerful. Below is a demo of what it's like to use the HDA.

Other HDAs:

I created some other HDAs which can leverage Envy.

NV cache:

This HDA is a modified file cache node with some added features:

  • When given as a target for my NV job submitter HDA, a descriptive json file will be saved in the cache directory. This descriptive file contains information about which parameters were changed in this version, as well as the job creation time and the current HIP file (an example descriptor is sketched after this list).

  • When reading back a cache which has a descriptive file, James Robinson's linewriter HDA is used to display the json information to the user.

  • There is also a "set to latest version" button which automatically versions the node so as not to overwrite existing versions.

  • All existing file cache functionality is maintained.
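For illustration, a descriptor might look like this (the key names are my stand-ins, not the HDA's actual schema):

```python
# Illustrative descriptor written alongside a cache version; the real
# key names are defined by the NV cache / NV job submitter HDAs.
descriptor = {
    "version": 3,
    "created": "2024-03-14 21:05",
    "hip_file": "Z:/projects/fx/sim_v003.hip",
    "parameter_changes": {"substeps": 2, "voxel_size": 0.05},
}
```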

NV flipBook:

This HDA is meant to make flipbooking a little bit easier; it really only has one feature.

  • When the user attempts to render over existing files, it will ask whether this is intended or whether they would like to version up.

  • I plan on adding the ability to batch export flipbooks soon.

Distribution:

This is cool and all for me, but I wanted to help my peers as well. Getting people to use Envy turned out to be harder than actually making it. I learned very quickly that a tool that isn't easily usable isn't a tool that people use. So I created some documentation as well as a Discord server for people to report bugs or request features. The documentation looks like this:

I have all of Envy's code on one of Gnomon's central servers. For installation, each user simply needs to grab the Python files which point to that central repository. This allows me to keep my code in a central place to act as a primitive sort of version control (Gnomon doesn't have git), while also allowing users to customize their own configurations and even write their own custom functions for Envy. The main repository looks like this:

Envy Repository

And each users Envy application folder looks like this:

Envy application folder

To stop people from abusing the system, I implemented an Envy lockdown check that each Envy instance runs on startup. It checks whether the user is blacklisted, how many computers the user currently has, and what time it is. If it's before 10 PM (which means classes are still running), the user is limited to 5 computers. After 10 PM they can take as many as they like, and I'm able to change these limits per user. Users can write custom functions in their IO_functions.py or Envy_Functions.py file. These files are imported into the user's Envy instances as IO and NV respectively, so the user can call a custom function like IO.doSomething() or * NV.doSomething(). I gave an example of how to implement custom functions in each of those files, and it looks like this:
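Roughly, a custom function in Envy_Functions.py looks like this (the signature is simplified, and the envy argument is my assumption about what gets passed in):

```python
# Envy_Functions.py -- sketch of a user-defined function (signature simplified).
# Everything defined here gets imported into the user's Envy instances as NV,
# so '* NV.doSomething()' in the instruction file runs it on every computer.

def doSomething(envy):
    """'envy' is assumed to be the running Envy instance, giving custom
    functions access to things like its logger and process handler."""
    envy.logger.info("Hello from a custom Envy function!")
```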
