Introduction
This documentation is intended for Pipeline TDs or artists skilled with Hython who are looking to build custom tools for submitting to GridMarkets (GM). The v2.0 pipeline API is being developed with the specific use cases of studio TDs in mind. The pipeline code is closed source, aside from a handful of modules which are intended to be the interface for you to work with to directly modify how/what is processed on your system. There may be times when adding information to these modules is required to make new features you design work with the pipeline, but everything you need should be covered within them. This document is intended to be a starting point for development. Please use the dir() function on the modules to interrogate member functions and properties, and experiment with what is output.
Here is a short list of what will be focused on in this documentation:
- General API architecture for data flow
- Caveats and information regarding the current pipeline state
- Classes provided in the API for inheritance when developing custom submission solutions
- Helper functions provided within the API that will be used to simplify development
- A brief example of a minimal submission tool which is capable of submitting multiple jobs to the GM platform
This document is a pre-release alpha version. The pipeline is in the early stages of active development and many of its parts are in a constant state of flux. Please use the information presented here with caution and interrogate the library to ensure that the functions mentioned and the requirements for custom classes are still accurate. Things may be modified, deprecated, or removed at any time. We will do our best to update this doc when things change. Once development is in a more solid state, official permanent documentation will be compiled and provided.
Pipeline Architecture
The architecture for this pipeline is designed to have data flow in a single direction. The general flow stages are as follows:
- Node level (contextualized)
- Job level (contextualized)
- Project (scene wide singleton)
- Preflight (separate environment)
- Submission (separate environment)
Generally speaking, when developing new tools, the final two stages should not be of concern. The purpose of stages 1, 2, and 3 is to collate and structure all data from a scene into a correct, digestible format for the final two stages. As long as those stages are constructed properly, the final two should process without issue.
The first two stages comprise the bulk of the work that needs to be done to develop new tools. Both of the interfaces for these stages are set up as Abstract Base Classes (ABCs) and have a handful of methods and properties which must be defined for any new contextualization. Aside from those properties and methods, how the data is processed and updated is up to you. There are a few things within the pipeline that need to be formatted in specific ways. We are attempting to build helper functions or objects to streamline and simplify the development of custom tools wherever such data is needed.
The project is the final piece of the puzzle for developing custom tools. This is the interface for aggregating all job information, as well as setting project level variables for submission to the GM platform. The object is a singleton, so only one copy of the object can exist within a scene. This allows you to have multiple tools active in a scene adding jobs to a project for submission. Once all tools have added their jobs, the submission can be processed.
Currently, the Project object has no method for unloading/deleting on scene change, so loading a new scene after a project is created will cause erroneous information to be propagated. It is recommended that you close Houdini and load a new instance rather than using File > Load. Otherwise, you would have to make sure your tools delete the Project singleton or clean up any lingering data from the previous file before attempting to work with it. At this point it should be as simple as replacing project.jobs with an empty list (see the sketch below), but other ghosts may remain as development continues and the project interface becomes more complex.
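If you do go that route, here is a minimal cleanup sketch, assuming your tool still holds a reference to the singleton (the gmProject name is an assumption):

    # Hypothetical cleanup after File > Load: drop jobs collected from the
    # previous scene so they are not submitted along with the new one.
    gmProject.jobs = []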
GridMarkets Pipeline Objects
The core interfaces for the v2.0 pipeline are all housed within the utility module. The ones you need to concern yourself with are utility.node, utility.job, and utility.project. Also housed in this module are the utility.func_library, utility.objects, and utility.environment modules. The first you will definitely be using, the second you might use, and the last may come up in your development. There are properties already provided on utility.node and utility.job for accessing the environment module. The environment is also a singleton.
When inspecting pipeline objects, the following conventions are held across the whole pipeline.
- Any method which starts with a _ is one of two things:
- A contextualization function which is utilized by a method of the same name without the _
- These will always be abstract methods
- A pipeline layer helper function
- These may be defined either within the gm_node object or in the contextualization, so check both layers when interrogating contextualizations of the ABC
- Any method which does not start with a _ is intended as a primary interface function
- When starting to develop new tools, begin with these functions to see if they cover your needs.
- The access to helper functions is intended for debugging what is going on inside the primary interfaces as well as giving a means to modify them by overloading in your custom tool.
- Properties follow the same general logic as methods
- Properties which begin with _ are intended as properties which are utilized internally to the layer to which they belong
- Properties without the _ are intended as primary interface or information properties. This document will attempt to elucidate the ones that are immediately useful for development.
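As a concrete starting point for that interrogation, here is a small sketch which splits the members of gm_node by the underscore convention described above:

    # Separate primary interface members from layer helpers on gm_node.
    from utility.node import gm_node

    primary = [m for m in dir(gm_node) if not m.startswith("_")]
    helpers = [m for m in dir(gm_node) if m.startswith("_") and not m.startswith("__")]
    print("primary interface:", primary)
    print("helpers:", helpers)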
utility.node.gm_node
The gm_node is an ABC. When creating a new tool from scratch, inherit this class to access the interface. There is currently one abstract method and one abstract property which must be included on any new node definition for it to be valid.
Init Arguments
- hou.OpNode
- Source node for the data being submitted, typically the output node for whatever network is being submitted
- ex. The LOPs Submitter uses the Render Settings node as this argument. The sops.SOPNode.SOPNode object takes any SOP node.
Abstract Methods
- _parseFileTree()
- This is the contextualization function for parsing the files which will need to be uploaded with respect to this node. For example, in the LOPs contextualization, this method parses the USD stage to find all file references. In SOPs, it parses the upstream network for any nodes which are able to import files and captures the patterns which import files from disk.
- The return is processed further by gm_node.parseFileTree() to complete the collation and structuring of the file data for submission, so the return structure is required.
- Must return: tuple(set(), dict())
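A minimal skeleton sketch of the contextualization (the file discovery logic is entirely tool-specific and elided here):

    # _parseFileTree must return (set_of_file_paths, nested_dict_tree).
    def _parseFileTree(self):
        files = set()   # flat set of file paths to upload
        tree = dict()   # nested dict, typically built with func_library helpers
        # ...walk self.node's network and populate files/tree here...
        return (files, tree)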
Abstract Properties
- warnings
- This property is used by the preflight to issue scene warnings before submission. The structure of the warnings will be covered in their own document, so for now just define it to return an empty list
- Return: list()
Notable Methods, Properties, and Subobjects
The only class method to be aware of for gm_node is parseFileTree(). This is the workhorse method for figuring out what files are to be uploaded. The contextualization function for this is _parseFileTree(), which was covered above.
The helper function _parseSops() has been deprecated in favor of sops.SOPNode.parse().
The _render_aovs helper property is a generalization which looks for a property on the node object of the format _{generalTypeReturn}_aovs. It is currently only implemented for Redshift via the _redshift_aovs property. This property will return a list of utility.objects.WatchFile objects. If your tool is dealing with renders which produce multiple output files, you will need to create one of these properties (see the sketch below). Note that generalTypeReturn comes from the constants.NODETYPES.general_type() function. The link to documentation pertaining to editing this was provided in the introduction.
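As an illustration only, here is a sketch of what such a property might look like for a hypothetical renderer whose general type resolves to "arnold" (the name and the elided collection logic are assumptions):

    # Hypothetical _{generalTypeReturn}_aovs property picked up by _render_aovs.
    @property
    def _arnold_aovs(self):
        aov_files = []
        # ...append one utility.objects.WatchFile per extra AOV output here...
        return aov_files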
The gm_node also contains interfaces which automate the collation and collection of pertinent ancestor and child nodes for Houdini networks. They are accessible via gm_node.ancestors and gm_node.children. These subobjects only have partially procedural implementations, but they are modifiable and extendable if the need arises. Many of the processes in the node interface rely on these subobjects to inform what is being interrogated, so it may be useful to poke at them on your own to see how they work and what can be done to extend them with your own code if something you need is missing.
utility.job.job
The job object is an ABC. When creating a new tool from scratch, inherit this class to access the interface. There are currently three abstract methods and one abstract property which must be included on any new job definition for it to be valid.
Init Arguments
- Custom object inheriting from utility.node.gm_node
- hou.OpNode
- This is the custom submission node you are creating where the code is called.
Abstract Methods
- _jobDefinition()
- This method sets a variety of properties which will be used to define the job for passing along to GM via Envoy. A detailed document will be compiled for all of the job definition properties, but for the moment these are the absolutely necessary properties along with their value types
- output_format : str(file_ext)
- frames : str("start end skip")
- output_height : int()
- output_width : int()
- operation : "render" OR "batch"
- rop_nodepath : str(path/to/node)
- The "rop" in this is a legacy holdover from Render Submit, just put the path to whatever node you used for defining the gm_node object.
- rop_nodetype : str()
- This can be any lowercase string, but should be the renderer name if available, else use the context from which you are submitting
- app : str("hou")
- Return : None
- _serialize()
- Currently unutilized, but it will be in the future
- Return : dict()
- _setWatchFiles()
- This function creates WatchFile objects and adds them to the list held by self.watch_files
- WatchFile objects are used by the pipeline to define files which will be downloaded from our servers after a job completes.
- Please see the utility.objects.WatchFile docs below for more details on the proper creation of these objects
- Return : None
Abstract Properties
- _upload_files
- Deprecated in favor of the _file_tree property (which will be an abstract method in the next release of the pipeline)
- Just create the property and set it to an empty set() object
- Return : set()
Notable Methods and Properties
As mentioned above, the _file_tree property is what adds job-level files for upload. This includes things like OCIO configs, USD files to render with husk, etc. The return from _file_tree should be list( tuple( hou.Parm|None, str(filepath) ) ). If your tool uploads any files that are defined on the submitter HDA you are creating, this property will need to handle them.
Following the above, the file_tree property is an interface which combines the gm_node.file_tree with any job-level additions which need to be made. Ultimately, this is the data the preflight uses to know which files need to be uploaded. It is not advised to edit the dictionary this creates; to add things to it, please use the job._file_tree overload. file_tree can be called to inspect the data coming from the lower layers and ensure they are yielding what you expect (see the sketch below).
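A quick inspection sketch (gmJob here is an assumption; the tutorial below shows how such a job object is created):

    # Inspect what the preflight will see before submitting.
    import pprint
    pprint.pprint(gmJob.file_tree)    # combined node + job level upload data
    pprint.pprint(gmJob._file_tree)   # job level additions only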
The setJobEnv(dict()) method will be used if you need to set environment variables for your job. These variables may define custom $VARIABLE values you have in your scene. Notably, OCIO on GM will likely need this to be done. This is the scheme for the dict argument:

    {"variableName1": "variable value 1",
     "variableName2": "variable value 2"}
utility.project.project
This is a singleton object which persists between scene loads. Generally speaking, there are only a handful of properties and methods pertinent to development:
- add_job(utility.job.job|[utility.job.job])
- Adds jobs to the project.jobs list. Can be passed a single job object or a list of job objects.
- preflight()
- This is the final submission call. Once everything is added and set up, call this to continue the submission process.
- .jobs
- The list of jobs associated with the project. Currently able to be edited, though future development may close this functionality. Please use add_job() to insert new jobs to the list to ensure that future development does not break tools.
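A short usage sketch (the job variables are assumptions):

    # add_job() accepts a single job or a list of jobs.
    gmProject.add_job([gmJobA, gmJobB])
    gmProject.preflight()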
Helper Functions
utility.func_library
Many of these functions may be useful for custom tools; a handful will be invaluable. I am going to section these by task/use case for easier reference.
- Editing the gm_node.file_tree
- leafFromParm(hou.Parm, str() [optional]) and leafFromNode(hou.OpNode, str() [optional]):
- These two functions take the parm or node for which a file or directory is being added to the tree and properly format it for use with dictByPath to build the correct structure. Generally speaking, leafFromParm will be preferred. (See the sketch after this list.)
- dictByPath(list(), Any):
- This will convert the path containing list into a nested dictionary with each key being a layer in the path. The second argument will be set as the value of the deepest dictionary.
- update(dict(), dict()):
- This will parse over both dictionaries to find the point of divergence, then update the dictionary in the first argument with the new data from the second.
- The first dict is updated by reference, so no return is made.
- Collecting a set of files on disk for a given string/parm
- expandHoudiniVariables(unexpanded_path: str, regex: constants.RE.*, parm: hou.Parm [optional], replacement: str [optional], force_pattern: bool [optional]):
- This is a workhorse function which takes in an unexpanded string value, usually obtained with hou.Parm.rawValue(), and expands most channel references, Houdini variables, UDIM variables, backtick evaluations, and other methods available to artists into a usable list of files.
- The replacement argument is used to substitute whatever the regex argument finds in the final path.
- Function returns a set of file paths on disk which match the expansion derived from unexpanded_path or, if no files are found or the force_pattern argument is True, a set containing the file and directory expansion with the replacement from the provided regex made.
- filecacheNodes, parmsThisReferences, collapseFromVariable, _expandVariables, _expandBackticks, _resolveTurePathForBackticks, and _replaceUDIM are all helper functions for expandHoudiniVariables(). If issues arise within the execution of this function, use these to check how things are being processed. Please make a ticket with GM Support to get more details on arguments and the order in which these helpers are used.
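Putting the tree-editing helpers together, a minimal sketch, assuming it runs inside a gm_node contextualization where self.node is available (the parm name is made up):

    # Hypothetical: add one file parm's value to a nested file tree.
    from utility.func_library import leafFromParm, dictByPath, update

    tree = dict()                                     # tree destined for _parseFileTree's return
    parm = self.node.parm("texture_file")             # assumed parm name
    leaf = leafFromParm(parm)                         # path list formatted for dictByPath
    addition = dictByPath(leaf, parm.evalAsString())  # nested dict; deepest value is the path
    update(tree, addition)                            # merged in place; update() has no return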
utility.objects
- WatchFile
- This object defines what files should be monitored on the servers to be returned.
- Generally they are created by the already existing interfaces, but if you are defining custom files on your submission node, your code will need to create these objects for those files.
- Arguments:
- output_pattern: A string which contains the path to the file appended with "/.+"
- download_path: A string which contains the full file path to the expected files. Replace any frame variables with the * char.
- utility.func_library.expandHoudiniVariables(rawString, constants.RE.perFrameSpec, replacement="*", force_pattern=True) will return the string with the proper expansion and formatting done.
- parm: hou.Parm|str from which the watch file is derived. Ideally, this will be a hou.Parm, but any str should be a path to a parm or node in the scene.
- This is used by the preflight for linking watch files back to where they originated in the scene.
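A sketch of building one by hand, assuming a single pattern comes back from the expansion (the parm path is made up):

    # Hypothetical WatchFile for an output image parm.
    from utility.objects import WatchFile
    from utility.func_library import expandHoudiniVariables
    import constants

    parm = hou.parm("/out/karma1/picture")          # assumed output parm
    expanded = expandHoudiniVariables(parm.rawValue(), constants.RE.perFrameSpec,
                                      replacement="*", force_pattern=True)
    download_path = next(iter(expanded))            # the return is a set; assume one entry
    output_pattern = download_path.rsplit("/", 1)[0] + "/.+"
    watch = WatchFile(output_pattern, download_path, parm)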
Tutorial
All code is assumed to be in the Python Module. If you have not modified your install to utilize a package file, please copy the General GM code from the LOPs submitter to the bottom of your module to handle making the python libraries available.
This code is a rough minimal example of a fully custom submission tool for GM. Any parm references are made up, the WatchFile in customJob._setWatchFiles() is not defined properly, etc. Anything that is not specifically code is most likely a joke at the end of this long doc.
    from utility.node import gm_node
    from utility.job import job
    from utility.project import project
    from utility.objects import WatchFile
    from utility.func_library import leafFromParm, dictByPath, update

    class customNode(gm_node):
        def __init__(self, node: hou.OpNode):
            super().__init__(node)
            self.warnings = []  # abstract property stand-in; no warnings yet

        def _parseFileTree(self):
            fileSet, fileTree = self.magic_function()
            return (fileSet, fileTree)

        def magic_function(self):
            leaf = leafFromParm(self.node.parm("something"))
            dictPath = dictByPath(leaf, self.node.parm("something").evalAsString())
            return (set(["magically", "filled", "list", "of", "files"]), dictPath)

    class customJob(job):
        def __init__(self, source_node: customNode, submitter_node: hou.OpNode):
            super().__init__(source_node, submitter_node)
            self._upload_files = set()  # deprecated abstract property; keep it empty

        def _jobDefinition(self):
            # Yes, just grabbing the last 4 characters of the string parm for the extension
            self.output_format = self.source_node.node.parm("something").evalAsString()[-4:]
            # frames is a "start end skip" string, so the parm ints must be stringified
            self.frames = " ".join(str(p.evalAsInt()) for p in (self.submitter_node.parm("f1"),
                                                                self.submitter_node.parm("f2"),
                                                                self.submitter_node.parm("f3")))
            self.output_height = self.submitter_node.parm("res1").evalAsInt()
            self.output_width = self.submitter_node.parm("res2").evalAsInt()
            self.operation = "render"
            self.rop_nodepath = self.source_node.node.path()
            self.rop_nodetype = "custom"

        def _serialize(self):
            return dict()

        def _setWatchFiles(self):
            # Placeholder arguments; see the utility.objects.WatchFile section above
            newWatchFile = WatchFile("Properly", "define this", self.submitter_node.parm("please"))
            self.watch_files = [*self.watch_files, newWatchFile]

        @property
        def _file_tree(self):
            # No job level files in this example; otherwise return a
            # list( tuple( hou.Parm|None, str(filepath) ) )
            return []

    def customSubmissionFunction(kwargs):
        node = kwargs['node']
        target = node.input(0)
        gmNode = customNode(target)
        gmJob = customJob(gmNode, node)
        gmProject = project(hou.text.expandString("$HIP"),
                            node.parm("project_name").evalAsString(),
                            node.parm("submission_name").evalAsString(),
                            hou.text.expandString("$HIP/render"))
        gmProject.add_job(gmJob)
        gmProject.preflight()
That should just about get you there. You would then use customSubmissionFunction as the callback on a button on your tool, and everything should "work". Obviously, things like the WatchFile would need to be properly defined and the correct parms would need to be targeted, but hopefully that example gets you rolling.
Good luck, and we hope you find the new API easy to customize and use when integrating GridMarkets into your pipeline.