Matt Pharr


matt.pharr@gmail.com

I'm a research scientist at NVIDIA Research, where I work on real-time ray tracing and machine learning for rendering.

I was previously at Google Brain; before that, I led the VR light field capture project in the Daydream group, where my team also built Seurat. Earlier, I worked on computational photography at Google[x].

Before Google, I co-founded Neoptica (acquired by Intel) and Exluna (acquired by NVIDIA). During those years I worked on both offline and real-time rendering and also spent a fair amount of time developing programming models and compilers for various “interesting” architectures (GPUs, heterogeneous CPU+GPU systems, and then CPU SIMD units).

My book on rendering, Physically Based Rendering, is widely used in university courses and by graphics researchers and developers. Greg Humphreys, Pat Hanrahan, and I were awarded an Academy Award in recognition of the book's impact on CGI in movies; it is the first book ever to receive one. The third edition was released in the fall of 2016.

I have a Ph.D. in Computer Science from the Stanford Graphics Lab and a B.S. in Computer Science from Yale.

Selected Projects


Physically Based Rendering

Wenzel Jakob, Greg Humphreys, and I wrote a textbook on rendering, Physically Based Rendering: From Theory to Implementation (book's website). The book has been used as the primary textbook in more than seventy advanced rendering courses at over twenty universities. The accompanying software has been used in over seventy peer-reviewed research papers. Greg, Pat Hanrahan, and I were awarded an Academy Award by the Academy of Motion Picture Arts and Sciences for this work; this is the first book ever to receive this award. (Check out Kristen Bell and Michael B. Jordan on the book's merits.)

Light fields for VR

In 2014, I started the light field project in Google's Daydream group, assembling a fantastic team that embarked on a multi-year effort to solve many hard problems in capture, processing, compression, and real-time display of light fields for VR. (Light fields provide the best known way of capturing real-world environments for VR, giving both accurate stereo and translational parallax within the viewing volume.) Some of what we built has just been released in a demo app that is now freely available on Steam.

Seurat

As a spin-off from the light field project, we also developed technology to make it possible to view film-quality computer-generated imagery in VR: it takes multiple images of a scene generated by a high-quality offline renderer and transforms them into a form that can be rendered accurately with full 6 DoF head motion, even on VR HMDs driven by low-powered mobile GPUs. We worked with ILM to introduce this technology to the world with some gorgeous Rogue One content, first showing Seurat at Google I/O in 2017. The game Blade Runner: Revelations made extensive use of the technology. More recently, our paper on Seurat won the third-place best paper award at HPG 2018.

ispc

A substantial fraction of the available performance in modern CPUs and GPUs comes from their SIMD hardware. Programming models for GPUs make the architectures' SIMD-ness mostly transparent to programmers thanks to their adoption of the “single program multiple data” (SPMD) programming model, though this approach hadn't been applied to SIMD on CPUs. I wrote a compiler for a C-based language that makes it easy to write SPMD programs for the CPU; it was released as ispc, which is now open source and on GitHub. I wrote a paper about the system with Bill Mark, ispc: A SPMD Compiler for High-Performance CPU Programming (InPar 2012, Best Paper Award). I also gave a talk about ispc in the Illinois-Intel Parallelism Center Distinguished Speaker Series (UIUC) on March 15, 2012 (talk video). See also Pixar's tech note that encourages the use of ispc in place of the RenderMan shading language.
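
To give a flavor of the programming model, here is a minimal sketch in ispc's C-based syntax; the function and its parameters are just an illustration rather than code from the ispc distribution. The body is written as if for a single program instance, and the compiler maps a gang of instances onto the CPU's SIMD lanes.

    // Sketch of an ispc kernel: `export` makes the function callable from
    // C/C++, `uniform` marks values that are shared across the whole gang
    // of program instances, and `foreach` distributes the loop iterations
    // over the SIMD lanes.
    export void scale(uniform float vin[], uniform float vout[],
                      uniform int count, uniform float k) {
        foreach (i = 0 ... count) {
            // Each program instance handles one element.
            vout[i] = k * vin[i];
        }
    }

Compiled with ispc, this becomes an ordinary function that C or C++ code can call to process the whole array using the target's vector instructions.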

Intel Advanced Rendering Technologies Group

After the Neoptica acquisition, I was the technical lead of the Advanced Rendering Technologies Group; we were working on a number of projects focused on building software that made it possible for graphics developers to make the most of Larrabee's unique capabilities—this included both a compiler for a new shading language and an extended (and extensible) software rasterization pipeline. Throughout this work, I gave numerous public talks and met with many graphics developers to discuss Larrabee. At Intel, I also led technical due diligence for a number of Intel's graphics acquisitions (both considered and executed).

Neoptica

I was a founder and the CEO of Neoptica, which worked on new programming models for graphics on heterogeneous CPU+GPU computer systems. After a first round of funding and growing to eight people, Neoptica was acquired by Intel in the fall of 2007. I gave a keynote at ACM/Eurographics Graphics Hardware 2006 that got some attention; it outlined some of the context behind our goals at Neoptica. Relatedly, my talk The Quiet Revolution in Interactive Rendering at the Stanford EE Computer Systems Colloquium in November 2005 discussed some of the trends in graphics that influenced Neoptica's work.

Stanford cs348b

I recently had a great time teaching the 2013 and 2014 installments of cs348b, the graduate-level rendering course at Stanford. I was also fortunate to teach the 2003 class. The excellent results from the rendering competitions at the end of the course are online: (2003) (2013) (2014).

GPU Gems 2

While at NVIDIA, I edited the book GPU Gems 2: Programming Techniques for High-Performance Graphics and General Purpose Computation. The first half of the book comprises twenty-four chapters on the state of the art in interactive rendering, and the second half is devoted to general purpose computation on graphics processors (GPGPU); it was the first book to cover that topic.

Exluna

Craig Kolb, Larry Gritz, and I co-founded Exluna in 2000; we saw that forthcoming programmable GPUs would make new interactive content creation tools possible and started Exluna to pursue this area. Our first product, Entropy, was an offline renderer; it was used in a number of movies—most notably by ILM for an exploding spaceship sequence in Attack of the Clones. NVIDIA acquired Exluna in 2002.

Scattering Equations for Light Transport

My thesis, Monte Carlo Solution of Scattering Equations for Computer Graphics, developed a theoretical framework for rendering that takes scattering, rather than light transport, as its basic abstraction. One of its contributions was a rigorous formulation of scattering from layered surfaces. Pat Hanrahan was my advisor.

Memory-Coherent Ray Tracing

In my first few years of graduate school, I worked on algorithms for ray tracing scenes that were too complex to fit into memory; this led to two main papers, Geometry Caching for Ray-Tracing Displacement Maps (Eurographics Workshop on Rendering), and Rendering Complex Scenes with Memory-Coherent Ray Tracing (SIGGRAPH). Though this work focused on out-of-core rendering, the core concepts it introduced—having many active rays and selecting rays based on which parts of the scene they will access—have been at the foundation of subsequent hardware ray-tracing architectures.

Pixar Rendering R&D

I worked in the Rendering R&D group at Pixar during graduate school; my main contributions were significant improvements to occlusion culling in RenderMan and a rewrite of all of the code for NURBS and parametric patches and curves to improve numerical robustness and the accuracy of dicing rates. (The dicing-rate improvements also significantly improved performance by reducing the excessive shading calculations caused by over-tessellation.) For this work, I have movie credits for A Bug's Life and Toy Story 2. (Using a very loose definition of the Bacon number, this means that I have an Erdős–Bacon number of 6.)

Various Other Things


TOG Special Issue on Production Renderers

I recently guest-edited a special issue of ACM Transactions on Graphics on production rendering. The developers of five widely-used renderers wrote comprehensive systems papers, describing the challenges they face, the constraints they work under, and the solutions they've developed. (RenderMan, Manuka, Sony Arnold, Solid Angle Arnold, Hyperion). I wrote a short introduction that discusses the transition from Reyes to path tracing.

CACM Technical Perspective

The Communications of the ACM published a paper about NVIDIA's nifty OptiX system for high-performance ray tracing on GPUs. I wrote a technical perspective, The Ray-Tracing Engine That Could, that introduced the article and helped frame the work's achievements for a non-graphics audience.

GPU Computational Finance

In the stone ages of programmable GPUs (2004), I realized that computational finance was a good fit for GPU computing, given both a target market interested in high performance and the arithmetic intensity of many of the computations. I wrote a chapter with Craig Kolb about Options Pricing on the GPU for GPU Gems 2. For better or for worse, GPU options pricing continues to be a poster child for GPU computing.
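
To give a rough sense of what those computations look like, here is a small sketch in plain C of the Black-Scholes closed-form price for a European call option; the helper names are my own, and the GPU Gems 2 chapter itself, written before CUDA existed, expressed the computation through the GPU's graphics pipeline rather than in C. Each option is priced independently with a handful of transcendental operations and very little memory traffic, which is why large batches of options map so naturally onto GPUs.

    #include <math.h>

    /* Standard normal CDF, written in terms of the complementary error
     * function from math.h. */
    static double norm_cdf(double x) {
        return 0.5 * erfc(-x / sqrt(2.0));
    }

    /* Black-Scholes price of a European call option: spot price S, strike K,
     * risk-free rate r, volatility sigma, and time to expiry T (in years). */
    double black_scholes_call(double S, double K, double r,
                              double sigma, double T) {
        double d1 = (log(S / K) + (r + 0.5 * sigma * sigma) * T)
                    / (sigma * sqrt(T));
        double d2 = d1 - sigma * sqrt(T);
        return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2);
    }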

skicka

I wrote a command-line tool that makes it easy to work with files and directories on Google Drive (including uploading and downloading, listing files in folders, etc.). Google was happy to let me open source it; it is now available on GitHub. (“skicka” is Swedish for “to send”, which vaguely alludes to what the tool does.)

Talks

I've given some talks. Most recently, I gave a keynote at i3d 2017 (see also some notes about the talk's content; the slides aren't really digestible on their own). There have been a number of others; they will arrive here eventually.

Committees, etc.

I've served as Conference Chair (2010), Program Chair (2014), and Papers Chair (2009) for ACM/Eurographics High Performance Graphics, one of my favorite conferences. I'm on the editorial board of The Journal of Computer Graphics Techniques.