The IGA campaign features anywhere from 3 to 16 characters per spot. All these CG actors need to drop by the virtual hair salon before they are allowed on set. Here’s what happened to Oceane Rabais and Bella Marinada at this stage.
1 – We always start with the character design made here at SHED as a reference.
2 – We then look on the internet for a real-life reference of what the hairdo could look like. This is only used to capture certain real-life details; since we are going for a cartoonish look, we are not aiming to reproduce the reference exactly. Of course, a picture of a duckface girl is always a plus.
3 – We proceed to create an emitter fitted to the head, from which we emit guide strands with ICE. They get their shape from NURBS surfaces. Those guides are low in number (from 200 to 400), so it’s easy to groom them and later simulate and cache them to disk. The idea is to capture the shape and length of the hairstyle. The bright colors are there to help see what’s going on.
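For readers who prefer code to an ICE tree, here is a toy numpy sketch of that emission step: a few hundred guide roots scattered over a scalp-like cap, each grown into a short strand and given a bright debug color. The real guides are shaped against hand-modeled NURBS surfaces, which this sketch only fakes with a simple droop.

```python
import numpy as np

rng = np.random.default_rng(1)

# Emit a small number of guide roots over a scalp-like cap (the emitter fitted to the head).
n_guides = 300                                    # in the 200-400 range mentioned above
theta = rng.uniform(0.0, 0.45 * np.pi, n_guides)  # polar angle: top of the head only
phi = rng.uniform(0.0, 2.0 * np.pi, n_guides)
normals = np.stack([np.sin(theta) * np.cos(phi),
                    np.cos(theta),
                    np.sin(theta) * np.sin(phi)], axis=1)
roots = normals                                   # unit-sphere "head", radius 1

# Grow each guide outward along the scalp normal, then let it droop toward the tip.
n_points, length = 8, 0.8
t = np.linspace(0.0, 1.0, n_points)[None, :, None]
droop = np.array([0.0, -1.0, 0.0]) * (t ** 2)     # quadratic droop, strongest at the tip
guides = roots[:, None, :] + length * (t * normals[:, None, :] + droop)

# Bright random color per guide, purely for visual debugging of the groom.
colors = rng.uniform(0.3, 1.0, (n_guides, 3))
print(guides.shape, colors.shape)                 # (300, 8, 3) (300, 3)
```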
4 – Next, we clone these strands, add an offset to their position and apply a few ICE nodes to further the styling. These nodes generally include randomizing and clumping, amongst others. We now have around 90 000 strands, and it can go up to 200 000.
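Again as a rough numpy sketch rather than the actual ICE graph, the clone / offset / randomize / clump combination boils down to something like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy guide strands: 300 guides x 8 points x 3 (xyz).
# In production these would come from the groomed ICE guides.
n_guides, n_points = 300, 8
roots = rng.uniform(-1.0, 1.0, (n_guides, 1, 3))
direction = np.array([0.0, -1.0, 0.0])             # guides falling "down"
t = np.linspace(0.0, 1.0, n_points)[None, :, None]
guides = roots + t * direction                     # (300, 8, 3)

clones_per_guide = 300                             # 300 x 300 = 90 000 strands

# 1) Clone + offset: duplicate each guide and jitter its position.
offsets = rng.normal(scale=0.05, size=(n_guides, clones_per_guide, 1, 3))
strands = guides[:, None, :, :] + offsets          # offset applied along the whole strand

# 2) Randomize: small per-point noise so the clones don't look combed.
strands += rng.normal(scale=0.01, size=strands.shape)

# 3) Clump: pull each clone back toward its parent guide, more strongly at the tip.
clump_profile = t[None, ...] ** 2                  # 0 at the root, 1 at the tip
clump_strength = 0.6
strands += clump_strength * clump_profile * (guides[:, None, :, :] - strands)

strands = strands.reshape(-1, n_points, 3)
print(strands.shape)                               # (90000, 8, 3)
```

The clump profile is what makes the clones fan out at the root but gather toward their parent guide at the tip.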
5 – Then we repeat the process for the eyelashes and the eyebrows. During the whole process, the look is tweaked in a fast-rendering scene.
6 – Once happy with the results, we copy the point clouds and emitters to the “render model”, where the point clouds await an ICE cache for the corresponding shot. We use Alembic to transfer animation from the rig to the render model and to the ICE emitters.
7 – Back in the hair model, we convert the guide strands to mesh geometry. We apply Syflex cloth simulation operators to these meshes to get ready for shot simulation, and we link the guide strands to the Syflex mesh so they inherit the simulation.
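The “inherit the simulation” part can be illustrated with a small binding sketch: in the rest pose, remember which proxy-mesh vertex each guide point sits closest to (plus an offset), then re-apply those offsets to the simulated mesh on every frame. The real setup goes through ICE; the closest-vertex binding and the toy data below are just an illustration of the idea.

```python
import numpy as np

def bind_guides_to_mesh(guides, rest_mesh_points):
    """For each guide point, store its closest rest-pose mesh vertex and local offset."""
    flat = guides.reshape(-1, 3)
    d = np.linalg.norm(flat[:, None, :] - rest_mesh_points[None, :, :], axis=2)
    nearest = d.argmin(axis=1)
    offsets = flat - rest_mesh_points[nearest]
    return nearest, offsets

def deform_guides(guides_shape, nearest, offsets, simulated_mesh_points):
    """Re-apply the stored offsets to the simulated mesh so guides follow the cloth sim."""
    flat = simulated_mesh_points[nearest] + offsets
    return flat.reshape(guides_shape)

# Toy data: 4 guides of 8 points, driven by a 50-vertex proxy mesh.
rng = np.random.default_rng(2)
guides = rng.uniform(-1, 1, (4, 8, 3))
rest_mesh = rng.uniform(-1, 1, (50, 3))
sim_mesh = rest_mesh + np.array([0.0, -0.2, 0.0])   # pretend the sim moved everything down

nearest, offsets = bind_guides_to_mesh(guides, rest_mesh)
deformed = deform_guides(guides.shape, nearest, offsets, sim_mesh)
print(np.allclose(deformed, guides + np.array([0.0, -0.2, 0.0])))  # True
```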
8 – Next comes shot-by-shot simulation and ICE caching of the guide strands (hair, lashes, eyebrows and beard if necessary).
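In spirit, that caching pass is a loop over shots and hair parts, writing one cache per point cloud. The sketch below uses np.save and an invented folder layout in place of the real .icecache files and the actual pipeline paths:

```python
import numpy as np
from pathlib import Path

# Stand-in for the shot-by-shot caching pass. np.save replaces the real
# .icecache format and the folder layout is invented, purely for illustration.
CACHE_ROOT = Path("cache/sim")

def simulate_part(n_frames=24, n_guides=300, n_points=8, seed=0):
    """Placeholder for the Syflex/ICE simulation of one set of guide strands."""
    rng = np.random.default_rng(seed)
    steps = rng.normal(scale=0.01, size=(n_frames, n_guides, n_points, 3))
    return steps.cumsum(axis=0)            # frames of guide point positions

def cache_shot(shot, character, parts):
    out_dir = CACHE_ROOT / shot / character
    out_dir.mkdir(parents=True, exist_ok=True)
    for i, part in enumerate(parts):
        np.save(out_dir / f"{part}.npy", simulate_part(seed=i))

# Beard only for the characters that actually have one.
cache_shot("sh010", "oceane", parts=("hair", "lashes", "brows"))
```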
9 – Before we pass the simulation caches down to the rendering department, we do a test render to make sure every frame works and there are no glitches or pops. With final beauty renders sometimes taking close to 2 hours per frame, it is not a good thing to have to re-render a shot because a hair strand is out of place! The test scene renders quickly, with no complex shaders and only direct lighting.
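A cheap automated companion to that eyeball check is a pop detector over the cached guide positions: flag any frame where a point jumps too far from where it was on the previous frame. A minimal sketch, assuming the cache has already been loaded as a frames-by-points array:

```python
import numpy as np

def find_pops(frames, threshold=0.5):
    """Flag frames where any strand point jumps farther than `threshold`
    units since the previous frame -- the kind of glitch we want to catch
    before the two-hour-per-frame beauty renders start."""
    pops = []
    for f in range(1, len(frames)):
        worst = np.linalg.norm(frames[f] - frames[f - 1], axis=-1).max()
        if worst > threshold:
            pops.append((f, float(worst)))
    return pops

# Toy cache: 24 frames of 300 guides x 8 points, with a deliberate pop at frame 13.
rng = np.random.default_rng(3)
frames = rng.normal(scale=0.01, size=(24, 300, 8, 3)).cumsum(axis=0)
frames[13] += 5.0
print(find_pops(frames))   # flags frames 13 and 14 (the jump into and out of the pop)
```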
10 – Once we are happy with the look of the hair, the movement of the simulation AND most of all once we’ve resolved all the problems, we give the signal to the rendering department. The hair PointClouds are always automatically linked to the appropriate simulation cache for the current shot, so all they have to do is “unhide” the corresponding object in their scene and voilà!
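That automatic linking is essentially a lookup from (shot, character, part) to a cache file. Continuing the invented folder layout from the caching sketch above (not the real pipeline), it might look like this:

```python
from pathlib import Path

CACHE_ROOT = Path("cache/sim")   # same invented layout as the caching sketch above

def cache_path(shot, character, part):
    return CACHE_ROOT / shot / character / f"{part}.npy"

def link_pointclouds(shot, character, parts=("hair", "lashes", "brows", "beard")):
    """Return the cache each hair point cloud should load for this shot,
    skipping parts that were never simulated (e.g. no beard)."""
    links = {}
    for part in parts:
        path = cache_path(shot, character, part)
        if path.exists():
            links[part] = path
    return links

# Prints the hair/lashes/brows caches if the previous sketch wrote them, {} otherwise.
print(link_pointclouds("sh010", "oceane"))
```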