Sunday 25 September 2011

Mouth controller on Source models - Half-Life 1 style

INTRODUCTION
In almost all Source engine games, a lip-sync (sound-to-motion) system in the style of Half-Life 1 exists. This primarily allows simple models to be created with the ability to lip-sync any given WAV file.
The interesting thing about this system is that its design lets more than just mouth movement fit under the controller. The mouth parameter in a model's QC takes a series of blend animations rather than an individual bone, which allows a wide variety of motion to be synced to sound volume. Also, when using this system, no facial animation or phoneme editing is required; any WAV file can be synced instantly.
METHOD
To use the Half-Life 1 mouth system on new models, you will obviously require tools to create the models (I think I have a few links to trial and free resources in my other posts).
A model using the Half-Life mouth system must have a mesh and the ability to animate. The final animation is usually done with an NPC entity, so rigging and a bone structure are beneficial.
First you require a reference .SMD for your model; this is obviously needed to create the model, and it will also be used as the comparison layer when animating the mouth parameter.
Then two or more sequence .SMDs are required as components of the blend sequence: one is the animation played when no sound is passing through or being spoken by the model, the other is played when sound is.
CLASSIC HL1 MOUTH PROCEDURE
While a mouth animation is not the full potential of this parameter, the requirements to create one for any custom model are:
A bone or bones skinned to the mouth components (or mesh that can be animated and compiled correctly)
A reference .SMD file for the model
A mouth-open sequence, usually two frames, with animation on the mouth components only; the rest of the body should remain in its reference position so the layer subtraction has something to compare against.
A mouth-close sequence, two frames, same as above.
These should all naturally be exported to one work folder, where the QC file to compile your model with the new mouth parameter should also sit (see below).
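As a sketch, the work folder might look like this (the file names are placeholders, matched to the QC layout below):

mouthmodel/
    Referencefile.smd     (reference pose)
    lowsoundsequence.smd  (mouth closed, two frames)
    highsoundsequence.smd (mouth open, two frames)
    mouthmodel.qc         (the compile script shown below)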
QC MOUTH PARAMETER LAYOUT
The mouth system is basically a blend sequence with a tag Source recognises as a set of animations to raise and lower depending on the volume of sound passing through the model in-game.
A basic layout for a mouth controller in a QC file is as shown:

$animation Referencefile.smd "Referencefile.smd" fps 30
$animation lowsoundsequence "lowsoundsequence" fps 30 subtract Referencefile.smd 0
$animation highsoundsequence "highsoundsequence" fps 30 subtract Referencefile.smd 0

$sequence model_mouth "lowsoundsequence" fps 30 {
  blendwidth 2
  blend mouth 0 90
  delta
  autoplay
  highsoundsequence
}

LINE 1
The first line creates an animation to compare the 'open' and 'close' sequences against, so the rest of the body remains a blank layer that other animations can fill, as seen in Half-Life: Source. Referencefile.smd will be the file name of your model's reference SMD. This is a simple way to get a motion comparison against a reference pose; if you want, you can create another reference-pose .SMD to use for this parameter.

LINE 2
The second line creates the animation that will be the 'mouth close' sequence; this will usually be looped as an animation layer on the model when no sound is playing through it.
Replace lowsoundsequence with the name of your 'mouth closed' sequence .SMD, alongside your choice of framerate and the subtraction of the reference file, as mentioned in the paragraph above. The subtract argument must point at the reference animation for it to subtract layers of animation correctly.

LINE 3
The third line is the same, creating a 'mouth open' animation for when there is sound passing through the model. These two animations will be blended together and move between each other gradually depending on the sound volume. Again, this will use your 'mouth open' animation in place of highsoundsequence, and you can choose the fps and the reference .SMD mentioned two paragraphs above this one.

LINE 4
The fourth component brings it all together in a standard blend sequence. The most important part is the blend mouth 0-90 line, which registers this as the sequence to play for mouth movement.
model_mouth is the name for this sequence; call it what you like.
The following "lowsoundsequence" is the low-sound end of the blend, using the 'mouth closed' sequence mentioned in the paragraphs above.
Then there is the play speed for the whole sequence in frames per second.
blendwidth 2 records the number of animations to blend between; this one uses two, an open and a close animation. You could experiment by adding more animations, keeping this number equal to the number of animations included in the blend.
The blend mouth parameter registers the sequence as the mouth sequence to vary with volume; 0 to 90 is the range the blend sweeps from completely closed to completely open, and reducing or increasing it shrinks or grows that sweep.
delta is the key parameter, taking no arguments, that initialises the animation layering, so the mouth sequence plays over the top of other animations rather than replacing them.
autoplay triggers this sequence automatically when the model is created, so the mouth sequence is always active, ready to animate when sound passes through the model.
Under autoplay is the list of remaining animations to blend between; in this case there is one, because the other is named at the top of this section, making two. As mentioned before, you can play with adding more animations while changing blendwidth to get different effects, as sketched below.
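As a hedged sketch of that experiment, a three-way blend might look like this; midsoundsequence is a hypothetical half-open sequence you would export yourself, and I have not tested beyond two animations:

$animation midsoundsequence "midsoundsequence" fps 30 subtract Referencefile.smd 0

$sequence model_mouth "lowsoundsequence" fps 30 {
  blendwidth 3
  blend mouth 0 90
  delta
  autoplay
  midsoundsequence
  highsoundsequence
}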

Hopefully this post helps; I'd be interested to see what you can manage with this.

Friday 19 August 2011

Garry's Mod Easy Instant Machinima Lip-Sync

INTRODUCTION:
In Garry's Mod and most other Source engine games, WAV files in the game directory can be loaded with lip-syncing face-flex information in Source SDK's Face Poser, then moved anywhere and still retain that lip-sync information, usable on any model with a Valve facial animation system. This includes Team Fortress 2, Garry's Mod, Half-Life 2: Deathmatch and the other Source variants.
Once a WAV file of the right sample rate has been loaded up in Face Poser, lip-synced with the Phoneme Editor tool and saved, it is ready to be implemented into choreography and the following special client-side operation native to Source:
TECHNIQUE:
With a WAV file that has been processed with the Face Poser Phoneme Editor anywhere in the sound folder of the Source game you wish to use, the play -sound directory here- command can be used to play the sound through the client. When you are wearing a player model with Valve facial animation, it will proceed to lip-sync the sound through that model. This is a client-side feature only, so it will not play to other members of a multiplayer server; it can, however, be used while in one. Some form of third person is required to view the effect on your own player model. As seen in Garry's Mod, though, the sound is projected through whichever entity bears the client's view (prediction), which leaves it open for manipulation, perhaps using the play command through other entities and having them do the lip sync.
EXAMPLE:
A WAV file is placed in sound/syncing and played with the play command, as it would be written in the console:
play syncing/cn_die1

The given sound will play through the entity bearing the player, lip-syncing the attached model, if there is one, with the lip-sync data in the WAV file.
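For repeated takes, the play command can be bound to a key like any other console command (the key choice here is arbitrary):

bind v "play syncing/cn_die1"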
Here is a short video of the procedure in action.

Thursday 18 August 2011

Editing Textures and Sounds In-Game in Source

METHOD
In-game in most Source engine games, features exist to refresh the game's sounds and textures without reloading a level. When replacing sounds and textures using a combination of software, including the most accessible GCFScape, VTFEdit, Audacity and Wavosaur, modifications can be viewed immediately using the console commands snd_restart and mat_reloadallmaterials. Refreshing other content such as models and programming is perhaps covered by the commands flush and flush_unlocked (correct me here if I'm wrong). Using these alongside the console commands soundlist, which lists the cached sounds, and the most useful surfaceprop, which gives the material and model files under the crosshair (the properties of the surface and entity you are looking at), worlds and sound schemes can be dynamically rebuilt and repainted.
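Before the worked material example below, here is a minimal sketch of the sound side (the file path is only an example; the replacement WAV should keep the original's sample rate and format):

soundlist
(locate the cached sound, e.g. sound/ambience/alien_cycletone.wav, and overwrite it with your edited WAV)
snd_restart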
EXAMPLE


Original Sign as seen in Half-Life: Source,
console command entered while looking at the sign:
surfaceprop

Console came back with:
Hit surface "metal" (entity worldspawn, model "maps/c1a1.bsp" Opaque. ), texture "HALFLIFE/SIGN9"

Proceeded to change the material in the directory Materials/HALFLIFE/SIGN9
and then used command:
mat_reloadallmaterials

Final product with a material made of the first paragraph of this post:
The same process is done with sounds, using the snd_restart and soundlist commands as sketched above, or Source SDK, or browsing GCFScape for sounds (and/or materials). Have fun with this; here's the product of some time I spent in Half-Life: Source with this feature:

Tuesday 16 August 2011

One simple modelling technique in 3ds max, Basic Post

More can be found elsewhere on details like rigging and exporting.

Special Source Engine Spray Paint Methods for Advanced People

INTRODUCTION
Using the Source SDK or GCFScape, a special trick is achievable to place the most fitting logo files in a multiplayer or singleplayer Source game.
The console command in Source to place a logo of your choice in a singleplayer or multiplayer session (usually multiplayer, but you'd be surprised to know some singleplayer games have it too) is impulse 201.
This is a totally legal command and requires no cheats or anything.
It basically produces the file referenced by the command cl_logofile "logo dir" as a decal on the environment, as long as the decal can be found and has the right properties. Spray logos deal with animated VTFs only; there's no VMT business so far as I know.
Valve was going to make a little jingle feature where you could play a server-side sound of your choice to everyone, but this disappeared. The commands are still there, but it's useless.
METHOD
Using the command cl_logofile with the directory of a VTF file of your choice, really high-quality textures from any location can be used. For instance, in Team Fortress 2, by checking the cache files or opening a texture browser in Hammer, the name of a texture can be retrieved and entered as: materials/examplefolderinmaterials/followingfolders/material.vtf. This technique reaches the directories inside GCF files. I have applied this in a few places: for one, I took the original Toolgun icon from the HUD menu and used it as a spray, and I have also tested a variety of Team Fortress 2 in-game textures. Using TF2's own textures should in theory also improve spray visibility and download/receive time for other clients, because they already have the game content (prove me wrong).
EXAMPLES
cl_logofile materials\Effects\duel_blue.vtf
The command cl_logofile takes a path starting at the materials folder (where it begins looking inside the GCF, the game cache); inside the Effects folder, duel_blue.vtf is the file we use. This gives us a nice blue crossed-guns logo when we input impulse 201, or press T, which it is naturally bound to in games like Team Fortress 2 and HL2: Deathmatch.
CACHING
This is the fun bit: as of 2011, every spray anyone pastes into an environment in Source is saved in a temp folder inside the materials folder of every standard client on the server. I'm sure that with the VTFEdit tool and cl_logofile, you will find some inventive things to do with these awesome features.
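As a closing, hedged example: if a cached spray turns up in that temp folder, it should be reusable as your own by pointing cl_logofile at it (the file name here is invented; actual names vary per download):

cl_logofile materials/temp/somedownloadedspray.vtf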