Fixing and a 99% improvement in voxel replay

Catch-up: For campaign mode, SnwScf needs bigger arenas/levels.  This means (1) much wider voxel snow coverage (e.g. across mountains), which mandates LOD (level of detail) levels, and (2) removing/replacing snow as the player(s) move.  All in all, lots of optimizations, etc.  Some of the funnest work 😀

This one’s all about fixing, then optimizing, the replay of actions (dig, build, etc.).  Replay is used when voxels are (a) tidied away because they’re further out than we wish to keep, or (b) when we switch their LOD levels, which is a lot more common!

At the start, it didn’t work.  This was mostly due to other work I’d done on the codebase while not worrying about this functionality, since I hadn’t been using it.  With small arenas I never let the snow be tidied — I just kept it all!  Obviously this meant a larger load on CPU, memory and GPU!  In past posts (1, 2 & 3) I talked about improving things with LOD, etc.  Now that the voxels can be swapped out to lower LOD levels, we definitely need replay!

Here’s our test patch of snow.  To keep things consistent, I saved the snow so I can reload it every time.  This is a slight over-complication — it actually stores all of the generation settings, loads them, re-runs generation and gets the same result every time!

[Screenshot: the test patch of snow]

Here are the numbers:

Total time (ms) | Percentage improvement | Num replay calls | Num DoOperation() calls | Comments
(didn’t work)   |      |     |       | Original (didn’t work)
4552            | 0%   | 486 | 31104 | Original (fixed)
752             | -83% | 78  | 4992  | (1) Switched to Chunk-per-call (rather than whole height each time)
(not timed)     |      | 6   | 4144  | (2) Optimized what was considered actually modifying a ChunkData
53              | -99% | 6   | 1120  | (3) Bounded start & end bounds by the original sizes

Test parameters:

  • Blocks to a side in a Chunk: 8
  • Height of this operation (Chunks): 8
  • MinY: -3
  • MaxY: 1
  • World height (Chunks): 4

And some comments on what I did:

  1. The original code found all ActionData instances that had affected a Chunk, then re-ran them for every column… but it didn’t bound the height of the column, so each Chunk was processed as many times as the number of vertical Chunks the action had affected!! (e.g. if an ActionData touched 3 Chunks high, all 3 Chunks would each be done 3 times!?)  This seems to be a bug in the original code.
  2. The original ActionData code considered a Chunk touched if it was within certain bounds even if it didn’t change the isovalue.  This might have been for painting?  Even that guess is a bit of a stretch — I can’t really see any reason for it so I ‘fixed’ it.  I don’t use it anyway so meh.  It’s easy enough to revert if I offer these changes back to the die-hard users of TerraVol (if there are any).
  3. Previously we’d operated on *every* Block within the Chunk.  For the edge Chunks, that was superfluous.  Instead, I bounded the Blocks affected by the operation’s original size (see the sketch after this list).  Great improvement — albeit most useful on edge Chunks, which this test case has lots of.  A larger operation will have more inner Chunks, but I have ideas for that!
    (care to guess along at home … or write in on a stamped self-addressed envelope 😉 )
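In case it helps picture idea (3), here’s a minimal sketch of the kind of per-axis clamping involved.  The names are illustrative (this isn’t the real TerraVol/SnwScf API), assuming world-space block coordinates for both the ActionData’s original bounds and the Chunk:

using UnityEngine;

public static class ReplayBounds
{
    // Clamp one axis of the ActionData's original block bounds to this Chunk's range.
    // Returns false if the ranges don't overlap, i.e. nothing to replay in this Chunk.
    public static bool TryGetLocalRange(
        int actionMin, int actionMax,        // action's original bounds on this axis (world block coords)
        int chunkOrigin, int blocksPerSide,  // chunk's first block and size on this axis (e.g. 8)
        out int localMin, out int localMax)
    {
        localMin = Mathf.Max(actionMin, chunkOrigin) - chunkOrigin;
        localMax = Mathf.Min(actionMax, chunkOrigin + blocksPerSide - 1) - chunkOrigin;
        return localMin <= localMax;
    }
}

Do that for x, y and z and only loop over the surviving range instead of the full 8×8×8; the edge Chunks then need a fraction of the DoOperation() calls.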

Here are a couple of pictures of the operation.  The white boxes are Chunk boundaries and the purple boxes are the reduced bounds that are now operated upon 🙂  If you’re wondering why the purple boxes extend outside the green capsule, it’s because the isovalues are smoothed over that range from no-effect to full-effect to produce a smooth result at about the threshold where the green capsule is drawn.
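If you want the gist of that smoothing in code, it’s roughly this shape (illustrative only, not the actual TerraVol falloff):

using UnityEngine;

public static class IsoFalloff
{
    // Influence of the operation on a Block: 1 inside the capsule, 0 beyond the
    // falloff range, smoothly blended in between (which is why the purple boxes
    // extend past the green capsule).
    public static float InfluenceAt(float distanceOutsideCapsule, float falloffRange)
    {
        float t = Mathf.Clamp01(distanceOutsideCapsule / falloffRange);
        return 1f - (t * t * (3f - 2f * t));   // inverted smoothstep
    }
}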

[Screenshots: white boxes are Chunk boundaries; purple boxes are the reduced bounds now operated upon]

53ms is actually still a long time and needs reducing / parallelizing but it’s a lot better.  To give you an idea — this is a tiny operation.  Most are much larger.  This one’s equivalent to rolling a small snowman 1 meter.  A carrocket hit at smallest level would be about 100 times larger.

Now, this is all done off the main thread, so it’s not so bad.  However, it’s all done on a Generation thread (which is responsible for getting the data ready before the Builder threads turn it into a mesh).  Sadly, the way things are structured at the moment, the Builder can time out if Generation takes too long and won’t notice a change until the camera moves sufficiently far.  Also, I’ve generally only needed 1 or 2 Generation threads, whereas what we’re really doing here is ActionData application — which I’ve tended to give 4 threads since it’s done a lot!  It would feel a lot cleaner to move this re-application of ActionData to the ActionData threads, then have the Builder thread be informed when the Chunks are ready for meshing (rough sketch below).  So that’s next!
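The handoff I have in mind is roughly this (all names hypothetical, and deliberately simple): ActionData threads push re-applied chunks onto a queue and a Builder thread blocks until one is ready, instead of timing out and waiting for the camera to move.

using System.Collections.Generic;
using System.Threading;

public class ChunkReadyQueue<TChunk>
{
    private readonly Queue<TChunk> ready = new Queue<TChunk>();

    // Called from an ActionData thread once all actions are re-applied to a chunk.
    public void MarkReady(TChunk chunk)
    {
        lock (ready)
        {
            ready.Enqueue(chunk);
            Monitor.Pulse(ready);   // wake a waiting Builder thread
        }
    }

    // Called from a Builder thread; blocks until a chunk is ready for meshing.
    public TChunk TakeNextForMeshing()
    {
        lock (ready)
        {
            while (ready.Count == 0)
                Monitor.Wait(ready);
            return ready.Dequeue();
        }
    }
}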

Onwards and faster-wards 😉

Floating polar ice, part 2

Since last time on the floating polar ice environment, I’ve fixed all those “obvious things” and improved the ice blocks!

Details below on Blender work, UBER, pushing and long-jumping.  First, here’s the video!

[Video]

Spent much of Saturday learning 2 things for my ice blocks (which will hopefully be useful in future modeling jobs, eh-hum!)

Blender

Firstly, modeling the ice block with Blender (with links to tutorials, etc used — some of which I’d done before but it’s nice to have a simple list of references, eh!):

  1. Clone original object (in case wish to start again)
  2. Fracture to create detailed object
  3. Duplicate and ensure it has unique data blocks,
  4. Bake AO map, normal map, etc.
  5. (After playing with UBER below, I realized I’d prefer to hand-paint the translucency map, so…)
  6. Texture-paint the translucency map (I painted it in black and white, though for UBER this needs moving into either the Albedo’s alpha channel or the AO map’s green channel; see the sketch after this list).
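For that channel-packing step, here’s a minimal editor-side sketch of what I mean by moving the greyscale map into the albedo’s alpha channel.  It assumes both textures are the same size and marked readable in their import settings; the class and method names are mine, not UBER’s:

using UnityEngine;

public static class ChannelPacker
{
    // Copy a greyscale translucency map into the albedo texture's alpha channel.
    public static Texture2D PackTranslucencyIntoAlpha(Texture2D albedo, Texture2D translucency)
    {
        Texture2D packed = new Texture2D(albedo.width, albedo.height, TextureFormat.RGBA32, true);
        Color[] baseCols = albedo.GetPixels();
        Color[] transCols = translucency.GetPixels();
        for (int i = 0; i < baseCols.Length; i++)
            baseCols[i].a = transCols[i].r;   // greyscale map, so the red channel is as good as any
        packed.SetPixels(baseCols);
        packed.Apply();
        return packed;
    }
}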

UBER

The second thing that took most of Saturday was fiddling with the UBER shader’s (yes, its name is all capitals) refractive and translucency capabilities.  I’d previously gotten reasonable results for the PlayerSetupArena’s ice walls but, on reflection (see what I did there … er, ‘cuz I spent ages fiddling with all those optics parameters, oh never mind 😉 )… yes, on reflection, I realized I’d used an amount of emission to get the blue-ish hue from the ice.  While that worked OK in a static environment with no time-of-day changes, it ruins the PBS capability for nighttime.  As such, I think I need a different approach.  For the PlayerSetupArena I’m toying with adding some point lights behind the ice walls that slowly rotate around the arena.  I’ve not done it yet but will post a (hopefully very pretty) pic once done.

However, for the ice blocks floating on the ocean, I eventually concluded that UBER has difficulty with Realistic Water’s shader.  I suspect (after some investigating) that both UBER and RW do a grab pass to get information about what is behind them for their refractive effects.  This likely needs unifying to get them to work together.  For now I’m resetting the floating ice blocks back to the Unity Standard shader since the effect is still very pleasant.

Madness, Ceto and Beautify

Lastly (on the aesthetics front), to tie off my comments last time about the recent Unity Asset Store Madness sale, I also tweaked the RW ocean shader to give some ripple highlights using its Fresnel capability.  They aren’t everything I’d envisaged but might actually be better.  Let me know your thoughts!  RW doesn’t do volumetric subsurface scattering, which I’d really like, but neither does Ceto — the water shader in the sale.  As such, I decided not to buy that and to save up for PlayWay (which I think is the most impressive one) if I ever need more.  Instead, at the recommendation of a few dev friends, I picked up Beautify.  It’s another post-effect and sharpens up the image.  I suspect I’ll need to combine it with Scion to save passes later but, for now, it’s added a lovely ‘pop’ to my aesthetic so good suggestion, guys!

Gameplay

Lastly (really this time), a couple of gameplay bits!

Several people made the same comment on the last picture of the floating ice — “Jump! Jump!” and “Push him!“.  Yep, that’s all in there!  I’ve included it in the video to satisfy the schadenfreude of my audience 😉

However I also discovered a ‘feature’ of my movement controller that allows jumping much larger gaps than I’d previously believed — by rolling, releasing and double-jumping, you can long-jump!  This wasn’t something I built in intentionally but it’s actually quite a lot of fun, so I’ll be labelling that a feature.  The big question is whether it’s accessible enough for regular players to use or whether it’s primarily for expert players to access secrets!  (your thoughts as always are most welcome 🙂 )

p.s. Are blog posts like this better?  I’m not writing them as I go, so they’re half as useful to me (I can’t use you for Rubber Duck Debugging and it takes time to write them instead of more GameDev!) but I think the overall feeling is nicer.  What do you think?  Would you prefer more dev ponderings? more conclusions? half-way? other?  TTYS!

New environment WIP: Floating polar ice

I’ve always planned that some levels would take place on polar ice, floating at sea.  The latest Unity Asset Store Madness Sale includes the Ceto Water system that I’d long thought I might use for the sea.  However, I already have another asset called “Realistic Water”, so I thought it was time to evaluate that to decide whether to buy Ceto (or another, possibly PlayWay).

Here’s the very first integration of water into Snowman Scuffle with floating polar ice.

Obviously there are plenty of things that need fixing, e.g.:

  • Functional:
    • The “replace-on-floor-if-you-slip-beneath-it” function needs disabling.
    • The character controller doesn’t handle moving platforms yet!
  • Aesthetic:
    • The bloom’s blown out.
    • The sky-box needs rotating.
    • The water surface could do with more details (e.g. specular highlights).

But ignoring these, it’s actually not a bad first step. That moving-platform one is probably the trickiest but, compared to how bad I feared this would be, it’s barely noticeable! Once fixed, it should be good!

How good will fighting, CTF’ing, racing, etc on blocks of floating ice feel!  (I’m kind’a hoping the answer will be “great!” 😉 ) Let me know your thoughts!

Voxel LOD generation

LOD (Level of detail) now working for voxel *generation* as well as building.

Here’s a YouTube at full res for the LOD-lovers like me out there 😉

As discussed in the last post, the generation phase is deciding on the matrix (8x8x8 for LOD0) of numbers that comprises a voxel ‘ChunkData’ whereas the building phase is turning those numbers into a ‘Chunk’ with a ‘Mesh’ — a thing you can actually see and interact with.

This new work means:

  • LOD0 (white boxes) contain 512 floats and 128 triangles = unchanged.
  • LOD0i (yellow boxes) contain 512 floats but only 2 triangles = unchanged.
  • LOD1 (cyan boxes) contain 1 float and 2 triangles = the NEW bit!

You can see how that’s much less memory and much better performance — especially for covering *much* larger arenas!
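To make the numbers concrete, the data shapes look roughly like this (illustrative classes only, not the real ChunkData):

public class ChunkDataLod0
{
    public const int Side = 8;                                // 8x8x8 Blocks
    public float[] Isovalues = new float[Side * Side * Side]; // 512 floats per chunk
}

public class ChunkDataLod1
{
    public float Isovalue;   // one value for the whole chunk, meshed as 2 triangles
}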

Yep, open(er) world campaign mode, here we come!

p.s. As with most gamedev or even programming, this took a great deal of frustrating tweaking including upgrading Unity and several assets then creating a whole new way to investigate details about voxel spaces that are generated on threads.  I know, implementation details.  I’m happy to discuss and the answer will involve phrases like “Marching Cubes”, “Isovalues” and lots about “neighbours”.  Ask if you’re curious / suffering from insomnia 😛

Oh and there’s still plenty of work to do before someone asks “is it done yet”.  There’s a reason I don’t show the Snowmen moving around in it yet 😉

Terrain/Gaia learning 2016/07/24

This weekend, I experimented with Gaia, RTP, DirectX11 Grass, SpeedTrees, Scion Post FX, Amplify Motion, Sonic Ether’s AO, Realistic Water, etc., mostly to learn about making good-looking Terrain for (primarily the single-player campaign in) Snowman Scuffle.

For my levels, I’m using Unity Terrain as the ground.  It’s overkill for the tiny arenas but, for the potential campaign levels, I feel it ought to make the most sense.  I say ought because that’s pending acceptable performance.

Gaia is one of several terrain-building assets I’ve bought (while on sale 😉 ) in anticipation of this.  It’s got great reviews and seemed to have a quick learning curve *and* be able to tailor its results to the shapes I’m picturing so I thought I’d start with that.

I’ve previously used the UBER shader with its DX11 tessellation options for some simple-mesh floors and walls and know it produces amazing results, so I hoped its ancestor-sibling — RTP v3 — would be as good for Terrain.

I’ve long wanted to play with DX11 vertex generation for things like grass but knew it would take me down a scarily long rabbit hole so decided to jump-start with the excellent DirectX 11 Grass Shader.

Generating some initial terrain with Gaia was great.  Simple and easy.  As I write this, I’m just about to try my hand at manual stamp use to make something more specific to my needs but I suspect that’ll be smooth and easy too.  Yep — was easy!

Next, configuring RTP for Terrain.  It’s super-powerful but its documentation is tricky (big and assumes a lot) and the workflow is also tricky.  Don’t let that put you off — it’s great, just have a good supply of coffee beforehand 😉  I’ll write about some of the options I fiddled with later (TODO).

Next I added the DirectX 11 Grass Shader.  I’ve not used it before.  Will I use it in SnwScf?  Not sure — it’d be super-cool to place it on the terrain ‘under’ my voxel snow and have it pop up when all the snow is removed.  That’d require writing from snow to a texture map.  Viable, but yet another thing, y’know!  Anyway, this is a learning experiment so, for now, I decided to throw it in there!  Wow.  Initially it appeared everywhere out to some arbitrary draw distance.  (Detailed instructions on fixing this below, since people on the forum thread seem to have difficulty here.)  A bit of investigating revealed it needed a map telling it where to draw.  Getting said map also took some investigating, so I’ve noted that at the bottom as well.

Results

Here’s a video of the results

DirectX11 Grass shader feedback

A couple of ideas for the grass shader’s fidelity & performance improvements (that I fed back on the forum)…

A little background for those not familiar:

Since the grass shader is a vertex shader, it requires all the geometry for whatever surface you wish to show the grass upon.  For terrain, that means duplicating your terrain and swapping the grass shader in for the Custom Material.  Right, on to my feedback…

1: Reducing load and mucky shadows

I notice that the duplicated terrain renders beyond the furthest fade distance(s).  A nice option would be whether to emit vertices *at all* beyond the max fade distance(s).  When one does the “duplicate terrain & swap Material” trick, if you enable shadow-casting on the grass terrain, it produces yucky patches in the distance where ‘Y-fighting’ is occurring.  With this new option, it wouldn’t.  Until then, one can ameliorate it by moving the grass terrain down (y: -0.1) but (a) you’re wasting [CG]PU and (b) it still causes some of the effect in the far, far distance.

2: Reducing popping

Regarding ‘popping’, in addition to the pixel accuracy change mentioned in the docs, also increase the “Base Map Dist.” value.  I set it to the “Grass Fade End” value.  In fact, here are the values I used that seemed to work well (pending realistic performance analysis) …

Good DirectX11 Grass settings for use with Terrain

For other users, here’s what I found to work well (so far):

  • Terrain:
    • BaseMapDist: 120
    • Tree & Detail Objects: False (not needed since you already have the original terrain)
    • Remove the collider (you’ve got one on the original terrain)
  • DX11 Grass:
    • LOD Start: 10
    • LOD End: 50
    • Max LOD: 5
    • Grass Fade Start: 50
    • Grass Fade End: 120
    • Min Height: 0

How to get the Colour map from the Terrain

Here’s my flow:

  1. use this script to get the splat map.
    (Note: Gaia has this facility and more in its Utilities too!  Either export Texture Splatmap as before *or* Grass Splatmap — this latter would require a slightly different workflow but allows varying foliage types!)
  2. pull it into Gimp (or whatever layer-capable drawing program)
  3. extract the green channel (or whichever channel corresponds to your grass terrain layer)
  4. Use channel selector to turn channel to ‘selection’
  5. Create a new layer
  6. Colour white over the whole image (with that selection set).
    Since the selection keeps an ‘alpha’ like value, your ‘full white’ colouring comes out as levels of white.
  7. Export as PNG and bring into Unity.
  8. Set as DX11 Grass Color Map.

Obviously if you’re using multiple grass types you’ll need to tweak this, but I’m sure the same process will work.  (And if you’d rather skip the image editor entirely, there’s a small script sketch below that does roughly the same thing.)
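Here’s that sketch: it collapses steps 2–7 into one editor helper.  Assumptions: the splat map texture is readable, your grass layer is the green channel (change the .g if yours differs), and the class and method names are just mine.

using System.IO;
using UnityEngine;

public static class GrassMapExporter
{
    // Turn one channel of the splat map into a greyscale 'levels of white' mask,
    // ready to import and set as the DX11 Grass Color Map.
    public static void ExportGrassMask(Texture2D splatMap, string outputPath)
    {
        Texture2D mask = new Texture2D(splatMap.width, splatMap.height, TextureFormat.RGBA32, false);
        Color[] pixels = splatMap.GetPixels();
        for (int i = 0; i < pixels.Length; i++)
        {
            float g = pixels[i].g;             // grass layer weight
            pixels[i] = new Color(g, g, g, g);
        }
        mask.SetPixels(pixels);
        mask.Apply();
        File.WriteAllBytes(outputPath, mask.EncodeToPNG());
    }
}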

HTH, Rupert.

Final Gaia notes for Snowman Scuffle

Here are some notes for future me on what I’ll likely need to do when using Gaia with my game!

Configure different terrain textures.  I’ll likely need to do this in both (a) Gaia and (b) its spawner rules (specifically the Coverage Texture Spawner) so that it (a) configures the Terrain correctly and (b) places things correctly.  Started work on this — it’s kind’a hard to work with.  This seems to be a weakness in Gaia.  Might try TerrainComposer v2, which makes *this* part easier but probably makes other things harder(?)

Similarly, I’ll probably need to reconfigure many of the spawners to give distributions I desire for my levels.  That’s ‘designing’ when using Gaia!

If using the ‘circle of stones/whatever’ idea, could probably build a Gaia spawner for just that!

Right, time to try out TerrainComposer v2 to see how that compares!

Anthropomorphic environment?

I’ve been wondering whether I should make the game’s environment more anthropomorphic, i.e. add faces (and possibly some capability for action) to ‘alive’ parts of the environment.  This might act as an extra nudge towards the game’s conservationist message and provide a more immediate feedback loop for it.

As an example, all trees would have a face.  Maybe they’re always asleep to start with?  Nearby noises wake them and they then look at the activity.  Innocuous things mean they eventually go back to sleep.  Worrying things, like fires, make them anxious.  Catching on fire causes them to make terrified faces (and maybe little blowing motions)?  Having the fire put out causes them to make a relieved (re-leaved? 😛 ) face.

A non-obvious ‘live’ thing could also be fire!  An evil, cheeky face on each one — with the fire graphic switched from the semi-realistic style it is now to a cartoon style?  I’m even less sure about this part.  Guess I’ll have to see (a) whether I can come up with a suitable look and (b) how it feels?  I’ve mostly finished the fire’s code now, so I don’t really want to change it from spreading in the natural sort of way it does.

Anyway, I haven’t fully decided yet.  As always, all thoughts welcome!

p.s. The attached picture was done while looking after the kids.  Whaddya think?  Trees might be a bit ‘red neck’ with their big foliage-like moustaches? :-{D

Arc trajectory bug fixing, unit tests and MapMagic tangents

Wednesday morning and Saturday work.

Investigating MapMagic — Wednesday’s Unity Asset Store Daily Deal.  Looks like a super procedural generator — both Editor and runtime.  It outputs to a Unity Terrain, but the forum suggests it could output to other things too.  I’ve always wondered about a single-player mode that places interesting things at points in an otherwise procedurally generated world.

Decided to buy it.  Lost a few hours to fiddling with it.  Man!  Messing with procedural generation and terrains is a surprising amount of fun!  Kind’a tempted to drop the asset straight into SnwScf project and hack-up a single-player mode.  Not now!  😉

Noticed an oddity: a rolling snowball hitting a snowman hits an assertion failure:

SnowballShot !spawned but being destroyed. Not merely done twice in same frame, now:8504, spawnedChangedFrameNumber:8502

Not a problem but indicative of some mistake / mistaken understanding.  Have to come back to this since…

Darn, while capturing screen-shots for this post, I’ve just noticed the aim-assist going completely wrong!?  What looks like a straight shot is actually doing a lob, arcing waaaay over the top of the target.  Great 😐

[Screenshot: a ‘straight’ shot arcing way over the target snowman]

“Excuse me old bean, I don’t mean to criticize but I’m down here.”

Oddly it really is fine most of the time — I can only seem to prompt this when I’m trying for a screenshot!?  Riiiiiight.  Hopefully it’s something related to pausing the Unity Editor but I can’t really trust that without investigating.

Sigh, I thought that functionality was all sorted.  Guess I need more integration tests … ok, admission time: by that I mean /some/ integration tests on this area 😦  They broke when I upgraded Unity most recently and I haven’t invested the time to resolve what happened.  Shame on me.  That is one of the downsides of my integration tests — they feel rather too brittle and aren’t runnable without jumping to the integration test scene.  OK, probably the better answer is that this area should be unit tested as well as integration tested.

So, I did warn this was going to basically be my dev notes.  Often they’re high-level discussions but, “when the going gets maths, the notes get mathsy”.  Erm?  Fair warning: Here be details!

Let’s get some values in here:

Shooter transform:
  pos: 11.05, 0.75, -0.23
  ori: 4.22, 90, 0

Target transform:
  pos: 16.11, 0.64, 0.53
  ori: 355.36, 155.00, -0.00

Targeting logging:
  (targetPos:(16.1, 1.6, 0.5), rbl.velocity:(0.0, 0.0, 0.0), transformToAffect.pos:(11.1, 1.5, -0.7), bulletSpeed:9.857765, shooterVelocity:(0.0, 0.0, 0.0), shotObj.aimingShouldConsiderGravity:True, maxTTL:2)
  => timeToImpact:0.5230598, launchVector:(4.8, 8.5, 1.2)

Obviously the launchVector’s y component of 8.5 is causing the lob over the target’s head.  Either enable more detailed logging or use the debugger.  Going with the latter.

targetPos = "(16.1, 1.6, 0.5)"
bulletPos = "(11.1, 1.5, -0.7)"
Follows the "stationary target" code path.
Follows the "gravity-compensating projectile lob" code path and calls my ArcTrajectoryUtils.getLaunchVector() code.
planeVectorToTarget = "(5.0, 1.3)", range = 5.15528f, height = 0.0974731445f
then does getLaunchAngle(range, height, projectileInitialSpeed, gravity), i.e.
getLaunchAngle(x = 5.15528f, y = 0.0974731445f, u = 9.8219f, g = -9.81f) -- this might be the part that needs work, perhaps use it as a unit test?
for +/-, minus = -58.2228546f so used plus = 77.86665f.
Result from getLaunchAngle() angle = 1.04258835f.  Hm, that's radians = ~60 degrees elevation!
Just to carry this through, it then calls convert3SpaceScalarsToVector(vectorToTarget = "(5.0, 0.1, 1.3)", projectileInitialSpeed = 9.8219f, angle.Value = 1.04258835f)
Eventual launchVector = "(4.8, 8.5, 1.2)"

So, it looks like there’s an error in getLaunchAngle(), perhaps its use of +/-.  Unit test time.  Also it’d be nice to get a code path for a working situation — is it that the ‘minus’ case always works when used and the ‘plus’ case always fails when used?

The formula it’s using is from this gamedev.net post.  It’s working well most of the time.  Hmm.
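For reference, the standard closed-form ‘angle of reach’ solution looks like this.  It’s a sketch rather than my exact ArcTrajectoryUtils code (and it takes g as a positive magnitude, unlike the -9.81 in the trace above), but it’s the same maths, and the bug turned out to be a misplaced bracket in exactly this kind of expression.

using UnityEngine;

public static class LaunchAngleSketch
{
    // x = horizontal range to target, y = height difference, u = launch speed,
    // g = gravity magnitude (positive). Returns the angle in radians, or null
    // if the target is out of range at this speed.
    public static float? GetLaunchAngle(float x, float y, float u, float g, bool preferLowArc)
    {
        float discriminant = u * u * u * u - g * (g * x * x + 2f * y * u * u);
        if (discriminant < 0f)
            return null;
        float root = Mathf.Sqrt(discriminant);
        float lowArc = Mathf.Atan2(u * u - root, g * x);    // the 'minus' solution: flat shot
        float highArc = Mathf.Atan2(u * u + root, g * x);   // the 'plus' solution: lob
        return preferLowArc ? lowArc : highArc;
    }
}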

Added unit test for all this code (ArcTrajectoryUtils).  Also added unit tests for general ProjectileAimingUtils.  For the latter, I also wrote an IEqualityComparer<Vector3> so I could use NUnit’s constraint-based “EqualTo().Using()” testing approach with an epsilon value (i.e. the allowed proximity).  (I find it odd that C# doesn’t allow inline implementation of interfaces — had to add an actual concrete implementation to my VectorUtils!?)  Anyway, it allows this:

Assert.That(v1, Is.EqualTo(v2).Using(VectorUtils.buildVector3Within(epsilon)));
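For anyone curious, a minimal sketch of what such a comparer can look like (my actual VectorUtils version may differ in detail):

using System.Collections.Generic;
using UnityEngine;

public class Vector3WithinComparer : IEqualityComparer<Vector3>
{
    private readonly float epsilon;

    public Vector3WithinComparer(float epsilon) { this.epsilon = epsilon; }

    public bool Equals(Vector3 a, Vector3 b)
    {
        return Mathf.Abs(a.x - b.x) <= epsilon
            && Mathf.Abs(a.y - b.y) <= epsilon
            && Mathf.Abs(a.z - b.z) <= epsilon;
    }

    // NUnit's Using() only calls Equals, so a constant hash is fine here.
    public int GetHashCode(Vector3 v) { return 0; }
}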

Covered the area in unit tests.

Ahha!  Found the problem — I’d slightly mistranslated the equation!  A bracket in the wrong place was giving the wrong result in some cases!  Fixed and committed!

Since it’s a bank holiday weekend here in the UK, I’m calling Saturday an early night to pick up for something more fun than bug fixing tomorrow — maybe more fire?

p.s. Since I was lost in the depths of bug fixing (and parenting) all #ScreenShotSaturday, I tweeted this and was kind’a pleased it got as many Likes as most of my SSS pics!  I get the strong impression all gamedevs wish they could do proper software engineering with things like testing, etc 😉

[Embedded tweet]

Fire, part 6 and GameCamp 8

Continuing from yesterday’s post.

Decided to circumvent the problem of explosions not hitting scenery, for now, by making the rocket itself start fires.  Worked well — fire spread up the tree and onto the floor (as predicted, due to the layer situation).  I’ll address that with the Component idea I had (rather than splitting things onto another layer).  That’ll also allow things to be partially inflammable without arbitrarily splitting models, if needed later.  Done — made a Flammable component.  Yep, that works well.  Here’s a video of the current state:

[Video]

Performance is pretty heavy so far — it’ll need tuning to get a sensible number of fires — probably tweak the range values.
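For reference, the Flammable idea is roughly this shape (a skeleton with illustrative names and numbers, not the actual component):

using UnityEngine;

public class Flammable : MonoBehaviour
{
    public float fuelSeconds = 20f;          // how long this part can burn
    public bool IsBurning { get; private set; }

    public void Ignite()
    {
        if (!IsBurning && fuelSeconds > 0f)
            IsBurning = true;
    }

    private void Update()
    {
        if (!IsBurning)
            return;
        fuelSeconds -= Time.deltaTime;
        if (fuelSeconds <= 0f)
            IsBurning = false;               // burnt out
    }
}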

One oddity is the smoke looking orange.  Obviously it’s caused by my sunlight being that colour since it’s low.  However, the smoke is in shadow, so it looks wrong to be orange all the way down.  Will it accept shadows?  Hmm… apparently not (yet) — the dev says it’s too heavy on performance.  I’ll have to investigate that another time.  Perhaps an interim kludge will be to tweak the smoke colour so it looks more right?  I think that’s also for another time since I need to get going to GameCamp!  (Already late due to no. 1 son waking in the night and my consequent oversleeping!)

A new build sees the players dying when they join in the PlayerSetupArena!?  Great.  I’ll debug that on the train on the way in.

(from train)

Yep, the WorldBounds changes I’d made a few weeks back weren’t tuned for other arenas and were killing players at their spawn points!  Fixed in all arenas in build.

Aaaand, in case I forget later (due to being in a pub), I’m going to set this to auto-publish later today!  (what could possibly go wrong?! :-O )

p.s. I enabled Twitter sharing for this blog yesterday so people will actually know what rubbish I write.  OK, they’ll have more evidence 😛

Edit: (on train back from GameCamp 8) Had a lovely time despite only arriving at 3pm! Had a good play testing session with lots of good feedback. In fact, given how I’m trying to be more open and forthcoming, perhaps I ought to copy it all to this blog later! My favourite was one games professor (@drdavient) suggesting an actual viable solution to a common complaint regarding the control system! We’ll have to see how it feels (perhaps as a selectable option per player?)

Fire, part 4

Continuing from last time.

Did a little more investigating on the ANMS fire/cloud not working and realized it wasn’t displaying when the Particle Playground System had a non-1.0 Time effect specified!  (lots of trial and error)

I’m still not seeing normal mapping but I’m now wondering if, given it works when not Emit()ing from script, it’s related to something about Emit() and how it generates the particles — something that interacts badly with the ANMS.  I guess this could be a question to ask the PP author.  Done (and linked on ANMS thread).

Next day, I was investigating the problem and realized the API call I was using mandated supplying a colour:

Emit(int quantity, Vector3 randomPositionMin, Vector3 randomPositionMax, Vector3 randomVelocityMin, Vector3 randomVelocityMax, Color32 giveColor)

Switched it from the default “white” to black and got shadowing but still lacked yellow, red and orange.

However I *am using* the PP “Rendering (Source) | Color” section to specify a “Lifetime Color” (which ranges from yellow, through orange, red and black to transparent).  I’d hazard that it’s not applying when I’m using this Emit() variant.

Checking the source, it looks like I should have “COLORSOURCEC colorSource” set to “COLORSOURCEC.LifetimeColor”.  Ahha!  Changed the “Color Source” field 2 above and bingo!  It’s perfect.
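In code terms, the fix was roughly this.  Hedged: the field and enum names are as I read them in the PP source, but I’m assuming the particle system reference is Particle Playground’s PlaygroundParticlesC class and its namespace, so check against your own PP version.

using UnityEngine;
using ParticlePlayground;   // assumed Particle Playground namespace

public class FlameEmitter : MonoBehaviour
{
    public PlaygroundParticlesC particles;   // assumed PP particle system class

    public void EmitFlames(int quantity, Vector3 posMin, Vector3 posMax, Vector3 velMin, Vector3 velMax)
    {
        // Make Emit()-spawned particles use the "Lifetime Color" gradient instead
        // of the flat Color32 that this Emit() overload requires.
        particles.colorSource = COLORSOURCEC.LifetimeColor;
        particles.Emit(quantity, posMin, posMax, velMin, velMax, Color.black);
    }
}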
Fed back on threads.

Here’s how it looks now.
[Screenshot]
Sadly it looks better static than moving.  When moving, the smoke looks great but the flames flicker too fast.  The original flames looked much better.  Partly it’s the shader (Additive vs. normal), partly it’s the change in spritesheet size.  I guess I could double-up on flames (and/or cut-down on smoke) but this feels like a kludge.  Really, I need that timeScale value to work.  Fed back on PP forum thread to ask.

Out of time tonight.

Just accidentally discovered Alt-Escape minimizes a window in Windows.

Also just accidentally discovered that in the Unity AnimationCurve window, pressing Return allows you to enter an absolute value for a key!

Spent another night fiddling with texture sheets and animation curves in the pursuit of great fire!
It looks good standalone but now looks less good scattered by the script.  Allow it more items or scale these up?  Let’s try the latter.  Yeah, a bit better, and I even feel I can halve the number of particles again.  Here it is before that scaling.  (Ignore the fake snowman model — I needed to ensure the fire was the right scale.  Also probably best to ignore the odd music — is it just me that ends up, after screen-capture, realizing you can hear typing sounds and that it’s time to tap the free audio? 😀 )

A dilemma… Or burn it all!

I’m facing a hard question at the moment: what to work on next.  It’s always tricky.  The AI is sufficient for play-testing but needs proper trials to reveal the most important areas to improve next.  There are many areas I’d like to improve, but for showing the game, AI isn’t the most important thing.  For that, the full experience is what matters most.

Maybe this arena is exhausted?  By that I mean a given environment prompts certain ideas, reveals certain failings, etc.

Hmm, no — I still haven’t done what I’d originally intended with the trees — set fire to them and have them fall down! (A nice destructible environment adds to the fun, I feel.)

So, enough wondering for now.  Fire time!

So, how to implement spreading?  I want lots of fires, efficiently.  A GameObject per fire centre might kill things.  The lowest-cost option is a single ParticleSystem (perhaps a “Particle Playground” or “PA Particle Field” one for the turbulence?).  Use a one-off sphere-cast to detect areas to spread to.  Of course, we actually want things to be affected by the fire, so this might be ‘too efficient’.  Also it would be nice to have parts burn out — ideally from exhausting fuel, which we’ll need to specify per thing affected.  Let’s call YAGNI, do the simplest implementation in a new project and test performance.  Then bring it into the actual SnwScf project and see how it performs there.

The first version simply has a trigger and spawns on anything entering the trigger.  This worked but needed limiting so it didn’t overlay itself (since new instances generated new OnTriggerEnter() calls).  Also, since triggers give a Collider (not a Collision), I needed to ray-cast to find the actual point on the Collider.  I did the obvious thing and cast from my own centre to the Collider’s Transform centre, but then realized the fire would only ever spread towards the centre of the subject!

Next I switched to a random point from the cardinal directions (picture 8 points at cardinal points around the fire, but cast down to the surface from above).
To make it more organic, I limited it to one of these points and, after listening to more of this episode of Game Developer’s Radio on juiciness where they discuss good randomness, I decided to implement the Grab Bag Random facility Devon talked about. I’ve put the source up since it’s kinda handy if you want this sort of randomness.

This worked and looked OK on the fence I was testing it on, but looked like a game of Snake on a flat surface!

Needed to switch to all three points! This fills a surface but would look less good on other shapes and doesn’t handle going down.
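For the curious, the probe at this stage was roughly this shape (illustrative names and numbers; the 3D version below extends the same idea, and the grab-bag randomness picks which points actually get used):

using UnityEngine;

public class FireSpreader : MonoBehaviour
{
    public GameObject firePrefab;      // the fire 'centre' to spawn at each new point
    public float spreadRadius = 1.5f;
    public float castHeight = 2f;
    public int pointsAround = 8;

    // Cast down from a ring of points above the fire; spawn a new fire wherever
    // a surface is hit.
    public void TrySpread()
    {
        for (int i = 0; i < pointsAround; i++)
        {
            float angle = i * Mathf.PI * 2f / pointsAround;
            Vector3 above = transform.position
                + new Vector3(Mathf.Cos(angle), 0f, Mathf.Sin(angle)) * spreadRadius
                + Vector3.up * castHeight;
            RaycastHit hit;
            if (Physics.Raycast(above, Vector3.down, out hit, castHeight * 2f))
                Instantiate(firePrefab, hit.point, Quaternion.identity);
        }
    }
}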

[Animation: flame spreading around the surface]

Switched to a 3D filling pattern — works well!

[Animation: flame spread with the 3D filling pattern]

Or, setting spread range to the radius produces this rather dense result!

[Screenshot: the dense spread result]

And, replacing the sphere with some simple Particle Playground flames:

[Video]

Of course the prefab I quickly pasted in isn’t a good final version but it shows the idea. The final one will need to build incrementally (so the flames don’t seem to suddenly jump) and generally look better (include some dark multiply particles for contrast or ideally use a volumetric shader).

Anyway, that’s it for Sunday.