Unreal Engine: How can we achieve soft skin physics in Unreal Engine?

darkevilhum

Newbie
Sep 9, 2017
76
71
I see (except for how you can place a Niagara emitter on UV coordinates, but other than that I get it)
To clarify on that a bit, an image:
ss+(2024-09-08+at+01.51.27).jpg

This is my simple actor which has a function "RenderParticle".

This function takes a render target and a uv position.

It then calculates the local position equivalent of the given UV coordinates relative to the SceneCaptureComponent2D (note that the particle system is childed to this). Then it moves the NiagaraSystem to that local position and activates the emitter and SceneCapture to write the result into a RT. By doing it this way, we're effectively unwrapping as we write to the RT.

So for example, let's say we're working with a 1024 sized render target.

This actor will have the SceneCaptureComponent2D's default (starting position) stored.

Using this, it can map the coordinates. E.g. UV 0.5,0.5 would be the local position X,0,0 for the NiagaraSystem (dead center of the scene capture on those two axes).
(X doesn't matter; it's the distance from the SceneCapture2D, but that is set to orthographic and 1024 in this example.)

A UV of 0,0 would be local position X,-512,-512 (bottom left corner, as I believe that's where UV coords start. Or is it the top left? I forget.)

That's it basically.

In my actor I calculate the local position using the defined render target size instead, though, so it's more flexible. The actual BP code is using lerps, I think.

E.g.

float size = 1024;
vec2 UV;

local_y_position = lerp( UV.x, -(size * 0.5), (size * 0.5) )
local_z_position = lerp( UV.y, -(size * 0.5), (size * 0.5) )

Does this pseudo code make sense? My actual BP is doing a lot of other unrelated stuff tied to other systems in my project so it would have just been more confusing to show that haha.
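In plain C-style code (not my actual Blueprint, just an equivalent sketch with made-up names like `UVToLocal`, and using the conventional `lerp(a, b, alpha)` argument order), the mapping would look something like:

```cpp
#include <cassert>

// Local Y/Z offset relative to the orthographic SceneCaptureComponent2D.
struct LocalPos { float y; float z; };

// Standard linear interpolation: alpha = 0 gives a, alpha = 1 gives b.
float Lerp(float a, float b, float alpha) { return a + (b - a) * alpha; }

// Map a 0..1 UV coordinate to a local position in capture space.
// `size` is the capture's ortho width (1024 in the example).
LocalPos UVToLocal(float u, float v, float size) {
    const float half = size * 0.5f;
    return {
        Lerp(-half, half, u),  // UV.x -> local Y
        Lerp(-half, half, v),  // UV.y -> local Z
    };
}
```

So UV 0.5,0.5 gives local Y,Z of 0,0 (dead center) and UV 0,0 gives -512,-512, matching the examples above.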
 
Last edited:

Velomous

Member
Jan 14, 2024
233
200
I think I mostly get it, so you get a render target with correct relative coordinates sorta the same way we did with the UV Unwrap/Mesh Painting code; Then you get the scene capture actor's relative space Y and Z coordinate and convert them to UV Coords in the way shown by the pseudocode at the bottom; And then you just use the resulting rendertarget as a mask in the material?

(Also UV 0,0 is top left)

And you haven't shown the material yet, but how are you using the mask in the material? It can't be a sphere mask like we did for the mesh painting; you're drawing a semen texture or material of some sort at the desired location, so how are you doing that?

I remember I spent a lot of time trying to paint a tattoo on a mesh with the uv unwrap mask but it was absolutely hopeless, I simply couldn't figure it out.

I am also a bit confused how the calculated UV coordinates can match the real coordinates so well, because the UV islands can be laid out in entirely different ways on each mesh, so how did you get it to look so accurate? Is it just pure chance that your mesh has just the right kind of UV layout for it to work? Or am I missing something else? (That was, if I recall, the whole reason why we had to use a UV unwrap for the mesh painting rather than doing it the way you've described.)

Actually that reminds me, the semen particle test you did a while back looked pretty good too in the sense that, that method would follow gravity no matter what position the character is in and it would have no problems with uv seams. How complex was that method? I remember you mentioning needing to write particles to every vertex of the mesh (that could be expensive with a high density mesh?). Could that be optimized to place particles only where needed? I suspect Head Game works similarly on that front.
It's surprisingly less heavy than you'd expect (probably cheaper than scene capture at least) for the sheer number of particles, but you must set the emitter properties to GPU. When you said (way) earlier in the thread that you had performance issues with some particle thing, I'm pretty sure it was just that you were spawning a lot of particles on CPU, unless you have a really bad GPU...

And yes, it is possible to optimize it by placing particles only in localized areas; I theorized about this a bit back when I was covering Niagara softbody. To do that, though, you need to identify the individual triangles where you want it to show. I remember you can specify a range, and I think you could also specify individual triangles (maybe with an array or something). It's been a while since I looked at it, but it should definitely be possible; the part about identifying the triangles could potentially be very complicated, though, I just ain't sure.

As for how I did it, it was not very complicated. I believe I explained the process here in detail; if memory serves, the only change I've made since then was replacing that ugly scratchpad with a loop instead of checking each particle individually with its own code (performance-wise it shouldn't change anything, it just looks better and is easier to work with). The process for spawning particles on each triangle is the same as described in the softbody tutorial, and it's surprisingly less expensive than you'd think, as you can see in that same post. There is a limit where things become super laggy, but that limit is high, and if you are already using Niagara softbody you could potentially plug this semen effect into it, using the already spawned softbody particles as a guide for the semen ones.

The trick here is that you need to track each individual "projectile" particle that was spawned (so the key to making this effect successful performance-wise is to keep the number of projectiles low; I believe I used 10 or so).

The biggest problem with the technique, rather than performance, is that Niagara collisions are finicky. I have a few potential solutions to this. For instance, CPU and GPU particle collisions are very different (I think the CPU ones are way more accurate), so one option is to use CPU collisions; but within Niagara you cannot forward data from a CPU emitter to a GPU one or vice versa, so you would instead have to forward the data to a Blueprint and then send it back from the Blueprint to the GPU emitter (I've done this before for another thing). An alternate solution would be to check for proximity rather than collisions; that way the triangle particles would (probably) always pick up the projectile ones. To make it work properly, do some vector math (velocity and dot product) to check which way the particle is moving (towards or away from the triangle) and only mark it if the projectile was moving towards the triangle when it entered the desired proximity. (The downside then would be that when particles fail to collide and pass through the mesh, you still would not get the leaking effect.)
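As a rough sketch of that proximity-plus-direction check in plain C++ (illustrative names, not actual Niagara scratch code):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3  Sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
float Length(Vec3 v)      { return std::sqrt(Dot(v, v)); }

// A triangle particle "picks up" a projectile only if the projectile is
// inside the proximity radius AND its velocity points toward the triangle.
bool ShouldMark(Vec3 trianglePos, Vec3 projectilePos,
                Vec3 projectileVelocity, float radius) {
    Vec3 toTriangle = Sub(trianglePos, projectilePos);
    if (Length(toTriangle) > radius) return false;      // outside proximity
    // Positive dot product means moving toward the triangle.
    return Dot(projectileVelocity, toTriangle) > 0.0f;
}
```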

I also think a way to optimize would be to despawn the triangle particles a set amount of time after the initial collision, or just after an initial amount of time generally speaking (I'm already doing this for my effect; we do not want a duplicate set of per-triangle particles, after all). Another would be to only spawn them on triangles that are on the same side of the body the projectile effect is coming from; or perhaps a more advanced version that checks whether the triangle has unobstructed LOS to the emitter/system location (this is where the projectile will actually come from) and only spawns them if so. That would potentially reduce the particles by half.
 

darkevilhum

Newbie
Sep 9, 2017
76
71
I think I mostly get it, so you get a render target with correct relative coordinates sorta the same way we did with the UV Unwrap/Mesh Painting code; Then you get the scene capture actor's relative space Y and Z coordinate and convert them to UV Coords in the way shown by the pseudocode at the bottom; And then you just use the resulting rendertarget as a mask in the material?
Yep that's exactly right.

And you haven't shown the material yet, but how are you using the mask in the material, it can't be a sphere mask like we did for the meshpainting, you're drawing a semen texture or material of some sort at the desired location, how are you doing that?

I remember I spent a lot of time trying to paint a tattoo on a mesh with the uv unwrap mask but it was absolutely hopeless, I simply couldn't figure it out.

I am also a bit confused how the calculated UV coordinates can match the real coordinates so well because the UV islands can be laid out entirely different ways on each mesh so how did you get it to look so accurate? Is it just pure chance, your mesh just has just the right kind of uv layout for it to work? Or am I missing something else? (That was if i recall the whole reason why we had to use a UV Unwrap for the mesh painting rather than doing it the way you've described).
So the main function of the material is dead simple: it quite literally just lerps between the base skin colour and white, using the mask as the alpha of the lerp. We don't need a sphere mask because the shape of the semen all comes from the mask, since the render target is written to for a few seconds and is basically recording a "dripping" particle effect. We could record any kind of effect here, but by doing this, all you have to do to get some basic effect is apply the mask directly to the mesh.

You don't see any seams there because in my gif I'm just hitting a single mesh part/texture. As with all Gen8 Daz figures, the torso/arms/legs are separate. With this method, the caveats are that
  1. It doesn't follow gravity, but rather the "UV down".
  2. It will of course clip into seams and vanish. (This is partly why I made the dripping of the particle very short.)

The material just has a lot in it to make it look good: using the mask to make a normal map, roughness, some noise to make the colour less linear, etc.

Making it look good was the bulk of the material, and it's why it's too messy to share atm:
You don't have permission to view the spoiler content. Log in or register now.

But the actual logic for using the mask is:
You don't have permission to view the spoiler content. Log in or register now.

I suspect your confusion is coming from the mask itself and how I'm making it. Rather than calling it a mask, it might make more sense to call it ... semen on a black background.
 
Last edited:

Velomous

Member
Jan 14, 2024
233
200
my cum try, not super cool but cheap
That's pretty good, how'd you do it?


Yep that's exactly right...

I suspect your confusion is coming from the mask itself and how I'm making it. Rather than calling it a mask, it might make more sense to call it ... semen on a black background.
So the rendertarget is drawing a particle that looks like semen which is then lerped onto the material with the code you linked at the bottom?

I tried to replicate the render target stuff. The lerp pseudocode example seems like it was missing a bunch of stuff; for instance, if you lerp with the Y position as alpha between 512 and -512, you're gonna get non-0-to-1 values in the alpha, so you'll get very big numbers. It'd only work if you use a percentage for the X and Y alpha, which means you need to limit it to some sort of bound. What is the bound you're using? Could you maybe just show me the BP? Just highlight the relevant nodes by selecting them maybe?

I feel like I am missing something crucial :unsure: i'll probably need to play with this a bit more before I truly get it but seeing your actual bp code would help even if it's messy.
 
Last edited:

darkevilhum

Newbie
Sep 9, 2017
76
71
That's pretty good, how'd you do it?



So the rendertarget is drawing a particle that looks like semen which is then lerped onto the material with the code you linked at the bottom?

I tried to replicate the rendertarget stuff, the lerp pseudocode example seems like it was missing a bunch of stuff, for instance if you lerp with the y position as alpha between 512 and -512, you're gonna get non-zero values in the alpha so you'll get very big numbers., it'd only work if you use a percentage for the X and Y alpha which means you need to limit it to some sort of bound, what is the bound you're using? Could you maybe just show me the BP? Just highlight the relevant nodes by selecting them maybe?

I feel like I am missing something crucial :unsure: i'll probably need to play with this a bit more before I truly get it but seeing your actual bp code would help even if it's messy.
That's correct, yeah. The material receives 0-1 for the U and V. In the Blueprint we convert the -512 to 512 range back to the UV range (0-1) by dividing by 1024 before we send it over to the material.
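As a plain C++ sketch (made-up names, not the actual BP; note you also need a +half-size offset, since dividing the -512..512 range by 1024 alone would give you -0.5..0.5):

```cpp
struct UV { float u; float v; };

// Inverse of the UV -> local mapping: convert a capture-space position
// in the -size/2 .. +size/2 range back to 0..1 UVs.
UV LocalToUV(float localY, float localZ, float size) {
    const float half = size * 0.5f;
    return { (localY + half) / size,
             (localZ + half) / size };
}
```

So local 0,0 maps back to UV 0.5,0.5 and local -512,-512 maps back to UV 0,0, the inverse of the earlier examples.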

These are the main functions in that actor bp I snapshotted earlier.

You don't have permission to view the spoiler content. Log in or register now.

You don't have permission to view the spoiler content. Log in or register now.

It's not much more than that; the other two functions are just variable setting and Init, where we configure the scene capture with a show-only list etc. Those are too long for me to screenshot haha
 
  • Like
Reactions: Velomous

darkevilhum

Newbie
Sep 9, 2017
76
71
This is a good watch, I think. I believe it's even possible to use this to unwrap a skeletal mesh without ever touching a scene capture component or unwrap material. This is probably the best route for semen effects and similar, but it's not high on my list of things to investigate atm. I've watched that video a few times and it's still a lot to digest.
 
  • Like
Reactions: Velomous

mikeblack

Newbie
Oct 10, 2017
34
26
Not had a ton of time to work on this lately so I'll share the soft skin material function I've put together so far.

The main soft skin function:

The main function relies on this smaller function, which in turn relies on the following two functions: , Edit: you also need this small utility function for calculating a normal from a mask, MF_GetNormal

I don't think this function is entirely in a state to just plug and play into an existing skin shader (you could certainly try), but it should at least give a good understanding of how we can create both a soft impact effect and soft displacement.

I tried to keep them as organised/explanatory as possible, but they are still very much a W.I.P., as seen by the new displacement section where I'm experimenting with the Distance Field stuff that Velomous pointed out.

Edit: I was tired yesterday and forgot to mention the context/usage of this material function. It works like any typical Material Function with attributes; you can add it to an existing skin material as you would any other.
The inputs are all static except for the RenderTargetMask, which is created at runtime by using sphere masks wherever there are collisions with the character, using this tutorial: . This render target texture is specifically for the impact effect.
Thanks for sharing this. Great stuff.
Got a soft skin working with it.

soft.gif
 

mikeblack

Newbie
Oct 10, 2017
34
26
I mean yeah, my FPS was dipping down to 40 when the scene capture went off, which made mesh painting prohibitively expensive; mind you, that FPS was already locked to 60, so it could easily have been costing even more than that every time a scene capture like that was done. And from the moment I first heard about this method, my first thought was "OK, so they're unwrapping the UV every frame that they're painting... why? Why don't they just unwrap it once?". Then I tried it and it just worked. I was fully expecting to get stuck on it, since it was such an obvious optimization that surely people would have tried it, but apparently not.
Tried to remove the scene capture in my setup, but without success. I'm not fully understanding it yet, so this might be wrong, but my guess why some use a scene capture every time:
1. If paint should be added with each step, scene capture calls are required, as otherwise the previous paint is lost.
2. If it's to capture a paint state before switching back to the original material, a scene capture is required, as the data would be lost with the material switch.
So it depends on what it's used for.
 

Velomous

Member
Jan 14, 2024
233
200
I haven't given an update since last time because i've been taking a little bit of a break from unreal. It can get a bit frustrating sometimes (especially when we're going this far outside the box) so I've been playing with godot a bit recently; I'll be continuing my attempts soon though.

Tried to remove the scene capture in my setup, but without success. I'm not fully understanding it yet, so this might be wrong, but my guess why some use a scene capture every time:
1. If paint should be added with each step, scene capture calls are required, as otherwise the previous paint is lost.
2. If it's to capture a paint state before switching back to the original material, a scene capture is required, as the data would be lost with the material switch.
So it depends on what it's used for.
1. Nah, you just set the scene capture's rendering mode to additive; then it doesn't wipe the previous paint data.
2. Even if you switch materials, the render targets wouldn't change, so you wouldn't need it then either.

If you're feeling lost with the mesh painting this should be everything you need:

We're unwrapping the UV with a scene capture, but only once instead of every frame. Technically it might be possible to just unwrap it once and save it to a render target, then just use that render target (never unwrap again; just implement the resulting unwrapped render target into your code in a more permanent way), which would allow you to simplify the code a bit and remove the scene capture entirely, although I didn't take it that far myself.
 
  • Like
Reactions: mikeblack

darkevilhum

Newbie
Sep 9, 2017
76
71
Tried to remove the scene capture in my setup, but without success. I'm not fully understanding it yet, so this might be wrong, but my guess why some use a scene capture every time:
1. If paint should be added with each step, scene capture calls are required, as otherwise the previous paint is lost.
2. If it's to capture a paint state before switching back to the original material, a scene capture is required, as the data would be lost with the material switch.
So it depends on what it's used for.
If you're referring to the soft body ripple/impact effect setup, then yeah, you most likely can't implement Velomous's scene capture performance change. The reason is that the unwrapped render target serves as a data container for "an impact occurred here". The material then uses that data to create the ripple and fade it out over time, resulting in the effect vanishing and the material outputting black/0 (no effect).

The next time you apply another impact/ripple, it would need to clear the render target and therefore unwrap once more. Otherwise you'd have two impacts/ripples the second time, and that would increase with each impact.
 
  • Like
Reactions: Velomous

mikeblack

Newbie
Oct 10, 2017
34
26
1. Nah, you just set the scene capture's rendering mode to additive; then it doesn't wipe the previous paint data.
2. Even if you switch materials, the render targets wouldn't change, so you wouldn't need it then either.

If you're feeling lost with the mesh painting this should be everything you need:
You are right, it does work without it. My setup was a bit different from the one in your linked video: I was relying on only one render target instead of two, with no Draw Material to Render Target, just a Capture Scene for every paint. I didn't know about the Draw Material to Render Target node, and it's what I was missing.

If you're referring to the soft body ripple/impact effect setup, then yeah you most likely can't implement Velomous's scene capture performance change. Reason being that unwrapped render target serves as a data container of "an impact occurred here". And then the material uses that data to create the ripple and fade it out over time resulting in the effect vanishing and the material outputting black/0 (no effect).

The next time you apply another impact/ripple, it would need to clear the render target and therefore unwrap it once more. Otherwise you'd have two impact/ripples on the second time and that would increase with each impact.
Yes, it's for the impact effect. Got it to work with the setup of the video (+improvement) Velomous linked above, which uses two render targets. The one with the hit can then be cleared before painting the impact, to get rid of the increase with each impact.
 
Last edited:

darkevilhum

Newbie
Sep 9, 2017
76
71
You are right it does work without. My setup was a bit different to the one in your linked video. Was relying only on one render target instead of two and no Draw Material to Render Target just a Capture Scene for every paint. Didn't know about the Draw Material to Render Target node and it's what I was missing.


Yes it's for the impact effect. Got it to work with the setup of the video (+improvement) Velomous did link above which uses two render targets. The one with the hit can then be cleared before painting the impact to get rid of the increase with each impact.
Ah, gotcha. With two RTs that will definitely solve the problem. I suspect you can go as far as using a very low-res texture for both RTs, as long as the end result is good enough for your personal needs.
 

TheExordick

Member
Sep 25, 2021
102
103
This is a good watch I think . I believe it's even possible to use this to unwrap a skeletal mesh without ever touching a scene capture component or unwrap material. This is probably the best route for semen effects and similar but it's not high on my list of things to investigate atm. I've watched that video a few times and it's still a lot to digest.
Niagara is so powerful and so intimidating to use! thanks for the link, very informative.

Also, thanks guys for sharing your experience. This thread is gold.
 

Velomous

Member
Jan 14, 2024
233
200
Niagara is so powerful and so intimidating to use! thanks for the link, very informative.

Also, thanks guys for sharing your experience. This thread is gold.
I actually have a tutorial series recorded that I could share for the basics of Niagara: the basics collection is 2.2 GB of videos, another collection mostly about putting those basics together is 5.5 GB, and a final one with more advanced stuff like Scratch is also 5 GB.

It's a paid course that I recorded as I went through it, because I hated how unreliable Udemy tends to be (I frequently had issues where videos would freeze up partway through and stop loading, or just wouldn't load at all), so the intent behind recording it was mainly just so I could revisit the videos without needing to go through Udemy's garbo web system.

The problem is I've never really shared anything like this so I'm not sure what the best way to upload it (or where to upload it) would be.
 

darkevilhum

Newbie
Sep 9, 2017
76
71
I actually have a tutorial series recorded I could share for the basics of niagara, the basics of niagara is a 2.2gb vid collection, another collection mostly about putting those basics together is 5.5 gigs and a final one with more advanced stuff like scratch is also 5 gigs.

It's a paid course that I recorded as I went through it because I hated how unreliable udemy tends to be (frequently had issues where videos would freeze up partway through and stop loading, or just wouldn't load at all) so the intent for recording it was mainly just so I could re-visit the videos without needing togo through udemy's garbo web system.

The problem is I've never really shared anything like this so I'm not sure what the best way to upload it (or where to upload it) would be.
That would be a huge help. For uploading it I'm not sure; perhaps Google Drive with a fresh email? I think they give you like 15 GB for free.
 

mikeblack

Newbie
Oct 10, 2017
34
26
Tried Niagara to see if it might work for a cum-flowing-down-the-body effect. Looks like it's difficult to spawn the particles at the correct location with this effect; you would need a vertex/triangle ID for the hit position on the skeletal mesh, which is tricky to get. The upside vs the material approach would be flow following gravity.
niagara.gif
 

Velomous

Member
Jan 14, 2024
233
200
Tried Niagara to see if it might work for a cum flowing down the body effect. Looks like it's difficult to spawn the particles at the correct location with the effect. Would need a vertices/triangle id of the hit position on the skeletal mesh which is tricky to get. Upside vs the material approach would be flow towards gravity.
It ain't that hard to get an accurate location; I've done it before here. It's not exactly an elegant solution, though, and I imagine you must be doing something relatively similar. It's not without its problems either: I think if I revisited this I would try to move the semen projectile from the Niagara emitter to just being a normal physical mesh, because the Niagara collisions tend not to be very reliable, especially for GPU particles. I think with the regular collision approach, GPU particles only collide if they are rendered, and they only render if they are not obscured by anything (such as the mesh or even another particle).

It is a problem I could solve with Scratch, but I think just moving the projectile itself to a Blueprint actor and forwarding its collision & position data to the Niagara emitter would be simpler at the end of the day.

It's also possible to use CPU particles for the projectile, but to forward data from CPU particles to GPU particles, the only way I know is to send the data from the CPU particle to a Blueprint actor and then send it back from the Blueprint actor to the GPU particle emitter, which is a mite convoluted.
 
  • Like
Reactions: mikeblack

darkevilhum

Newbie
Sep 9, 2017
76
71
Tried Niagara to see if it might work for a cum flowing down the body effect. Looks like it's difficult to spawn the particles at the correct location with the effect. Would need a vertices/triangle id of the hit position on the skeletal mesh which is tricky to get. Upside vs the material approach would be flow towards gravity.
Looks like it has potential. I use a plugin to handle any kind of data retrieval when tracing against a skeletal mesh; it can get the hit UVs, triangle, vertex position etc. It's super handy.