Volumetric Fog Shader - Camera Issue

Submitted by 流过昼夜 on 2019-12-11 06:03:59

Question


I am trying to build an infinite fog shader. The fog is applied to a 3D plane.
For the moment I have a Z-depth fog, and I am running into an issue.

As you can see in the screenshot, there are two views.
The green color is my 3D plane. The problem is the red line: it seems that this line depends on my camera, which is not good, because when I rotate the camera the line moves with the camera's position and rotation.

I don't know where it comes from or how to make my fog limit independent of the camera position.

Shader

Pass {
    CGPROGRAM
        #pragma vertex vert
        #pragma fragment frag
        #include "UnityCG.cginc"

        uniform float4      _FogColor;
        uniform sampler2D   _CameraDepthTexture;
        float               _Depth;
        float               _DepthScale;

        struct v2f {
            float4 pos : SV_POSITION;
            float4 projection : TEXCOORD0;
            float4 screenPosition : TEXCOORD1;
        };

        v2f vert(appdata_base v) {
            v2f o;
            o.pos = mul(UNITY_MATRIX_MVP, v.vertex);

            // Inlined ComputeGrabScreenPos(o.pos)
            float4 position = o.pos;
            #if UNITY_UV_STARTS_AT_TOP
            float scale = -1.0;
            #else
            float scale = 1.0;
            #endif
            float4 p = position * 0.5f;
            p.xy = float2(p.x, p.y * scale) + p.w;
            p.zw = position.zw;
            o.projection = p;

            // Inlined ComputeScreenPos(o.pos)
            position = o.pos;
            float4 q = position * 0.5f;
            #if defined(UNITY_HALF_TEXEL_OFFSET)
            q.xy = float2(q.x, q.y * _ProjectionParams.x) + q.w * _ScreenParams.zw;
            #else
            q.xy = float2(q.x, q.y * _ProjectionParams.x) + q.w;
            #endif
            #if defined(SHADER_API_FLASH)
            q.xy *= unity_NPOTScale.xy;
            #endif
            // Keep clip-space zw, as ComputeScreenPos does, so screenPosition.w
            // holds the eye depth of this vertex (a stray "q.zw = 1.0f;" here
            // would overwrite it with a constant).
            q.zw = position.zw;
            o.screenPosition = q;

            return o;
        }

        sampler2D _GrabTexture; // unused in this pass

        float4 frag(v2f IN) : COLOR {
            // tex2Dproj expects a float4 so the perspective divide uses .w
            float4 uv = UNITY_PROJ_COORD(IN.projection);
            float depth = UNITY_SAMPLE_DEPTH(tex2Dproj(_CameraDepthTexture, uv));
            depth = LinearEyeDepth(depth);
            return saturate((depth - IN.screenPosition.w + _Depth) * _DepthScale);
        }
    ENDCG
}


Next I want to rotate my fog to get a Y-depth fog, but I don't know how to achieve this effect.
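For a Y-depth (height) fog, one approach is to make the fog factor depend on the world-space Y of the scene fragment instead of its eye-space depth. A minimal sketch, assuming the v2f is extended with a world-space view ray so the scene position can be reconstructed from the depth buffer (`viewRay` and `_FogTop` are illustrative names, not part of the original shader):

```
float _FogTop; // illustrative: world-space Y where the fog surface sits

float4 frag_heightFog(v2f IN) : COLOR {
    // Depth of the scene behind the fog volume, in linear eye units.
    float sceneDepth = LinearEyeDepth(
        UNITY_SAMPLE_DEPTH(tex2Dproj(_CameraDepthTexture, UNITY_PROJ_COORD(IN.projection))));

    // Reconstruct the scene fragment's world position from a per-vertex view
    // ray (assumed added to v2f and scaled so that
    // worldPos = _WorldSpaceCameraPos + viewRay * sceneDepth).
    float3 worldPos = _WorldSpaceCameraPos + IN.viewRay * sceneDepth;

    // Fog thickens the further the fragment sits below _FogTop,
    // independent of camera position or rotation.
    float fog = saturate((_FogTop - worldPos.y) * _DepthScale);
    return float4(_FogColor.rgb, fog);
}
```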


Answer 1:


I see two ways to achieve what you want:

  1. Render the depth of your plane to a texture and compute the fog from the difference between the plane's depth and the object's depth: 0 if the object's depth is smaller, and (objDepth - planeDepth) * scale if it is bigger.

  2. Instead of rendering to a texture, compute the distance to the plane directly in the shader and use that.
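Option 2 can be sketched as follows, for an arbitrarily oriented fog plane. This is a hedged sketch, assuming the scene fragment's world position is reconstructed from `_CameraDepthTexture` via a view ray; `viewRay`, `_PlanePoint`, and `_PlaneNormal` are illustrative names, not part of the original shader:

```
// Illustrative uniforms describing the fog plane in world space.
float3 _PlanePoint;   // any point on the plane
float3 _PlaneNormal;  // unit normal; fog fills the half-space behind the plane

float4 frag_planeFog(v2f IN) : COLOR {
    // Linear eye depth of the scene behind the fog plane.
    float sceneDepth = LinearEyeDepth(
        UNITY_SAMPLE_DEPTH(tex2Dproj(_CameraDepthTexture, UNITY_PROJ_COORD(IN.projection))));

    // Reconstruct the scene fragment's world position (assumes v2f carries a
    // world-space view ray scaled so worldPos = camera + viewRay * sceneDepth).
    float3 worldPos = _WorldSpaceCameraPos + IN.viewRay * sceneDepth;

    // Signed distance from the scene fragment to the fog plane:
    // negative on the fog side, positive on the clear side.
    float dist = dot(worldPos - _PlanePoint, _PlaneNormal);

    // No fog on the clear side; fog thickens with distance behind the plane.
    float fog = saturate(-dist * _DepthScale);
    return float4(_FogColor.rgb, fog);
}
```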

I am not sure exactly what your shader does, since I am not very familiar with Unity surface shaders, but judging from the code and the result it is doing something different.




Answer 2:


It seems that this is caused by _CameraDepthTexture; that's why the depth is computed relative to the camera position.
But I don't know how to correct it... It seems there is no way to get the depth from another point. Any ideas?

Here is another example. In green you can "see" the object, and the blue line is where, in my view, the fog should be.



Source: https://stackoverflow.com/questions/17123558/volumetric-fog-shader-camera-issue
