1. Introduction

To implement a frosted-glass (ground-glass) effect in Unity, three pieces are needed: grabbing the screen, obtaining the screen-space coordinates of the object being rendered, and blurring the grabbed image. This article is based on Unity's official implementation and expands on a few of its key steps.

2. Grabbing the Screen

There are several ways to grab the screen. This section only introduces the methods, without going into the underlying principles.

2.1 Unity C# API

Using a RenderTexture together with the ReadPixels API, as shown in the linked article.
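
A minimal sketch of this approach, assuming the grab is driven from a coroutine (the class and method names are illustrative, not from the linked article):

using System.Collections;
using UnityEngine;

public class ScreenGrabber : MonoBehaviour
{
    // Holds the grabbed copy of the screen; assign it to a material for further processing.
    public Texture2D ScreenCopy { get; private set; }

    public IEnumerator Grab()
    {
        // ReadPixels must run after the frame has finished rendering.
        yield return new WaitForEndOfFrame();

        ScreenCopy = new Texture2D(Screen.width, Screen.height, TextureFormat.RGB24, false);
        // Copies the currently active render target (here the back buffer) into the texture.
        ScreenCopy.ReadPixels(new Rect(0, 0, Screen.width, Screen.height), 0, 0);
        ScreenCopy.Apply();
    }
}

Kick it off with StartCoroutine(Grab()); rendering the camera into a RenderTexture and making it RenderTexture.active before the ReadPixels call works the same way.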

2.2 CommandBuffer

With a CommandBuffer, the screen can be grabbed at a suitable point in the camera's rendering, as shown below:

buf = new CommandBuffer();
buf.name = "Grab screen and blur";
m_Cameras[cam] = buf;

// copy screen into temporary RT
int screenCopyID = Shader.PropertyToID("_ScreenCopyTexture");
buf.GetTemporaryRT (screenCopyID, -1, -1, 0, FilterMode.Bilinear);
buf.Blit (BuiltinRenderTextureType.CurrentActive, screenCopyID);

The grabbed (and, in the official demo, blurred) screen can then be sampled in the shader by declaring sampler2D _GrabBlurTexture;. This is how the official demo does it.
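
A minimal fragment-shader sketch of the sampling side, assuming the command buffer exposes the blurred copy under the global name _GrabBlurTexture as stated above (the grabPos field is illustrative; how to compute it is covered in section 3):

sampler2D _GrabBlurTexture; // set globally by the command buffer

struct v2f
{
    float4 pos     : SV_POSITION;
    float4 grabPos : TEXCOORD0; // screen position computed in the vertex shader (see section 3)
};

fixed4 frag (v2f i) : SV_Target
{
    // tex2Dproj performs the divide by w
    return tex2Dproj(_GrabBlurTexture, UNITY_PROJ_COORD(i.grabPos));
}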

2.3 GrabPass

The screen can also be grabbed directly in the shader with GrabPass: declaring GrabPass{} adds a pass whose only job is to grab the screen. Depending on its content, GrabPass has two forms:
1) With a texture name, e.g. GrabPass{"_TextureName"}: the screen is grabbed only once per frame, for the first object that uses a texture named "_TextureName", and the result can be accessed in other passes.
2) With an empty body: the screen is grabbed once for every object that uses the GrabPass, and the result is accessed internally through the name "_GrabTexture".

GrabPass is convenient, but it is not performance-friendly, especially on mobile. For a usage demo, refer to the linked article.
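
A minimal sketch of the empty-body form, assuming the grabbed screen is simply output unmodified (ComputeGrabScreenPos is explained in section 3.4):

Shader "Unlit/GrabPassSketch"
{
    SubShader
    {
        Tags { "Queue" = "Transparent" }

        // Grabs the screen into the built-in _GrabTexture.
        // GrabPass{"_SharedGrabTexture"} would instead grab once per frame under a custom name.
        GrabPass { }

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _GrabTexture;

            struct v2f
            {
                float4 pos     : SV_POSITION;
                float4 grabPos : TEXCOORD0;
            };

            v2f vert (appdata_base v)
            {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                // screen position suited to sampling the grabbed texture (see section 3.4)
                o.grabPos = ComputeGrabScreenPos(o.pos);
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                // tex2Dproj performs the divide by w
                return tex2Dproj(_GrabTexture, UNITY_PROJ_COORD(i.grabPos));
            }
            ENDCG
        }
    }
}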

3. Getting Screen Coordinates

There are several ways to obtain the screen-space UV coordinates of each fragment of the object being rendered.

3.1 ComputeScreenPos

Calling ComputeScreenPos in the shader yields the screen position: in the vertex shader call o.screenPos = ComputeScreenPos(o.pos);, then in the fragment shader obtain the UV coordinates with float2 screenPos = i.screenPos.xy / i.screenPos.w; (the division by w is the perspective divide, which must happen per fragment, after interpolation).
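
Put together, a minimal sketch (the _ScreenTex sampler is a hypothetical screen texture, e.g. one produced by section 2):

#include "UnityCG.cginc"

sampler2D _ScreenTex; // hypothetical screen texture to sample

struct v2f
{
    float4 pos       : SV_POSITION;
    float4 screenPos : TEXCOORD0;
};

v2f vert (appdata_base v)
{
    v2f o;
    o.pos = UnityObjectToClipPos(v.vertex);
    // homogeneous screen position; keep the w component, do not divide here
    o.screenPos = ComputeScreenPos(o.pos);
    return o;
}

fixed4 frag (v2f i) : SV_Target
{
    // perspective divide gives normalized [0,1] screen UVs
    float2 uv = i.screenPos.xy / i.screenPos.w;
    return tex2D(_ScreenTex, uv);
}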

3.2 The VPOS semantic

Screen coordinates can also be obtained through the VPOS semantic. This requires shader model 3.0, and VPOS cannot coexist with SV_POSITION in the same vertex-to-fragment structure, so the structure needs special handling, as in the following example from the Unity user manual:

Shader "Unlit/Screen Position"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader
    {
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #pragma target 3.0

            // note: no SV_POSITION in this struct
            struct v2f {
                float2 uv : TEXCOORD0;
            };

            v2f vert (
                float4 vertex : POSITION, // vertex position input
                float2 uv : TEXCOORD0, // texture coordinate input
                out float4 outpos : SV_POSITION // clip space position output
                )
            {
                v2f o;
                o.uv = uv;
                outpos = UnityObjectToClipPos(vertex);
                return o;
            }

            sampler2D _MainTex;

            fixed4 frag (v2f i, UNITY_VPOS_TYPE screenPos : VPOS) : SV_Target
            {
                // screenPos.xy will contain pixel integer coordinates.
                // use them to implement a checkerboard pattern that skips rendering
                // 4x4 blocks of pixels

                // checker value will be negative for 4x4 blocks of pixels
                // in a checkerboard pattern
                screenPos.xy = floor(screenPos.xy * 0.25) * 0.5;
                float checker = -frac(screenPos.r + screenPos.g);

                // clip HLSL instruction stops rendering a pixel if value is negative
                clip(checker);

                // for pixels that were kept, read the texture and output it
                fixed4 c = tex2D (_MainTex, i.uv);
                return c;
            }
            ENDCG
        }
    }
}

3.3 The SV_POSITION semantic

I came across this approach by chance. I had considered it the first time I handled screen UV coordinates, but to be safe I did not use it, and I am not sure whether it has side effects on some platforms. In the fragment stage, the SV_POSITION input holds pixel coordinates, much like VPOS:

struct v2f
{
	float4 pos : SV_POSITION;
	float2 uv : TEXCOORD0;								
};

v2f vert (appdata v)
{
	v2f o;
	o.pos = UnityObjectToClipPos(v.vertex);
	o.uv = TRANSFORM_TEX(v.uv, _MainTex);
	return o;
}

float4 frag (v2f i) : SV_Target
{
	// In the fragment stage SV_POSITION holds pixel coordinates,
	// so i.pos.xy / _ScreenParams.xy would give normalized screen UVs.
	float2 screenPos = i.pos.xy;
	return tex2D(_MainTex, i.uv.xy);
}

3.4 ComputeGrabScreenPos

This helper is similar to ComputeScreenPos, and in many (perhaps most) cases the two give the same result. When sampling a texture grabbed with GrabPass, however, prefer ComputeGrabScreenPos; if you only need screen-space coordinates for general calculations, use ComputeScreenPos. The difference between the two is discussed in the linked article; roughly, ComputeGrabScreenPos flips the y coordinate according to the platform's UV origin (top vs. bottom), which matches how the grabbed texture is stored, while ComputeScreenPos flips it according to the current projection.
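
A minimal vertex-shader sketch computing both, with the field names illustrative:

#include "UnityCG.cginc"

struct v2f
{
    float4 pos       : SV_POSITION;
    float4 grabPos   : TEXCOORD0; // for sampling a GrabPass / grabbed screen texture
    float4 screenPos : TEXCOORD1; // for general screen-space calculations
};

v2f vert (appdata_base v)
{
    v2f o;
    o.pos = UnityObjectToClipPos(v.vertex);
    // y is flipped based on the platform's UV origin, matching the grabbed texture
    o.grabPos = ComputeGrabScreenPos(o.pos);
    // y is flipped based on the current projection (_ProjectionParams.x)
    o.screenPos = ComputeScreenPos(o.pos);
    return o;
}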

4. Background Blur

There are many ways to blur the grabbed screen texture; they will not be covered in detail here. Below is the Unity implementation.
1) The C# script

using UnityEngine;
using UnityEngine.Rendering;
using System.Collections.Generic;

// See _ReadMe.txt for an overview
[ExecuteInEditMode]
public class CommandBufferBlurRefraction : MonoBehaviour
{
	public Shader m_BlurShader;
	private Material m_Material;

	private Camera m_Cam;

	// We'll want to add a command buffer on any camera that renders us,
	// so have a dictionary of them.
	private Dictionary<Camera,CommandBuffer> m_Cameras = new Dictionary<Camera,CommandBuffer>();

	// Remove command buffers from all cameras we added into
	private void Cleanup()
	{
		foreach (var cam in m_Cameras)
		{
			if (cam.Key)
			{
				cam.Key.RemoveCommandBuffer (CameraEvent.AfterSkybox, cam.Value);
			}
		}
		m_Cameras.Clear();
		Object.DestroyImmediate (m_Material);
	}

	public void OnEnable()
	{
		Cleanup();
	}

	public void OnDisable()
	{
		Cleanup();
	}

	// Whenever any camera will render us, add a command buffer to do the work on it
	public void OnWillRenderObject()
	{
		var act = gameObject.activeInHierarchy && enabled;
		if (!act)
		{
			Cleanup();
			return;
		}
		
		var cam = Camera.current;
		if (!cam)
			return;

		CommandBuffer buf = null;
		// Did we already add the command buffer on this camera? Nothing to do then.
		if (m_Cameras.ContainsKey(cam))
			return;

		if (!m_Material)
		{
			m_Material = new Material(m_BlurShader);
			m_Material.hideFlags = HideFlags.HideAndDontSave;
		}

		buf = new CommandBuffer();
		buf.name = "Grab screen and blur";
		m_Cameras[cam] = buf;

		// copy screen into temporary RT
		int screenCopyID = Shader.PropertyToID("_ScreenCopyTexture");
		buf.GetTemporaryRT (screenCopyID, -1, -1, 0, FilterMode.Bilinear);
		buf.Blit (BuiltinRenderTextureType.CurrentActive, screenCopyID);
		
		// get two smaller RTs
		int blurredID = Shader.PropertyToID("_Temp1");
		int blurredID2 = Shader.PropertyToID("_Temp2");
		buf.GetTemporaryRT (blurredID