A quick note: this month I left the game company where I had spent almost four years and joined a company doing cloud rendering, starting in a graphics-programmer role. Right after joining, I helped support features such as planar reflections and the texture-copy passes under URP and HDRP.

The project I have been assigned is cloud-rendering related, with the goal of good visuals and performance on the host side. Before it kicked off I read through Unity's render-streaming material, and now that things have quieted down, here is a brief summary of what I saw.

        I. Unity officially provides a demo for render streaming; see: About Unity Render Streaming | Unity Render Streaming | 3.1.0-exp.6

The code I read is exp.3. I won't go into how to install and use the package here.

        II. The package ships with demos, including Broadcast.unity, which hosts the server-side logic, and client-side scenes such as Receiver and the RenderPipeline samples.

        III. Communication is built on the WebRTC package. Installing it pulls in its dependencies automatically, and at runtime you also need to start the external server, webserver.exe, as described in the documentation.

        IV. Two things made me curious while reading this package: how the server transmits its rendered frames to the client, and how the client's keyboard input is transmitted to the server. Below is a somewhat detailed walkthrough of both, based on the code I've read over the past few days.

  1. How are frames transmitted?

        a. Frames are sent from the server to the client. Open the Broadcast.unity scene and you will find ScreenStreamSender.cs on the camera. This script creates a RenderTexture and assigns it to a VideoStreamTrack it creates; the VideoStreamTrack is responsible for transmitting that RT to the remote peer:

        protected override MediaStreamTrack CreateTrack()
        {
            RenderTexture rt;
            if (m_sendTexture != null)
            {
                rt = m_sendTexture;
                RenderTextureFormat supportFormat =
                    WebRTC.WebRTC.GetSupportedRenderTextureFormat(SystemInfo.graphicsDeviceType);
                GraphicsFormat graphicsFormat =
                    GraphicsFormatUtility.GetGraphicsFormat(supportFormat, RenderTextureReadWrite.Default);
                GraphicsFormat compatibleFormat = SystemInfo.GetCompatibleFormat(graphicsFormat, FormatUsage.Render);
                GraphicsFormat format = graphicsFormat == compatibleFormat ? graphicsFormat : compatibleFormat;

                if (rt.graphicsFormat != format)
                {
                    Debug.LogWarning(
                        $"This color format:{rt.graphicsFormat} not support in unity.webrtc. Change to supported color format:{format}.");
                    rt.Release();
                    rt.graphicsFormat = format;
                    rt.Create();
                }

                m_sendTexture = rt;
            }
            else
            {
                RenderTextureFormat format =
                    WebRTC.WebRTC.GetSupportedRenderTextureFormat(SystemInfo.graphicsDeviceType);
                rt = new RenderTexture(streamingSize.x, streamingSize.y, depth, format) { antiAliasing = antiAliasing };
                rt.Create();
                m_sendTexture = rt;
            }

            // The texture obtained by ScreenCapture.CaptureScreenshotIntoRenderTexture is different between OpenGL and other Graphics APIs.
            // In OpenGL, we got a texture that is not inverted, so need flip when sending.
            var isOpenGl = SystemInfo.graphicsDeviceType == GraphicsDeviceType.OpenGLCore ||
                           SystemInfo.graphicsDeviceType == GraphicsDeviceType.OpenGLES2 ||
                           SystemInfo.graphicsDeviceType == GraphicsDeviceType.OpenGLES3;

            //return new VideoStreamTrack(rt, isOpenGl);
            m_curTrack = new VideoStreamTrack(rt, isOpenGl);

            return m_curTrack;
        }
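
For context on where this track ends up, here is a minimal sketch of my own (not the demo's code) showing roughly how a RenderTexture-backed VideoStreamTrack gets attached to a peer connection. The exact constructor overloads vary across Unity WebRTC versions, so treat the signatures as assumptions; signaling and offer/answer exchange are omitted.

using Unity.WebRTC;
using UnityEngine;

// My own sketch, not the demo's code: create a track from a RenderTexture
// and hand it to a peer connection.
public class TrackAttachSketch : MonoBehaviour
{
    RTCPeerConnection m_peer;
    VideoStreamTrack m_track;
    RenderTexture m_rt;

    void Start()
    {
        var format = WebRTC.GetSupportedRenderTextureFormat(SystemInfo.graphicsDeviceType);
        m_rt = new RenderTexture(1280, 720, 0, format);
        m_rt.Create();

        m_track = new VideoStreamTrack(m_rt);  // the demo's overload also takes an OpenGL flip flag
        m_peer = new RTCPeerConnection();
        m_peer.AddTrack(m_track);              // offer/answer exchange happens via the signaling server

        // Older plugin versions (the 2.x era this demo targets) also require
        // WebRTC.Initialize() and this coroutine to drive encoding.
        StartCoroutine(WebRTC.Update());
    }
}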

        b. Since the RT is what gets transmitted, the next question is how that RT is maintained and updated. Each frame the demo calls ScreenCapture.CaptureScreenshotIntoRenderTexture(m_screenTexture) to read the screen contents into an intermediate RT, then blits it into m_sendTexture. The capture runs in a coroutine that yields WaitForEndOfFrame, because ScreenCapture only produces a valid image once the frame has finished rendering:


        IEnumerator RecordScreenFrame()
        {
            while (true)
            {
                OnUpdateFramed?.Invoke();
                yield return new WaitForEndOfFrame();

                if (!connections.Any() || m_sendTexture == null || !m_sendTexture.IsCreated())
                {
                    continue;
                }

                if (m_precount == m_curcount)
                {
                    continue;
                }
                else
                {
                    m_precount = m_curcount;
                }

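                // setCameraRT appears to be a custom extension in this build of the
                // plugin: it sends the camera pose and timestamps alongside the frame.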
                m_curTrack.setCameraRT(m_Camera.transform.rotation.eulerAngles, m_Camera.transform.position, Convert.ToInt64(timestamp), renderTime);
                ScreenCapture.CaptureScreenshotIntoRenderTexture(m_screenTexture);
                Graphics.Blit(m_screenTexture, m_sendTexture, material);
            }
        }


    c. Earlier, while profiling, I noticed one large cost. At first I wrongly judged it to be the overhead of CaptureScreenshotIntoRenderTexture, so I added a pass to the render pipeline to read the RT directly and removed that call. The cost was still there, though. After tracing deeper into the code, I found the stall comes from the underlying WebRTC plugin: when it actually sends this RT, it encodes it by issuing a plugin event through a CommandBuffer:


        public static void Encode(IntPtr callback, IntPtr track)
        {
            _command.IssuePluginEventAndData(callback, (int)VideoStreamRenderEventId.Encode, track);
            Graphics.ExecuteCommandBuffer(_command);
            _command.Clear();
        }

 Optimizing this would be fairly involved (in fact, newer versions have since optimized it; when I measured, it took only 2.x ms).
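
To make this cost show up clearly in the Profiler, one option (my own sketch, not code from the plugin) is to bracket the plugin event with CommandBuffer sampling markers, so the encode dispatch appears as a named scope on the render thread. Note this captures the render-thread dispatch; true GPU/encoder time may still need a GPU profiler.

using System;
using UnityEngine;
using UnityEngine.Rendering;

// Sketch: the same Encode call as above, wrapped in profiler samples so it
// shows up as "WebRTC.Encode" in the Profiler timeline.
// (VideoStreamRenderEventId is the plugin's internal enum, as used above.)
public static void EncodeProfiled(CommandBuffer command, IntPtr callback, IntPtr track)
{
    command.BeginSample("WebRTC.Encode");
    command.IssuePluginEventAndData(callback, (int)VideoStreamRenderEventId.Encode, track);
    command.EndSample("WebRTC.Encode");
    Graphics.ExecuteCommandBuffer(command);
    command.Clear();
}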

2. The input system

        a. The input system is built on Unity's Input System package.

        b. On the server, InputReceiver.cs (attached to the main camera in the Broadcast.unity scene) is the component the gameplay layer interacts with; you can freely bind handlers that fire when client input arrives.


        c. On the client, InputSender is the counterpart; you can find the component in the corresponding client scenes.

        d. How InputReceiver and InputSender are implemented underneath is quite interesting. A few points from the code I've read:

        d.1: InputSender subscribes to Input System events, packs each event it receives into a message, and sends it to the server:

        public Sender()
        {
            InputSystem.onEvent += OnEvent;
            InputSystem.onDeviceChange += OnDeviceChange;
            InputSystem.onLayoutChange += OnLayoutChange;

            _onEvent = (InputEventPtr ptr, InputDevice device) => { onEvent?.Invoke(ptr, device); };
            _corrector = new InputPositionCorrector(_onEvent);
        }


        private unsafe void SendEvent(InputEventPtr eventPtr, InputDevice device)
        {
            if (m_Subscribers == null)
                return;

            // REVIEW: we probably want to have better control over this and allow producing local events
            //         against remote devices which *are* indeed sent across the wire.
            // Don't send events that came in from remote devices.
            if (device != null && device.remote)
                return;

            var message = NewEventsMsg.Create(eventPtr.data, 1);
            if (first_send)
            {
                TimeSpan ts = DateTime.UtcNow - new DateTime(1970, 1, 1, 0, 0, 0, 0);
                Debug.LogError("time---lxf first_send:" + ts.TotalMilliseconds.ToString());
                first_send = false;
            }
            Send(message);
        }

        d.2: When InputReceiver receives a message, it unpacks it, reconstructs the events the Input System expects, and queues them into the Input System to trigger the input:

        void IObserver<Message>.OnNext(Message msg)
        {
            switch (msg.type)
            {
                case MessageType.Connect:
                    ConnectMsg.Process(this);
                    break;
                case MessageType.Disconnect:
                    DisconnectMsg.Process(this, msg);
                    break;
                case MessageType.NewLayout:
                    NewLayoutMsg.Process(this, msg);
                    break;
                case MessageType.RemoveLayout:
                    RemoveLayoutMsg.Process(this, msg);
                    break;
                case MessageType.NewDevice:
                    NewDeviceMsg.Process(this, msg);
                    break;
                case MessageType.NewEvents:
                    InputStaticData.OnCameraPosValueChange?.Invoke(msg.posX, msg.posY, msg.posZ, msg.rotateX, msg.rotateY, msg.rotateZ, msg.fx, msg.fy, msg.cx, msg.cy, msg.timestamp);
                    NewEventsMsg.Process(this, msg);
                    break;
                case MessageType.ChangeUsages:
                    ChangeUsageMsg.Process(this, msg);
                    break;
                case MessageType.RemoveDevice:
                    RemoveDeviceMsg.Process(this, msg);
                    break;
                case MessageType.StartSending:
                    StartSendingMsg.Process(this);
                    break;
                case MessageType.StopSending:
                    StopSendingMsg.Process(this);
                    break;
            }
        }

            public static unsafe void Process(InputRemoting Receiver, Message msg)
            {
                var manager = Receiver.m_LocalManager;

                fixed (byte* dataPtr = msg.data)
                {
                    var dataEndPtr = new IntPtr(dataPtr + msg.data.Length);
                    var eventCount = 0;
                    var eventPtr = new InputEventPtr((InputEvent*)dataPtr);
                    var senderIndex = Receiver.FindOrCreateSenderRecord(msg.participantId);

                    while ((Int64)eventPtr.data < dataEndPtr.ToInt64())
                    {
                        // Patch up device ID to refer to local device and send event.
                        var remoteDeviceId = eventPtr.deviceId;
                        var localDeviceId = Receiver.FindLocalDeviceId(remoteDeviceId, senderIndex);
                        eventPtr.deviceId = localDeviceId;

                        if (localDeviceId != InputDevice.InvalidDeviceId)
                        {
                            // TODO: add API to send events in bulk rather than one by one
                            manager.QueueEvent(eventPtr);
                        }

                        ++eventCount;
                        eventPtr = eventPtr.Next();
                    }
                }
            }

        public override void QueueEvent(InputEventPtr ptr)
        {
            InputDevice device = InputSystem.GetDeviceById(ptr.deviceId);

            // mapping sender coordinate system to receiver one.
            if (EnableInputPositionCorrection && device is Pointer && ptr.IsA<StateEvent>())
            {
                _corrector.Invoke(ptr, device);
            }
            else
            {
                base.QueueEvent(ptr);
            }
        }

        public virtual void QueueEvent(InputEventPtr eventPtr)
        {
            InputSystem.QueueEvent(eventPtr);
        }
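
The position-correction branch above exists because the sender's and receiver's screen resolutions can differ, so pointer coordinates have to be remapped. As a rough illustration (a hypothetical helper using simple proportional scaling; the actual InputPositionCorrector may do more), the mapping looks like this:

using UnityEngine;

static class PointerRemapSketch
{
    // Hypothetical helper: remap a pointer position from the sender's screen
    // space into the receiver's by scaling each axis proportionally.
    public static Vector2 Map(Vector2 senderPos, Vector2 senderSize, Vector2 receiverSize)
    {
        return new Vector2(
            senderPos.x / senderSize.x * receiverSize.x,
            senderPos.y / senderSize.y * receiverSize.y);
    }
}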

        d.3: On the server, BroadcastSample.cs binds the relevant Input System actions to SimpleCameraControllerV2.cs, which applies the client's input to the scene camera's position, rotation, and so on:

        private void Start()
        {
            SyncDisplayVideoSenderParameters();

            if (renderStreaming.runOnAwake)
                return;
            if(settings != null)
                renderStreaming.useDefaultSettings = settings.UseDefaultSettings;
            if (settings?.SignalingSettings != null)
                renderStreaming.SetSignalingSettings(settings.SignalingSettings);
            renderStreaming.Run();

            inputReceiver.OnStartedChannel += OnStartedChannel;
            var map = inputReceiver.currentActionMap;
            map["Movement"].AddListener(cameraController.OnMovement);
            map["Look"].AddListener(cameraController.OnLook);
            map["ResetCamera"].AddListener(cameraController.OnResetCamera);
            map["Rotate"].AddListener(cameraController.OnRotate);
            map["Position"].AddListener(cameraController.OnPosition);
            map["Point"].AddListener(uiController.OnPoint);
            map["Press"].AddListener(uiController.OnPress);
            map["PressAnyKey"].AddListener(uiController.OnPressAnyKey);
        }
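
The demo's SimpleCameraControllerV2 isn't reproduced here, but the listeners bound above follow the standard Input System callback shape. A minimal sketch (hypothetical bodies, assuming the usual InputAction.CallbackContext signature):

using UnityEngine;
using UnityEngine.InputSystem;

// Sketch of the handler shape the bindings above expect; the demo's
// SimpleCameraControllerV2 fills these in with its own camera logic.
public class CameraControllerSketch : MonoBehaviour
{
    Vector2 m_movement;

    public void OnMovement(InputAction.CallbackContext context)
    {
        m_movement = context.ReadValue<Vector2>();   // WASD / left stick
    }

    public void OnLook(InputAction.CallbackContext context)
    {
        var delta = context.ReadValue<Vector2>();    // mouse delta / right stick
        transform.Rotate(-delta.y * 0.1f, delta.x * 0.1f, 0f);
    }
}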