Light Pre Pass in XNA: Basic Implementation

Reposted from: http://mquandt.com/blog/2009/12/light-pre-pass-in-xna-basic-implementation/

NOTE: This article is now obsolete. An up-to-date sample and article can be found at http://mquandt.com/blog/2010/03/light-pre-pass-round-2/

In this part I will cover how to implement the basic form of the Light Pre Pass renderer, with support for point lights, and the basic Blinn-Phong shader, including Albedo texture support.

As this article is fairly advanced in nature, I have to make certain assumptions about my audience, so that I do not spend half my time explaining basics. Firstly, you should have an understanding of basic concepts such as Cameras, Fullscreen Quads (including how to render one) and rendering a mesh with custom effects.

This pretty much means that as long as you have done some 3D work before, you should be fine. It is best if you also know XNA, as I will be using it to write this implementation; however, as long as you can translate from C# and get the basic idea, that should be enough.

As you can see from these requirements, this article is not aimed at beginners; if you are looking for tutorials on how to get started with XNA for 3D development, I would recommend visiting some of the great beginner-focused XNA sites first.

Those sites will help you get started with XNA, and once you are familiar and comfortable with the concepts behind 3D graphics, you can return here to learn an advanced renderer implementation.

My focus in this article is the implementation of the renderer; as a result, I will not cover the implementation of cameras or scene graphs.

Now that the housekeeping is out of the way, we can begin.

The Renderer in C#

Light Pre Pass (LPP), or Deferred Lighting, operates in 3 stages.

  1. Depth + Normals Rendering
  2. Light Rendering
  3. Materials Rendering

These 3 stages accumulate information into render targets, which are used by the next stage, until the Materials stage produces the final image. So the first thing we must do is set up at least the following render targets:

  • Depth (SurfaceFormat.Single)
  • Normals (SurfaceFormat.Bgra1010102)
  • Lights (SurfaceFormat.Color)

depth = new RenderTarget2D(gfx, width, height, 1, SurfaceFormat.Single, RenderTargetUsage.DiscardContents);
normals = new RenderTarget2D(gfx, width, height, 1, SurfaceFormat.Bgra1010102, RenderTargetUsage.DiscardContents);
light = new RenderTarget2D(gfx, width, height, 1, SurfaceFormat.Color, RenderTargetUsage.DiscardContents);
final = new RenderTarget2D(gfx, width, height, 1, SurfaceFormat.Color, RenderTargetUsage.DiscardContents);

We use Bgra1010102 to store normals because we want maximum precision for the 3 channels we are using. The closest 32-bit format with 3 high-precision channels is 1010102, which gives 10 bits to each of the 3 channels we care about: greater precision than the 8 bits per channel of a normal A8R8G8B8 (or Color) surface format.

The Materials (final) pass can be rendered directly to the backbuffer or to a render target; this depends on your needs and is completely up to you. I have suggested SurfaceFormats above, but feel free to use your own; just note that the shaders I provide may not work correctly with your chosen format.
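To put the three stages in context, a frame flows roughly like this. This is a sketch only; the Draw* calls stand in for whatever your scene and renderer classes provide, and lightImage is the resolved light buffer texture:

// Stage 1: fill the depth and normal targets.
gfx.SetRenderTarget(0, depth);
gfx.SetRenderTarget(1, normals);
DrawSceneDepthNormals();

// Stage 2: resolve stage 1, then accumulate lights into the light target.
gfx.SetRenderTarget(0, light);
gfx.SetRenderTarget(1, null);
depthImage = depth.GetTexture();
normImage = normals.GetTexture();
DrawLights(depthImage, normImage);

// Stage 3: resolve the light buffer and shade each object with its material.
gfx.SetRenderTarget(0, final);   // or null to render to the backbuffer
lightImage = light.GetTexture();
DrawMaterials(lightImage);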

Depth + Normals

The first stage of the renderer requires you to render the Depth and Normal values for each pixel to the screen. You could instead render the position directly; however, many post-processing techniques use depth information, so it pays to render depth now and re-use it later.

First we must setup the render targets on our device, easily done with two lines of code:

gfx.SetRenderTarget(0, depth);
gfx.SetRenderTarget(1, normals);

For those who have not worked with multiple render targets before, the number in the above code indicates the render target index, and allows you to un-set and resolve the render target later.

Now you must first clear the render targets. As we are using multiple render targets, a simple call to GraphicsDevice.Clear will not suffice; instead, we render a fullscreen quad with a cheap shader that writes the clear colours out to both render targets.

struct VS_OUT
{
    float4 Position        : POSITION;
};
  
VS_OUT vs_main(float3 position : POSITION)
{
    VS_OUT output = (VS_OUT)0;
    output.Position = float4(position, 1);
  
    return output;
}
  
struct PS_OUT
{
    float4 Depth : COLOR0;
    float4 Normals : COLOR1;
};
  
PS_OUT ps_main()
{
    PS_OUT output = (PS_OUT)0;
  
    output.Depth = 1.0f;
  
    output.Normals = float4(0, 0, 0, 1);
  
    return output;
}
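On the C# side, drawing that clear quad is just a user-primitive draw. A minimal sketch, assuming clearEffect holds the shader above (that name, like the quad data here, is illustrative):

// Fullscreen quad in post-projection space; only POSITION is consumed
// by the clear shader above.
VertexPositionTexture[] quadVerts =
{
    new VertexPositionTexture(new Vector3(-1,  1, 0), new Vector2(0, 0)),
    new VertexPositionTexture(new Vector3( 1,  1, 0), new Vector2(1, 0)),
    new VertexPositionTexture(new Vector3(-1, -1, 0), new Vector2(0, 1)),
    new VertexPositionTexture(new Vector3( 1, -1, 0), new Vector2(1, 1)),
};

gfx.VertexDeclaration = new VertexDeclaration(gfx, VertexPositionTexture.VertexElements);
clearEffect.Begin();
clearEffect.CurrentTechnique.Passes[0].Begin();
gfx.DrawUserPrimitives(PrimitiveType.TriangleStrip, quadVerts, 0, 2);
clearEffect.CurrentTechnique.Passes[0].End();
clearEffect.End();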

Next you render the objects, using a special shader that writes the Depth and Normals to the two render targets. If you intend to implement Normal Mapping, or a similar technique, this is where you would calculate and combine the Normals. For the purposes of this article, only the basic per-vertex normals will be stored here.

One thing I had to do was ensure a couple of render states were set correctly, specifically DepthBufferEnable and DepthBufferWriteEnable. Ensure both of these are set to true before continuing.
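In XNA 3.1 that is simply:

gfx.RenderState.DepthBufferEnable = true;
gfx.RenderState.DepthBufferWriteEnable = true;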

The Depth and Normals shader is quite simple. First the object is transformed as it would normally be when rendering, and then the Z and W values from the transformed position are passed to the pixel shader, alongside the Normal.

struct VS_IN
{
    float4 Position   : POSITION;
    float4 Normal     : NORMAL0;
};
  
struct VS_OUT
{
    float4 Position   : POSITION;
    float4 Depth      : TEXCOORD0;
    float4 Normal     : TEXCOORD1;
};
  
VS_OUT depthNorm_VS(VS_IN input)
{
    VS_OUT output = (VS_OUT)0;
  
    float4x4 wvp = mul(World, ViewProjection);
  
    output.Position = mul(input.Position, wvp);
  
    output.Depth.xy = output.Position.zw;
  
    // Transform the normal by the world matrix, matching the row-vector
    // convention used for the position above (w = 0 ignores translation).
    output.Normal = mul(float4(input.Normal.xyz, 0), World);
  
    return output;
}

If you look at your render targets, you may see a white image for the depth buffer; this is normal, as the differences in depth between most points on an object are minuscule, and the values are close to 1. Your normals buffer, however, should look something like this:

[Screenshot: normals buffer]

Inside the pixel shader, the Z value is divided by the W value to get the depth, and that is written to the first render target. Then the Normal is normalised and shifted from a range of [-1, 1] to [0, 1].

struct PS_OUT
{
    float4 Depth : COLOR0;
    float4 Normals : COLOR1;
};
  
PS_OUT depthNorm_PS(float4 depth : TEXCOORD0, float4 normal : TEXCOORD1)
{
    PS_OUT output = (PS_OUT)0;
  
    output.Depth = depth.x / depth.y;
  
    output.Normals.rgb = 0.5f * (normalize(normal) + 1.0f);
  
    // Set alpha for both Depth and Normals to 1 (for some reason it's required)
    output.Depth.a = 1.0f;
    output.Normals.a = 1.0f;
  
    return output;
}

Now that we have our Depth and Normal values stored in the render targets, we can resolve and get their respective textures so that the lights can be rendered using this data. This is quite simple in XNA: just set the render targets on the graphics device to either another render target or null. In this case, we can set RT0 to the light buffer and RT1 to null.

gfx.SetRenderTarget(0, light);
gfx.SetRenderTarget(1, null);
depthImage = depth.GetTexture();
normImage = normals.GetTexture();

Be sure to clear the light buffer to TransparentBlack, and then we can move on to rendering the lights.
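With the light target bound as RT0, that clear is a one-liner:

gfx.Clear(Color.TransparentBlack);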

In this first tutorial, I will implement point lights only. Check back for future tutorials about implementing other types of lights, like Directional Lights, etc.

Rendering the light stage is a little bit more complicated than the Depth + Normals stage. This time around, a number of Render States must be set in the beginning, and even more for each light based on the position of the camera.

Render States

The following render states must be set when drawing the lights, to take advantage of alpha blending for blending multiple overlapping lights.

gfx.RenderState.AlphaBlendEnable = true;
gfx.RenderState.SeparateAlphaBlendEnabled = false;
gfx.RenderState.AlphaBlendOperation = BlendFunction.Add;
gfx.RenderState.SourceBlend = Blend.One;
gfx.RenderState.DestinationBlend = Blend.One;
gfx.RenderState.DepthBufferEnable = false;
gfx.RenderState.DepthBufferWriteEnable = false;

Here we are disabling depth testing so that overlapping lights can be drawn, and enabling alpha blending across all channels of the render target so that overlapping lights are combined by the hardware automatically. We also ensure that no modifications to the destination or source values are made during blending, and that additive blending is used. (Remember that lighting equations add the contributions of multiple lights together.)

Now you run through each light and set the CullMode render state based on where the camera frustum is located. If the frustum is inside or overlaps the light bounding volume (in this case a sphere), the CullMode needs to be set to CullClockwiseFace. CullCounterClockwiseFace should be set if the frustum is completely outside the light bounding volume. Remember to also ensure that the CullMode is set to CullCounterClockwiseFace after all of the lights have been rendered.
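A sketch of that per-light choice, assuming frustum is the camera's BoundingFrustum, and using the _pos/_attenuation light fields that appear in DrawLightDeferred below:

// Bounding volume of this point light: centred on the light, radius = attenuation.
BoundingSphere volume = new BoundingSphere(_pos, _attenuation);

// Inside or overlapping the volume: draw back faces; otherwise front faces.
gfx.RenderState.CullMode = frustum.Intersects(volume)
    ? CullMode.CullClockwiseFace
    : CullMode.CullCounterClockwiseFace;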

In the sample code, I use a Mesh to easily load and store the light volume, which for a point light is a sphere. A scaling matrix allows the attenuation to be changed, so be sure to update any matrices as needed.

Some notes about the next code sample:

  • cmanager is my CameraManager; it is used here to set the ViewProjection and InverseViewProjection matrices, which are required to transform the Depth back into a position for lighting.
  • caller is the Renderer class, which coordinates rendering each stage, as well as setting up and resolving the appropriate buffers.

public void DrawLightDeferred(GraphicsDevice gfx, CameraManager cmanager, Renderer caller)
{
    shader.Begin();
  
    // Set Matrix params
    cmanager.ApplyCameraParameters(ref shader);
    shader.Parameters.GetParameterBySemantic("WORLD").SetValue(world);
  
    // Set Depth and Normals buffers
    shader.Parameters["Depth_Tex"].SetValue(caller.GetDepthImage());
    shader.Parameters["Normals_Tex"].SetValue(caller.GetNormalsImage());
  
    // Set lighting params
    shader.Parameters["LightPos"].SetValue(_pos);
    shader.Parameters["Attenuation"].SetValue(_attenuation);
    shader.Parameters["SpecPower"].SetValue(SpecularPower);
    shader.Parameters["LightColor"].SetValue(LightColor.ToVector4());
  
    for (int j = 0; j < lightMesh.Meshes.Count; j++)
    {
        gfx.Indices = lightMesh.Meshes[j].IndexBuffer;
  
        for (int k = 0; k < lightMesh.Meshes[j].MeshParts.Count; k++)
        {
            for (int i = 0; i < shader.CurrentTechnique.Passes.Count; i++)
            {
                EffectPass pass = shader.CurrentTechnique.Passes[i];
                pass.Begin();
  
                gfx.VertexDeclaration = lightMesh.Meshes[j].MeshParts[k].VertexDeclaration;
  
                gfx.Vertices[0].SetSource(lightMesh.Meshes[j].VertexBuffer,
                    lightMesh.Meshes[j].MeshParts[k].StreamOffset,
                    lightMesh.Meshes[j].MeshParts[k].VertexStride);
  
                gfx.DrawIndexedPrimitives(PrimitiveType.TriangleList,
                    lightMesh.Meshes[j].MeshParts[k].BaseVertex,
                    0, // minVertexIndex, not StartIndex
                    lightMesh.Meshes[j].MeshParts[k].NumVertices,
                    lightMesh.Meshes[j].MeshParts[k].StartIndex,
                    lightMesh.Meshes[j].MeshParts[k].PrimitiveCount);
  
                pass.End();
            }
        }
    }
    shader.End();
}

Now I need to run through some helper methods I use in the upcoming point light shader. These methods handle transforming a position from Post Projection space, to Screen space, as well as calculating the half pixel offset required by DX9.

float2 postProjToScreen(float4 position)
{
    float2 screenPos = position.xy / position.w;
    return (0.5f * (float2(screenPos.x, -screenPos.y) + 1));
}
  
float2 halfPixel()
{
    return -(0.5f / float2(fViewportWidth, fViewportHeight));
}

These are simple enough, and more importantly, *just work*.

Now for the point light shader. Here the light volume is transformed as needed in a really simple vertex shader:

struct VS_OUT
{
    float4 Position            : POSITION;
    float4 LightPosition    : TEXCOORD0;
};
  
VS_OUT vs_main(float4 inPos : POSITION)
{
    VS_OUT output = (VS_OUT)0;
  
    float4x4 wvp = mul(World, ViewProjection);
  
    output.Position = mul(inPos, wvp);
    output.LightPosition = output.Position;
  
    return output;
}

The following variables are also passed to the shader for lighting calculations:

float3 LightPos;
float Attenuation;
float SpecPower;
float4 LightColor;
float3 CamPos : VIEWPOSITION;
float3 EyeDepthRay;

The key code comes in the pixel shader. The first thing needed is to transform the position of the pixel from post-projection space to screen space; this is handled by the helper method I mentioned earlier. Then the half-pixel offset is subtracted from the screen-space position, so that the values read from the Depth and Normal buffers are correct.

// Transform from post-projection to texcoords
float2 screenPos = postProjToScreen(projPos);
// DX9 half pixel offset
float2 texCoord = screenPos - halfPixel();
  
float depth = tex2D(depthSampler, texCoord).r;

Next, read the depth from the Depth buffer; if the value is not less than 1, we simply write 0 for this pixel, as there is no depth information at that point and nothing to light. If there is depth information, the lighting can be calculated for that point.

// Reconstruct position from screen space + depth
float4 position;
position.x = texCoord.x * 2 - 1;
position.y = (1 - texCoord.y) * 2 - 1;
position.z = depth;
position.w = 1.0f;
position = mul(position, InvViewProjection);
position.xyz /= position.w;

For more information on how to reconstruct a position based on a depth value, read this. There are also alternative, and improved methods listed there, which can be used depending on your needs.

Next the normal is acquired from the normal buffer, and restored to the [-1, 1] range so that it can be correctly used in the lighting calculations.

// Restore Normal
float3 normal = tex2D(normSampler, texCoord).rgb;
normal = normalize(2.0f * normal - 1.0f);

Now the lighting can begin. There are two key elements that need to be calculated for our light buffer: N.L and Attenuation. N.L is the basic element in every lighting equation, and simply consists of the dot product between the Normal and the Light Direction.

Attenuation is calculated by taking the squared ratio of the distance to the light over the maximum attenuation range, flipped so that it falls to 0 at the furthest point the light reaches. Here I also pre-multiply the N.L value by the attenuation. You can of course combine these later when writing out the buffer; it ultimately gives the same result.

// Attenuation Calcs
float3 lDir = LightPos - position.xyz;
float atten = saturate(1 - dot(lDir/Attenuation, lDir/Attenuation));
lDir = normalize(lDir);
  
// N.L
float nl = dot(normal, lDir) * atten;

Next we calculate the specular value. As we are using the Blinn-Phong lighting equation later on, the half vector is used instead of the reflection vector, which is a cheaper calculation for us. (The saving is negligible on most modern systems, and the visual difference is imperceptible.)

For the purposes of this article, I will only include the code from the Blinn-Phong variant, however in the downloadable sample, I provide both methods that can be toggled with a boolean. (Change the technique to change the method)

Remember that this only affects the specular value, so do not worry that this will restrict you to the Blinn-Phong (or just Phong) lighting model.

float3 camDir = normalize(CamPos - position.xyz);
float3 halfDir = normalize(lDir + camDir);
float spec = pow(saturate(dot(normal, halfDir)), SpecPower);

Finally we write out the light buffer value; this is where we combine the light colour with the calculated N.L and attenuation values.

return float4(LightColor.r, LightColor.g, LightColor.b, spec) * nl;

You should get something that looks like this. (Note that due to transparency this may look weird; the essential part to note is the lights making up the shape of the model.)

[Screenshot: light buffer]

Now we are entering the home stretch. All that is left to render is the materials for each object. This is simply a matter of rendering each object again, using the Light buffer to shade it. This is also where the material flexibility of LPP comes into play, as each object uses its own shader.

To prepare for this stage, simply resolve the light buffer by setting either the backbuffer (null) or a "Final Image" render target as RT0. Then you can get the light texture and provide it to the objects to use when rendering.
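A minimal sketch of that hand-off (the effect parameter name Light_Tex is an assumption, mirroring the Depth_Tex/Normals_Tex names used earlier):

gfx.SetRenderTarget(0, final);   // or null for the backbuffer
lightImage = light.GetTexture();

// Later, when drawing each object with its material effect:
materialEffect.Parameters["Light_Tex"].SetValue(lightImage);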

This is the pixel shader:

float2 scrCoord = postProjToScreen(input.ScrCoord) - halfPixel();
  
float4 light = tex2D(lightSampler, scrCoord);
  
float3 texCol = tex2D(texSampler, input.TexCoord).rgb;
  
float3 lighting = saturate(AmbientLight + (light.rgb * texCol) + light.aaa);
  
return float4(lighting, 1);

Here I adjust by the half-pixel offset and transform from post-projection to screen space, so those calculations are as before; the corrected texture coordinate is then used to sample the light buffer.

As this material is a Blinn-Phong material, the equation is rather simple. The sum of light colour multiplied by N.L and attenuation is handled by the alpha blending and light shaders, so it simply needs to be multiplied by the texture (Albedo) colour, which is then added to the ambient light term and the specular term to complete the lighting equation.

With that done, you now have either a backbuffer or a render target filled with a lit scene.

[Screenshot: final lit scene]

There are many other materials which can be adapted to use the light buffer, and there is also a modification that can be done to the light buffer and final material shaders to allow for a material specular value, however I will leave those to future articles.

I hope this has been informative, and if you have any questions, please post them in the comments. Also be sure to check back for new tutorials covering different light types, materials, and other additions. I hope to get shadows implemented into the system, and also outline combining this with a forward renderer to allow for transparent objects and particles.

The screenshots in this post use 1000 point lights arranged in a 10x10x10 cube around the model.

posted on 2010-08-15 10:01 by 狂爛球, filed under: Graphics Programming
