WebGLProgram is expecting a "mapping" property when generating environment mapping defines, as is seen here:
```js
switch ( material.envMap.mapping ) {

	case THREE.CubeReflectionMapping:
	case THREE.CubeRefractionMapping:
		envMapTypeDefine = 'ENVMAP_TYPE_CUBE';
		break;

	case THREE.EquirectangularReflectionMapping:
	case THREE.EquirectangularRefractionMapping:
		envMapTypeDefine = 'ENVMAP_TYPE_EQUIREC';
		break;

	case THREE.SphericalReflectionMapping:
		envMapTypeDefine = 'ENVMAP_TYPE_SPHERE';
		break;

}

switch ( material.envMap.mapping ) {

	case THREE.CubeRefractionMapping:
	case THREE.EquirectangularRefractionMapping:
		envMapModeDefine = 'ENVMAP_MODE_REFRACTION';
		break;

}
```
But `mapping` is only defined on `CubeTexture`, not on `WebGLRenderTargetCube`. When using dynamic environment mapping in a fashion similar to the example in the docs,
```js
// Create cube camera
var cubeCamera = new THREE.CubeCamera( 1, 100000, 128 );
scene.add( cubeCamera );

// Create car
var chromeMaterial = new THREE.MeshLambertMaterial( { color: 0xffffff, envMap: cubeCamera.renderTarget } );
var car = new THREE.Mesh( carGeometry, chromeMaterial );
scene.add( car );
```
there is no way to make the shader use the environment map for refraction, short of adding an extra `mapping` property to the `WebGLRenderTargetCube` yourself.

I am not sure whether this is the intended behavior, or whether static and dynamic environment maps should behave more consistently.
Just so I understand...
You want to use `THREE.RenderTarget` as a texture with, say, `mapping = THREE.SphericalReflectionMapping`?
@mrdoob It seems to me he wants to have a refractive material and use a `WebGLRenderTargetCube` as the environment map.
You can do that by modifying the webgl_materials_cubemap_dynamic2.html example like so:
```js
var material = new THREE.MeshBasicMaterial( { envMap: cubeCamera.renderTarget } );
cubeCamera.renderTarget.mapping = THREE.CubeRefractionMapping; // <=== add this line
```
This works by adding a `mapping` property to `cubeCamera.renderTarget`, which is the object the shader-define switch above reads via `material.envMap.mapping`.
Related: should we rename `THREE.RenderTarget` to `THREE.RenderTexture` and make it extend `THREE.Texture`?
@WestLangley That is exactly what I wanted to do.
I know I can achieve what I need by adding an extra property. It just seems to me that this property should be inherent to `THREE.WebGLRenderTargetCube`, which is most likely only ever used for environment mapping.
I am thinking of using static and dynamic environment maps in pretty much the same way: pass the reflection/refraction mode and texture type (spherical/cube) into `THREE.CubeCamera`, whose constructor would then forward that info to `THREE.WebGLRenderTargetCube` to initialize the `mapping` property.

`THREE.CubeCamera` already takes the texture size as its third argument, so I guess it doesn't hurt to add an extra argument specifying how the cube texture rendered by the camera is going to be used.
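To make the proposal concrete, here is a minimal plain-JS sketch of the suggested constructor change. The classes and constants below are stand-ins invented for illustration (they are not the real three.js implementations or enum values), so the idea can be run and checked without the library:

```javascript
// Placeholder constants standing in for the real THREE.* mapping enums.
const CubeReflectionMapping = 301;
const CubeRefractionMapping = 302;

// Stand-in for THREE.WebGLRenderTargetCube: accepts an optional mapping.
class WebGLRenderTargetCube {
	constructor( size, options = {} ) {
		this.size = size;
		// Default to reflection; callers can opt into refraction explicitly.
		this.mapping = options.mapping !== undefined ? options.mapping : CubeReflectionMapping;
	}
}

// Stand-in for THREE.CubeCamera with the proposed fourth argument.
class CubeCamera {
	constructor( near, far, cubeResolution, mapping ) {
		// Forward the requested mapping to the render target,
		// mirroring how the texture size is already forwarded.
		this.renderTarget = new WebGLRenderTargetCube( cubeResolution, { mapping } );
	}
}

// Usage: a refractive dynamic environment map in one step.
const cubeCamera = new CubeCamera( 1, 100000, 128, CubeRefractionMapping );
console.log( cubeCamera.renderTarget.mapping === CubeRefractionMapping ); // true
```

With this shape, omitting the fourth argument keeps today's reflective default, so existing code would be unaffected.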
There is a related design problem here. The format of the environment map has nothing to do with the material properties.
Instead of `THREE.CubeReflectionMapping` and `THREE.CubeRefractionMapping`, I think a single `THREE.CubeMapping` would be more appropriate.
In http://threejs.org/examples/webgl_materials_cubemap.html we load the same texture cube twice because we have both a reflective and refractive material.
That we'll solve by implementing THREE.Image.
For anyone else wanting to use a cubeCamera to create a refraction envMap, the latest way to do it is:
```js
cubeCamera.renderTarget.texture.mapping = THREE.CubeRefractionMapping;
```
> Related: Should we rename THREE.RenderTarget to THREE.RenderTexture and make it extend THREE.Texture?
I think this is worth considering, or perhaps even `THREE.RenderToTexture`. "RenderTarget" assumes a level of familiarity with technical terms that a lot of people don't have; "RenderToTexture" should be clear to everyone.
> THREE.RenderToTexture.
I don't think so. That sounds like a method name.
Fair point. Perhaps THREE.TextureRenderer would be better?
Closing, since the actual question of the OP was answered (see https://github.com/mrdoob/three.js/issues/7187#issuecomment-274980312). Since the discussion went a bit off-topic, I suggest opening a new issue if there is any interest in renaming THREE.WebGLRenderTarget (although I think the name is fine as it is).
> That we'll solve by implementing THREE.Image.
The leading issue for this topic is #17766.