Note: the al3d module must be loaded by calling `LoadLibrary("al3d")` before use!
Members
# inner POLYTYPE
POLYTYPE definitions for the polygon rendering functions.
Properties:
Name | Type | Description |
---|---|---|
POLYTYPE.FLAT | * | A simple flat shaded polygon, taking the color from the `c` value of the first vertex. |
POLYTYPE.GCOL | * | A single-color gouraud shaded polygon. The colors for each vertex are taken from the `c` value, and interpolated across the polygon. |
POLYTYPE.GRGB | * | A gouraud shaded polygon which interpolates RGB triplets rather than a single color. |
POLYTYPE.ATEX | * | An affine texture mapped polygon. This stretches the texture across the polygon with a simple 2d linear interpolation, which is fast but not mathematically correct. It can look OK if the polygon is fairly small or flat-on to the camera, but because it doesn't deal with perspective foreshortening, it can produce strange warping artifacts. |
POLYTYPE.PTEX | * | A perspective-correct texture mapped polygon. This uses the `z` value from the vertex structure as well as the u/v coordinates, so textures are displayed correctly regardless of the angle they are viewed from. |
POLYTYPE.ATEX_MASK | * | Like POLYTYPE.ATEX, but zero texture map pixels are skipped, allowing parts of the texture map to be transparent. |
POLYTYPE.PTEX_MASK | * | Like POLYTYPE.PTEX, but zero texture map pixels are skipped, allowing parts of the texture map to be transparent. |
POLYTYPE.ATEX_LIT | * | Like POLYTYPE.ATEX, but the blender function is used to blend the texture with a light level taken from the `c` value in the vertex structure. |
POLYTYPE.PTEX_LIT | * | Like POLYTYPE.PTEX, but the blender function is used to blend the texture with a light level taken from the `c` value in the vertex structure. |
POLYTYPE.ATEX_MASK_LIT | * | Like POLYTYPE.ATEX_LIT, but zero texture map pixels are skipped, allowing parts of the texture map to be transparent. |
POLYTYPE.PTEX_MASK_LIT | * | Like POLYTYPE.PTEX_LIT, but zero texture map pixels are skipped, allowing parts of the texture map to be transparent. |
POLYTYPE.ATEX_TRANS | * | Renders translucent textures. All the general rules for drawing translucent things apply. |
POLYTYPE.PTEX_TRANS | * | Renders translucent textures with perspective correction. All the general rules for drawing translucent things apply. |
POLYTYPE.ATEX_MASK_TRANS | * | Like POLYTYPE.ATEX_TRANS, but zero texture map pixels are skipped. |
POLYTYPE.PTEX_MASK_TRANS | * | Like POLYTYPE.PTEX_TRANS, but zero texture map pixels are skipped. |
POLYTYPE.ZBUF | * | OR this into the POLYTYPE and the normal Polygon3D(), Quad3D(), etc. functions will render z-buffered polygons. |
Methods
# inner ApplyMatrix(m, x, y, z) → {Array.<number>}
Multiplies the point (x, y, z) by the transformation matrix m.
Parameters:
Name | Type | Description |
---|---|---|
m | Matrix | the matrix. |
x | * | x value or vector as array. |
y | number | y value. |
z | number | z value. |

Returns: a new vector (Array.<number>).
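As a rough illustration of what ApplyMatrix computes, here is a pure-JS sketch using an assumed Allegro-style matrix layout (a 3x3 rotation/scaling part `v` plus a translation part `t`). The real al3d Matrix object is opaque and may be laid out differently; only the math is the point here.

```javascript
// Sketch only: `m.v` is an assumed 3x3 part, `m.t` an assumed translation part.
function applyMatrix(m, x, y, z) {
	return [
		x * m.v[0][0] + y * m.v[0][1] + z * m.v[0][2] + m.t[0],
		x * m.v[1][0] + y * m.v[1][1] + z * m.v[1][2] + m.t[1],
		x * m.v[2][0] + y * m.v[2][1] + z * m.v[2][2] + m.t[2]
	];
}

// A pure translation matrix moves the point sideways:
var translate = { v: [[1, 0, 0], [0, 1, 0], [0, 0, 1]], t: [10, 20, 30] };
var p = applyMatrix(translate, 1, 2, 3); // [11, 22, 33]
```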
# inner Clip3D(type, min_z, max_z, v) → {Array.<V3D>}
Clips the polygon given in `v`. The frustum (viewing volume) is defined by -z<x<z, -z<y<z, 0<min_z<z<max_z. If max_z<=min_z, the z<max_z clipping is not done. As you can see, clipping is done in the camera space, with perspective in mind, so this routine should be called after you apply the camera matrix, but before the perspective projection. The routine will correctly interpolate u, v, and c in the vertex structure. However, no provision is made for high/truecolor GCOL.
Parameters:
Name | Type | Description |
---|---|---|
type | POLYTYPE | one of POLYTYPE. |
min_z | number | minimum z value. |
max_z | number | maximum z value. |
v | Array.<V3D> | an array of vertices. |

Returns: an array of vertices (Array.<V3D>).
# inner CreateScene(nedge, npoly)
Allocates memory for a scene. `nedge` and `npoly` are your estimates of how many edges and how many polygons you will render (you cannot exceed the limits specified here).
Parameters:
Name | Type | Description |
---|---|---|
nedge | number | max number of edges. |
npoly | number | max number of polygons. |
# inner CrossProduct(x1, y1, z1, x2, y2, z2) → {Array.<number>}
Calculates the cross product (x1, y1, z1) x (x2, y2, z2). The cross product is perpendicular to both of the input vectors, so it can be used to generate polygon normals.
Parameters:
Name | Type | Description |
---|---|---|
x1 | * | x value or first vector as array. |
y1 | * | y value or second vector as array. |
z1 | number | z value. |
x2 | number | x value. |
y2 | number | y value. |
z2 | number | z value. |

Returns: a new vector (Array.<number>).
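The computation behind CrossProduct can be sketched in plain JS (illustrative helper name, not the module API):

```javascript
// Cross product of (x1, y1, z1) and (x2, y2, z2); the result is
// perpendicular to both inputs.
function crossProduct(x1, y1, z1, x2, y2, z2) {
	return [
		y1 * z2 - z1 * y2,
		z1 * x2 - x1 * z2,
		x1 * y2 - y1 * x2
	];
}

// The x and y unit vectors yield the z unit vector:
var n = crossProduct(1, 0, 0, 0, 1, 0); // [0, 0, 1]
```

Feeding two edge vectors of a polygon into this gives a face normal, which is the typical use named above.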
# inner DestroyScene()
Deallocates memory previously allocated by CreateScene(). Use this to avoid memory leaks in your program.
# inner DotProduct(x1, y1, z1, x2, y2, z2) → {number}
Calculates the dot product (x1, y1, z1) . (x2, y2, z2), returning the result.
Parameters:
Name | Type | Description |
---|---|---|
x1 | * | x value or first vector as array. |
y1 | * | y value or second vector as array. |
z1 | number | z value. |
x2 | number | x value. |
y2 | number | y value. |
z2 | number | z value. |

Returns: the dot product (number).
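The underlying formula is just the sum of the component products; a minimal sketch (illustrative helper name, not the module API):

```javascript
// Dot product of (x1, y1, z1) and (x2, y2, z2).
function dotProduct(x1, y1, z1, x2, y2, z2) {
	return x1 * x2 + y1 * y2 + z1 * z2;
}

// Perpendicular vectors have a dot product of zero:
dotProduct(1, 0, 0, 0, 1, 0); // 0
dotProduct(1, 2, 3, 4, 5, 6); // 32
```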
# inner GetAlignMatrix(xfront, yfront, zfront, xup, yup, zup)
Rotates a matrix so that it is aligned along the specified coordinate vectors (they need not be normalized or perpendicular, but the up and front must not be equal). A front vector of 0,0,-1 and up vector of 0,1,0 will return the identity matrix.
Parameters:
Name | Type | Description |
---|---|---|
xfront | number | x component of the front vector. |
yfront | number | y component of the front vector. |
zfront | number | z component of the front vector. |
xup | number | x component of the up vector. |
yup | number | y component of the up vector. |
zup | number | z component of the up vector. |

Returns: a Matrix.
# inner GetCameraMatrix(x, y, z, xfront, yfront, zfront, xup, yup, zup, fov, aspect) → {Matrix}
Constructs a camera matrix for translating world-space objects into a normalised view space, ready for the perspective projection. The x, y, and z parameters specify the camera position, xfront, yfront, and zfront are the 'in front' vector specifying which way the camera is facing (this can be any length: normalisation is not required), and xup, yup, and zup are the 'up' direction vector. The fov parameter specifies the field of view (ie. width of the camera focus) in binary angle format, with 256 to the full circle, so 64 corresponds to 90°. For typical projections, a field of view in the region 32-48 will work well. 64 (90°) applies no extra scaling - so something which is one unit away from the viewer will be directly scaled to the viewport. A bigger FOV moves you closer to the viewing plane, so more objects will appear. A smaller FOV moves you away from the viewing plane, which means you see a smaller part of the world. Finally, the aspect ratio is used to scale the Y dimensions of the image relative to the X axis, so you can use it to adjust the proportions of the output image (set it to 1 for no scaling - but keep in mind that the projection also performs scaling according to the viewport size). Typically, you will pass w/h, where w and h are the parameters you passed to SetProjectionViewport().
Parameters:
Name | Type | Description |
---|---|---|
x | number | x camera position. |
y | number | y camera position. |
z | number | z camera position. |
xfront | number | x camera facing. |
yfront | number | y camera facing. |
zfront | number | z camera facing. |
xup | number | x of 'up direction'. |
yup | number | y of 'up direction'. |
zup | number | z of 'up direction'. |
fov | number | field of view (width of the camera focus). |
aspect | number | aspect ratio. |

Returns: a Matrix.
# inner GetEmptyMatrix()
Creates an empty matrix.

Returns: an empty Matrix.
# inner GetIdentityMatrix()
Returns the 'do nothing' identity matrix. Multiplying by the identity matrix has no effect.

Returns: a Matrix.
# inner GetRotationMatrix(x, y, z)
Constructs a transformation matrix which will rotate points around all three axes by the specified amounts (given in radians). The direction of rotation can simply be found out with the right-hand rule: point the thumb of your right hand towards the origin along the axis of rotation, and the fingers will curl in the positive direction of rotation. E.g. if you rotate around the y axis, and look at the scene from above, a positive angle will rotate in clockwise direction.
Parameters:
Name | Type | Description |
---|---|---|
x | * | x value or a vector as array. |
y | number | y value. |
z | number | z value. |

Returns: a Matrix.
# inner GetScalingMatrix(x, y, z)
Constructs a scaling matrix. When applied to the point (px, py, pz), this matrix will produce the point (px*x, py*y, pz*z). In other words, it stretches or shrinks things.
Parameters:
Name | Type | Description |
---|---|---|
x | * | x value or a vector as array. |
y | number | y value. |
z | number | z value. |

Returns: a Matrix.
# inner GetTransformationMatrix(scale, xrot, yrot, zrot, x, y, z)
Constructs a transformation matrix which will rotate points around all three axes by the specified amounts (given in radians), scale the result by the specified amount (pass 1 for no change of scale), and then translate to the requested x, y, z position.
Parameters:
Name | Type | Description |
---|---|---|
scale | number | scaling value. |
xrot | number | x-rotation value. |
yrot | number | y-rotation value. |
zrot | number | z-rotation value. |
x | number | x value. |
y | number | y value. |
z | number | z value. |

Returns: a Matrix.
# inner GetTranslationMatrix(x, y, z)
Constructs a translation matrix. When applied to the point (px, py, pz), this matrix will produce the point (px+x, py+y, pz+z). In other words, it moves things sideways.
Parameters:
Name | Type | Description |
---|---|---|
x | * | x value or a vector as array. |
y | number | y value. |
z | number | z value. |

Returns: a Matrix.
# inner GetVectorRotationMatrix(x, y, z, a)
Constructs a transformation matrix which will rotate points around the specified x,y,z vector by the specified angle (given in radians).
Parameters:
Name | Type | Description |
---|---|---|
x | * | x value or a vector as array. |
y | number | y value. |
z | number | z value. |
a | number | rotation angle in radians. |

Returns: a Matrix.
# inner GetXRotateMatrix(r)
Constructs an X axis rotation matrix. When applied to a point, this matrix will rotate it about the X axis by the specified angle (given in radians).
Parameters:
Name | Type | Description |
---|---|---|
r | number | rotation in radians. |

Returns: a Matrix.
# inner GetYRotateMatrix(r)
Constructs a Y axis rotation matrix. When applied to a point, this matrix will rotate it about the Y axis by the specified angle (given in radians).
Parameters:
Name | Type | Description |
---|---|---|
r | number | rotation in radians. |

Returns: a Matrix.
# inner GetZRotateMatrix(r)
Constructs a Z axis rotation matrix. When applied to a point, this matrix will rotate it about the Z axis by the specified angle (given in radians).
Parameters:
Name | Type | Description |
---|---|---|
r | number | rotation in radians. |

Returns: a Matrix.
# inner MatrixMul(m1, m2)
Multiplies two matrices. The resulting matrix will have the same effect as the combination of m1 and m2, ie. when applied to a point p, (p * out) = ((p * m1) * m2). Any number of transformations can be concatenated in this way. Note that matrix multiplication is not commutative, ie. MatrixMul(m1, m2) != MatrixMul(m2, m1).
Parameters:
Name | Type | Description |
---|---|---|
m1 | Matrix | first matrix. |
m2 | Matrix | second matrix. |

Returns: a new Matrix.
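The non-commutativity is easiest to see with a scale and a translation. This sketch again uses an assumed Allegro-style layout (3x3 part `v` plus translation `t`); the real al3d Matrix object is opaque, so the helper names and layout here are illustrative only. `matrixMul(m1, m2)` produces a matrix that applies m1 first, then m2.

```javascript
// Apply an assumed {v, t}-style matrix to a point (sketch, not the al3d API).
function applyM(m, p) {
	var out = [0, 0, 0];
	for (var i = 0; i < 3; i++) {
		out[i] = m.v[i][0] * p[0] + m.v[i][1] * p[1] + m.v[i][2] * p[2] + m.t[i];
	}
	return out;
}

// Combine m1 and m2 so that applying the result equals applying m1, then m2.
function matrixMul(m1, m2) {
	var v = [[0, 0, 0], [0, 0, 0], [0, 0, 0]];
	var t = [0, 0, 0];
	for (var i = 0; i < 3; i++) {
		for (var j = 0; j < 3; j++) {
			for (var k = 0; k < 3; k++) v[i][j] += m2.v[i][k] * m1.v[k][j];
		}
		t[i] = m2.v[i][0] * m1.t[0] + m2.v[i][1] * m1.t[1] + m2.v[i][2] * m1.t[2] + m2.t[i];
	}
	return { v: v, t: t };
}

var scale2 = { v: [[2, 0, 0], [0, 2, 0], [0, 0, 2]], t: [0, 0, 0] };
var moveX1 = { v: [[1, 0, 0], [0, 1, 0], [0, 0, 1]], t: [1, 0, 0] };
// Scale-then-move is not the same as move-then-scale:
applyM(matrixMul(scale2, moveX1), [1, 0, 0]); // [3, 0, 0]
applyM(matrixMul(moveX1, scale2), [1, 0, 0]); // [4, 0, 0]
```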
# inner NApplyMatrix(m, x, y, z) → {Array.<number>}
Multiplies the point (x, y, z) by the transformation matrix m.
Parameters:
Name | Type | Description |
---|---|---|
m | Matrix | the matrix. |
x | * | x value or vector as array. |
y | number | y value. |
z | number | z value. |

Returns: a new vector (Array.<number>).
# inner NGetRotationMatrix(x, y, z)
Constructs a transformation matrix which will rotate points around all three axes by the specified amounts (given in radians). The direction of rotation can simply be found out with the right-hand rule: point the thumb of your right hand towards the origin along the axis of rotation, and the fingers will curl in the positive direction of rotation. E.g. if you rotate around the y axis, and look at the scene from above, a positive angle will rotate in clockwise direction.
Parameters:
Name | Type | Description |
---|---|---|
x | * | x value or a vector as array. |
y | number | y value. |
z | number | z value. |

Returns: a Matrix.
# inner NGetTransformationMatrix(scale, xrot, yrot, zrot, x, y, z)
Constructs a transformation matrix which will rotate points around all three axes by the specified amounts (given in radians), scale the result by the specified amount (pass 1 for no change of scale), and then translate to the requested x, y, z position.
Parameters:
Name | Type | Description |
---|---|---|
scale | number | scaling value. |
xrot | number | x-rotation value. |
yrot | number | y-rotation value. |
zrot | number | z-rotation value. |
x | number | x value. |
y | number | y value. |
z | number | z value. |

Returns: a Matrix.
# inner NGetXRotateMatrix(r)
Constructs an X axis rotation matrix. When applied to a point, this matrix will rotate it about the X axis by the specified angle (given in radians).
Parameters:
Name | Type | Description |
---|---|---|
r | number | rotation in radians. |

Returns: a Matrix.
# inner NGetYRotateMatrix(r)
Constructs a Y axis rotation matrix. When applied to a point, this matrix will rotate it about the Y axis by the specified angle (given in radians).
Parameters:
Name | Type | Description |
---|---|---|
r | number | rotation in radians. |

Returns: a Matrix.
# inner NGetZRotateMatrix(r)
Constructs a Z axis rotation matrix. When applied to a point, this matrix will rotate it about the Z axis by the specified angle (given in radians).
Parameters:
Name | Type | Description |
---|---|---|
r | number | rotation in radians. |

Returns: a Matrix.
# inner NMatrixMul(m1, m2)
Multiplies two matrices. The resulting matrix will have the same effect as the combination of m1 and m2, ie. when applied to a point p, (p * out) = ((p * m1) * m2). Any number of transformations can be concatenated in this way. Note that matrix multiplication is not commutative, ie. MatrixMul(m1, m2) != MatrixMul(m2, m1).
Parameters:
Name | Type | Description |
---|---|---|
m1 | Matrix | first matrix. |
m2 | Matrix | second matrix. |

Returns: a new Matrix.
# inner NormalizeVector(x, y, z) → {Array.<number>}
Converts the vector (x, y, z) to a unit vector. This points in the same direction as the original vector, but has a length of one.
Parameters:
Name | Type | Description |
---|---|---|
x | * | x value or vector as array. |
y | number | y value. |
z | number | z value. |

Returns: a new vector (Array.<number>).
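The normalization itself is a division by the vector length; a minimal sketch (illustrative helper name, not the module API):

```javascript
// Scale (x, y, z) to unit length; direction is preserved.
function normalizeVector(x, y, z) {
	var len = Math.sqrt(x * x + y * y + z * z);
	return [x / len, y / len, z / len];
}

normalizeVector(3, 4, 0); // [0.6, 0.8, 0]
```

Note that a zero-length input has no defined direction (division by zero), so callers should avoid passing the null vector.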
# inner NPolygonZNormal(v1, v2, v3) → {number}
Finds the Z component of the normal vector to the specified three vertices (which must be part of a convex polygon). This is used mainly in back-face culling. The back-faces of closed polyhedra are never visible to the viewer, therefore they never need to be drawn. This can cull on average half the polygons from a scene. If the normal is negative the polygon can safely be culled. If it is zero, the polygon is perpendicular to the screen. However, this method of culling back-faces must only be used once the X and Y coordinates have been projected into screen space using PerspProject() (or if an orthographic (isometric) projection is being used). Note that this function will fail if the three vertices are co-linear (they lie on the same line) in 3D space.
Parameters:
Name | Type | Description |
---|---|---|
v1 | Array.<number> | first vertex. |
v2 | Array.<number> | second vertex. |
v3 | Array.<number> | third vertex. |

Returns: the z component (number).
# inner PerspProject(x, y, z)
Projects the 3d point (x, y, z) into 2d screen space, using the scaling parameters previously set by calling SetProjectionViewport(). This function projects from the normalized viewing pyramid, which has a camera at the origin and faces along the positive z axis. The x axis runs left/right, y runs up/down, and z increases with depth into the screen. The camera has a 90 degree field of view, ie. points on the planes x=z and -x=z will map onto the left and right edges of the screen, and the planes y=z and -y=z map to the top and bottom of the screen. If you want a different field of view or camera location, you should transform all your objects with an appropriate viewing matrix, eg. to get the effect of panning the camera 10 degrees to the left, rotate all your objects 10 degrees to the right.
Parameters:
Name | Type | Description |
---|---|---|
x | * | x value or vector as array. |
y | number | y value. |
z | number | z value. |
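A hedged sketch of the projection described above, assuming a viewport of (0, 0, w, h). The scale factors and the y-flip are assumptions derived from the 90-degree frustum description (x=z maps to the right edge, y=z to the top), not the exact al3d internals.

```javascript
// Project (x, y, z) from the normalized viewing pyramid onto a w-by-h
// viewport; screen y grows downward, hence the minus sign.
function perspProject(x, y, z, w, h) {
	return [
		w / 2 + (x / z) * (w / 2),
		h / 2 - (y / z) * (h / 2)
	];
}

// A point on the plane x=z lands on the right screen edge:
perspProject(1, 0, 1, 640, 480); // [640, 240]
```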
# inner Polygon3D(type, texture, v)
Draws a 3d polygon using the specified rendering mode. Unlike the regular polygon() function, these routines don't support concave or self-intersecting shapes. The width and height of the texture bitmap must be powers of two, but can be different, eg. a 64x16 texture is fine.
How the vertex data is used depends on the rendering mode:
The `x` and `y` values specify the position of the vertex in 2d screen coordinates.
The `z` value is only required when doing perspective correct texture mapping, and specifies the depth of the point in 3d world coordinates.
The `u` and `v` coordinates are only required when doing texture mapping, and specify a point on the texture plane to be mapped on to this vertex. The texture plane is an infinite plane with the texture bitmap tiled across it. Each vertex in the polygon has a corresponding vertex on the texture plane, and the image of the resulting polygon in the texture plane will be mapped on to the polygon on the screen.
We refer to pixels in the texture plane as texels. Each texel is a block, not just a point, and whole numbers for u and v refer to the top-left corner of a texel. This has a few implications. If you want to draw a rectangular polygon and map a texture sized 32x32 on to it, you would use the texture coordinates (0,0), (0,32), (32,32) and (32,0), assuming the vertices are specified in anticlockwise order. The texture will then be mapped perfectly on to the polygon. However, note that when we set u=32, the last column of texels seen on the screen is the one at u=31, and the same goes for v. This is because the coordinates refer to the top-left corner of the texels. In effect, texture coordinates at the right and bottom on the texture plane are exclusive.
There is another interesting point here. If you have two polygons side by side sharing two vertices (like the two parts of a folded piece of cardboard), and you want to map a texture across them seamlessly, the values of u and v on the vertices at the join will be the same for both polygons. For example, if they are both rectangular, one polygon may use (0,0), (0,32), (32,32) and (32,0), and the other may use (32,0), (32,32), (64,32), (64,0). This would create a seamless join.
Of course you can specify fractional numbers for u and v to indicate a point part-way across a texel. In addition, since the texture plane is infinite, you can specify larger values than the size of the texture. This can be used to tile the texture several times across the polygon.
Parameters:
Name | Type | Description |
---|---|---|
type | POLYTYPE | one of POLYTYPE. |
texture | Bitmap | texture Bitmap. |
v | Array.<V3D> | an array of vertices. |
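The seamless-join rule above can be written out as vertex data. The V3D field names {x, y, z, u, v, c} are assumed from the vertex description in this section; the coordinates are hypothetical example values for two screen-space quads sharing an edge and a 32x32 texture.

```javascript
// Two quads sharing the edge at x = 100; the shared vertices carry
// identical u/v values, so the texture continues without a seam.
var quadA = [
	{ x:   0, y:   0, z: 0, u:  0, v:  0, c: 255 },
	{ x:   0, y: 100, z: 0, u:  0, v: 32, c: 255 },
	{ x: 100, y: 100, z: 0, u: 32, v: 32, c: 255 },
	{ x: 100, y:   0, z: 0, u: 32, v:  0, c: 255 }
];
var quadB = [
	{ x: 100, y:   0, z: 0, u: 32, v:  0, c: 255 },
	{ x: 100, y: 100, z: 0, u: 32, v: 32, c: 255 },
	{ x: 200, y: 100, z: 0, u: 64, v: 32, c: 255 },
	{ x: 200, y:   0, z: 0, u: 64, v:  0, c: 255 }
];
```

Because u runs on to 64 in the second quad, the texture plane simply tiles the 32x32 bitmap a second time, as described above.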
# inner PolygonZNormal(v1, v2, v3) → {number}
Finds the Z component of the normal vector to the specified three vertices (which must be part of a convex polygon). This is used mainly in back-face culling. The back-faces of closed polyhedra are never visible to the viewer, therefore they never need to be drawn. This can cull on average half the polygons from a scene. If the normal is negative the polygon can safely be culled. If it is zero, the polygon is perpendicular to the screen. However, this method of culling back-faces must only be used once the X and Y coordinates have been projected into screen space using PerspProject() (or if an orthographic (isometric) projection is being used). Note that this function will fail if the three vertices are co-linear (they lie on the same line) in 3D space.
Parameters:
Name | Type | Description |
---|---|---|
v1 | Array.<number> | first vertex. |
v2 | Array.<number> | second vertex. |
v3 | Array.<number> | third vertex. |

Returns: the z component (number).
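Since only the screen-space x/y of the three vertices matter for the Z component, the test reduces to a 2d cross product. This sketch uses the standard formula under that assumption (illustrative helper, not the module API); which sign counts as "front-facing" depends on your vertex winding.

```javascript
// Z component of the normal of the triangle (v1, v2, v3), where each
// vertex is [x, y] in screen space.
function polygonZNormal(v1, v2, v3) {
	return (v2[0] - v1[0]) * (v3[1] - v2[1]) - (v3[0] - v2[0]) * (v2[1] - v1[1]);
}

// Reversing the winding flips the sign:
polygonZNormal([0, 0], [1, 0], [1, 1]); //  1
polygonZNormal([1, 1], [1, 0], [0, 0]); // -1
```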
# inner QScaleMatrix(m, scale)
Optimised routine for scaling an already generated matrix: this simply adds in the scale factor, so there is no need to build two temporary matrices and then multiply them together.
Parameters:
Name | Type | Description |
---|---|---|
m | Matrix | a Matrix. |
scale | number | scale factor. |
# inner QTranslateMatrix(m, x, y, z)
Optimised routine for translating an already generated matrix: this simply adds in the translation offset, so there is no need to build two temporary matrices and then multiply them together.
Parameters:
Name | Type | Description |
---|---|---|
m | Matrix | a Matrix. |
x | * | x-offset or a vector as array. |
y | number | y-offset. |
z | number | z-offset. |
# inner Quad3D(type, texture, v1, v2, v3, v4)
Draws a 3d quad using four vertices.
Parameters:
Name | Type | Description |
---|---|---|
type | POLYTYPE | one of POLYTYPE. |
texture | Bitmap | texture Bitmap. |
v1 | V3D | a vertex. |
v2 | V3D | a vertex. |
v3 | V3D | a vertex. |
v4 | V3D | a vertex. |
# inner RenderScene()
Renders all polygons previously added with ScenePolygon3D() to the current bitmap. Rendering is done one scanline at a time, with no pixel being processed more than once.
Note that between ClearScene() and RenderScene() you shouldn't change the clip rectangle of the destination bitmap. For speed reasons, you should set the clip rectangle to the minimum.
# inner ScenePolygon3D(type, texture, vtx)
Puts a polygon in the rendering list. Nothing is really rendered at this moment. Should be called between ClearScene() and RenderScene().
Arguments are the same as for Polygon3D().
Unlike Polygon3D(), the polygon may be concave or self-intersecting. Shapes that penetrate one another may look OK, but they are not really handled by this code.
Parameters:
Name | Type | Description |
---|---|---|
type | POLYTYPE | one of POLYTYPE. |
texture | Bitmap | texture Bitmap. |
vtx | Array.<V3D> | an array of vertices. |
# inner SetProjectionViewport()
Sets the viewport used to scale the output of the PerspProject() function. Pass the dimensions of the screen area you want to draw onto, which will typically be 0, 0, SizeX(), and SizeY(). Also don't forget to pass an appropriate aspect ratio to GetCameraMatrix() later. The width and height you specify here determine how big your viewport is in 3d space. So if an object in your 3D space is w units wide, it will fill the complete screen when you run into it (i.e., if it has a distance of 1.0 after the camera matrix was applied. The fov and aspect-ratio parameters to GetCameraMatrix() also apply some scaling though, so this isn't always completely true). If you pass -1/-1/2/2 as parameters, no extra scaling will be performed by the projection.
# inner SetSceneGap(gap)
This number (default value = 100.0) controls the behaviour of the z-sorting algorithm. When an edge is very close to another polygon's plane, there is an interval of uncertainty in which you cannot tell which object is visible (which z is smaller). This is due to cumulative numerical errors for edges that have undergone a lot of transformations and interpolations.
The default value means that if the 1/z values (in projected space) differ by only 1/100 (one percent), they are considered to be equal and the x-slopes of the planes are used to find out which plane is getting closer when we move to the right.
Larger values mean narrower margins, increasing the chance of missing true adjacent edges/planes. Smaller values mean larger margins, increasing the chance of mistaking close polygons for adjacent ones. The value of 100 is close to the optimum. However, the optimum shifts slightly with resolution, and may be application-dependent. It is here for you to fine-tune.
Parameters:
Name | Type | Description |
---|---|---|
gap | number | gap value. |
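The "one percent" tolerance above can be sketched as a comparison of projected 1/z values. This is a reading of the description, not the exact al3d source: two depths are treated as equal when their 1/z values differ by less than 1/gap.

```javascript
// Assumed equality test behind the scene gap: compare projected 1/z values
// against a 1/gap tolerance (gap = 100 means "within one percent").
function depthsEqual(invZ1, invZ2, gap) {
	return Math.abs(invZ1 - invZ2) < 1 / gap;
}

depthsEqual(0.500, 0.505, 100); // true  - inside the uncertainty margin
depthsEqual(0.500, 0.520, 100); // false - clearly different depths
```

With a larger gap the tolerance 1/gap shrinks, which is why larger values mean narrower margins.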
# inner Triangle3D(type, texture, v1, v2, v3)
Draws a 3d triangle using three vertices.
Parameters:
Name | Type | Description |
---|---|---|
type | POLYTYPE | one of POLYTYPE. |
texture | Bitmap | texture Bitmap. |
v1 | V3D | a vertex. |
v2 | V3D | a vertex. |
v3 | V3D | a vertex. |
# inner VectorLength(x, y, z) → {number}
Calculates the length of the vector (x, y, z), using the good old Pythagorean theorem.
Parameters:
Name | Type | Description |
---|---|---|
x | * | x value or vector as array. |
y | number | y value. |
z | number | z value. |

Returns: the vector length (number).
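The Pythagorean computation itself is a one-liner; a minimal sketch (illustrative helper name, not the module API):

```javascript
// Euclidean length of the vector (x, y, z).
function vectorLength(x, y, z) {
	return Math.sqrt(x * x + y * y + z * z);
}

vectorLength(2, 3, 6); // 7
```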