Victoria Rudakova: professional blog and tutorials on using different tools for vision and graphics applications

How to calculate angular (phase) average (2018-10-25)

Problem overview

Let’s assume we are dealing with a set of values whose range wraps around, for example, angles where each angle \( \phi_i \in [0^\circ, 360^\circ) \). The problem occurs when we want to take the average value of such a set. To clarify, for the two angles \( \phi_1=1^\circ \) and \( \phi_2=359^\circ \), the angular average is \( 0^\circ \), while the arithmetic average is \( \phi_{avg} = \frac{\phi_1+\phi_2}{2} = \frac{1^\circ+359^\circ}{2}=180^\circ \).

In practice, I encountered this problem while solving a calibration problem for a Time-of-Flight (ToF) device, when it was necessary to extract an average phase value from the measured phases of each sensor pixel. When the ToF device was placed at the limit of its range, the average appeared to be entirely wrong (a plain arithmetic average), so we had to come up with a proper formula for the angular average instead.

In this tutorial, the formula will be derived, and a C++11-based code snippet will be provided.

Angular average formula derivation

The solution principle lies in representing each angle as a vector, as if it were a radius on a unit circle whose direction is defined by the angle. See sub-figure a below.

Vector representation and vector summation

The vector which is defined by the angle \( \phi_1 \) has projections on the \(x\) and \(y\) axes. Since the vector forms a right triangle with the \(x\) axis, we can derive the projection values as:

\[ X_i = \cos\phi_i \] \[ Y_i = \sin\phi_i \]

Now, if we are given a second angle \(\phi_2\) which is also represented as a vector (see sub-figure b above), we can use the vector representation in order to calculate the angular average. A simple vector summation results in a vector with angle \(\phi_{12}\), which is the average between the two angles, as can be seen in sub-figure b.

\[X_{\sum} = \sum X_i \] \[Y_{\sum} = \sum Y_i \]

After the “average” vector is obtained, we can use its normalized projection values \(\tilde{X} = normalize(X_{\sum})\) and \(\tilde{Y} = normalize(Y_{\sum})\) in order to extract the angular average using the inverse tangent (in code, the two-argument \(atan2\), which resolves the correct quadrant):

\[\phi_{avg} = \tan^{-1}\left(\frac{\tilde{Y}}{\tilde{X}}\right) \]
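As a sanity check of the derivation, here is a minimal scalar sketch in plain C++ (the function name angularMeanDeg is mine): it averages a set of angles given in degrees by summing their unit-circle projections and taking atan2.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Circular (angular) mean of a set of angles given in degrees.
// Returns a value in [0, 360).
double angularMeanDeg(const std::vector<double>& degrees)
{
    const double PI = 3.14159265358979323846;
    double X = 0.0, Y = 0.0;
    for (double d : degrees) {
        double r = d * PI / 180.0;
        X += std::cos(r);   // sum of x-projections
        Y += std::sin(r);   // sum of y-projections
    }
    double avg = std::atan2(Y, X) * 180.0 / PI;
    return avg < 0.0 ? avg + 360.0 : avg;
}
```

Note that explicit normalization of \((X_{\sum}, Y_{\sum})\) is not strictly required here, since atan2 depends only on the direction of the summed vector. For the motivating example, angularMeanDeg({1.0, 359.0}) yields a value close to 0 (mod 360) rather than the arithmetic mean 180.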

Code snippet

The following snippet requires C++11 support (and the headers &lt;vector&gt;, &lt;algorithm&gt; and &lt;cmath&gt;):

std::vector<float> X(N);
std::vector<float> Y(N);
std::vector<float> avgPhase(N);
// std::vector<float> alpha = ...; // measured phases, in radians
// X = sum(cos(alpha)), Y = sum(sin(alpha))
std::transform(alpha.begin(), alpha.end(), X.begin(), X.begin(),
    [](float a, float x0) -> float 
    {
        float xi = std::cos(a);
        return xi + x0; 
    } );
std::transform(alpha.begin(), alpha.end(), Y.begin(), Y.begin(),
    [](float a, float y0) -> float 
    {
        float yi = std::sin(a);
        return yi + y0; 
    } );
// normalize(x), normalize(y)
// alpha = atan2(y,x)
std::transform(X.begin(), X.end(), Y.begin(), avgPhase.begin(),
    [](float xi, float yi) -> float 
    {
        float length = std::sqrt(xi*xi + yi*yi);
        float unitX = xi/length;
        float unitY = yi/length;
        float alpha = std::atan2(unitY, unitX);
        return alpha < 0 ? alpha + 2*M_PI : alpha;
    } );
Useful 3D geometry algorithms within a CAD application (2017-03-15)

Overview

It’s been more than a year since I started working on my first small scale CAD-like application, which I had to develop from scratch. During that time I managed to accumulate a short list of the most useful 3D geometry formulas and implemented them as algorithms. Some of them are quite simple and straightforward, while others made me search and derive formulas. In this post I will present several basic algorithms that are very likely to be used within any CAD application.

For coding snippets I will be using OpenSceneGraph API. All the algorithms will be presented mathematically first, so you can use any other library or language in order to implement them. In most of the cases we will only be using operators such as multiplication, addition and subtraction of matrices, vectors and scalars.

For quick reference, these are some OpenSceneGraph matrix-vector operators:

  • float c = a*b is a dot product between two vectors a and b,
  • Vec3 f = a^b is a cross product,
  • Vec3 r = v * M is multiplication between a matrix M and vector v,
  • Vec3 r = v * s is scaling of vector v by value s.

OpenSceneGraph also offers an osg::Plane class which already contains some useful functions such as plane.intersect(ray) or plane.dotProductNormal(). For more info on the osg::Plane class, refer to the official documentation. The plane class will be useful in some occasions.

Note: The presented code snippets are provided for demonstration purposes and are therefore not necessarily robust. It is strongly recommended to test every code snippet against corner cases before using it in your own code.

Each presented algorithm will have the following format:

  • Application example - higher level functions where the algorithm could be applied.
  • Mathematical formula derivation.
  • Code snippet using OpenSceneGraph API.

The algorithms

Project point on a line (3D or 2D)

Application example: the point-on-line projection is a very low-level routine. An example of its usage is when we want to calculate the coordinates of the intersection line segment of two rectangles in 3D (which is a special case of plane-plane intersection).

We define a line to be given by a parametric (point-and-vector) form:

\[l = P_0 + i\vec{u} \]

Project point on a line

Now if we want to project a custom point \( P_i \) onto the line \(l\), we can do it by projecting the vector \(\vec{P_0P_i}\) onto the line \(l\). Then if we add the resulting vector to the point \(P_0\), we obtain the projection result. The final formula is [1]:

\[ R = P_0 + \frac{ (P_i-P_0) \cdot \vec{u}}{\vec{u}\cdot \vec{u}} \vec{u} \]

Using OpenSceneGraph, we can write the above formula as an algorithm:

/*! \param P0 is the global point on the line.
* \param u is the vector defining the line direction.
* \param Pi is the point to project.
* \return coordinates of the projected point. 
*/
osg::Vec3f projectPointOnLine(const osg::Vec3f& P0, 
                            const osg::Vec3f& u, 
                            const osg::Vec3f& Pi)
{
    return P0 + u * ((Pi-P0)*u)/(u*u);
}

Intersection between two planes (rectangles) in 3D

Application example: Find a line segment of two intersecting rectangles in 3D.

Two planes intersection

Assuming we are given two planes \(p_1\) and \(p_2\), we want to find a line \(l=P_0+i\vec{u}\) which is an intersection line of the planes (unless the planes are parallel, then no intersection exists). Each plane \(p_i\) (\(i=1,2\)), is given by a point \(C_i\) and a normal vector \(\vec{n_i}\).

Two planes are parallel whenever their normal vectors \(\vec{n_1}\) and \(\vec{n_2}\) are parallel, and this is equivalent to the condition \(\vec{n_1}\times\vec{n_2}=\vec{0}\). In the code we introduce a very small value \(\sigma\) and compare against it in order to avoid division by a close-to-zero value, i.e., we treat the two planes as parallel when \(|\vec{n_1}\times\vec{n_2}|<\sigma\).

Two planes intersect in a line which has direction vector \(\vec{u}=\vec{n_1}\times\vec{n_2}\) since \(\vec{u}\) is perpendicular to both \(\vec{n_1}\) and \(\vec{n_2}\), and thus is parallel to both planes as shown on the above figure.

Note: in order to avoid \(|\vec{u}|\) being small, we normalize it making it a unit direction vector.

After the direction vector \(\vec{u}\) is found, we still have to find a specific point \(P_0=(x_0, y_0, z_0)\) on the line which belongs to both planes. We can do it by finding a solution to the plane equations, but there would be only two equations for three unknowns, since the point \(P_0\) can lie anywhere on the line \(l\). We therefore need an additional constraint to solve for a specific \(P_0\). We will use the direct linear equation method [2].

The main idea is to find a non-zero coordinate of \(\vec{u}\) and set the corresponding coordinate of \(P_0\) to 0. Further, we choose the coordinate with the largest absolute value, as this will produce the most robust computations. As an example, suppose \(u_z\neq0\); then we set \(P_0=(x_0,y_0,0)\) and it lies on \(l\). Now we have two equations:

\[a_1x_0+b_1y_0+d_1=0 \] \[ a_2x_0+b_2y_0+d_2=0 \]

the solution of which will produce coordinates \(x_0\) and \(y_0\).
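As a quick numeric sanity check of this step, the \(u_z\neq0\) branch can be written in dependency-free C++ (a sketch with my own naming; the full OpenSceneGraph version follows):

```cpp
#include <cassert>
#include <cmath>

// Point on the intersection line of two planes given in the form
// a_i*x + b_i*y + d_i = 0 (after setting z0 = 0, assuming the
// z-component of u = n1 x n2 is non-zero). Solved by Cramer's rule;
// note that the determinant a1*b2 - a2*b1 equals u_z.
bool pointOnIntersection(double a1, double b1, double d1,
                         double a2, double b2, double d2,
                         double& x0, double& y0)
{
    double uz = a1 * b2 - a2 * b1; // z-component of n1 x n2
    if (std::fabs(uz) < 1e-9)
        return false; // u_z too small: another coordinate should be zeroed
    x0 = (b1 * d2 - b2 * d1) / uz;
    y0 = (a2 * d1 - a1 * d2) / uz;
    return true;
}
```

For instance, the plane \(x=1\) (so \(a_1=1, b_1=0, d_1=-1\)) and the plane \(y=2\) intersect in the vertical line through \((1, 2, 0)\).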

Now let’s translate the above math into code using OpenSceneGraph:

/*! A method to calculate intersection line between two planes.
* \param n1 is normal of the first plane,
* \param C1 is arbitrary 3D point on the first plane,
* \param n2 is normal of the second plane,
* \param C2 is arbitrary point on the second plane,
* \param P0 is result point on the intersection line,
* \param u is result directional vector of the intersection line.
* \return 2 if intersection exists and was found.
*/
int getPlanesIntersection(const osg::Vec3f& n1, const osg::Vec3f& C1,
                                    const osg::Vec3f& n2, const osg::Vec3f& C2,
                                    osg::Vec3f& P0, osg::Vec3f& u)
{
    /* cross product of normals */
    u = n1^n2;

    /* absolute values of u */
    float ax = (u.x() >= 0 ? u.x() : -u.x());
    float ay = (u.y() >= 0 ? u.y() : -u.y());
    float az = (u.z() >= 0 ? u.z() : -u.z());

    /* are the two planes parallel? (EPSILON is a small positive constant, e.g. 1.e-5f) */
    if (ax+ay+az < EPSILON) { 
        /* normals are near parallel */
        /* are they disjoint or coincide? */
        osg::Vec3f v = C2- C1;
        v.normalize();
        /* coincide */
        if (n1*v == 0) return 1;
        /* disjoint */
        else return 0;
    }

    /* canvases intersect in a line */
    int maxc; // coordinate dimension index
    if (ax > ay) {
        if (ax > az) maxc =  1;
        else maxc = 3;
    }
    else {
        if (ay > az) maxc =  2;
        else maxc = 3;
    }

    /* obtain a point on the intersection line:
     * zero the max coord, and solve for the other two */
    float d1 = -n1*C1; 
    float d2 = -n2*C2; // the constants in the 2 plane equations

    /* result coordinates */
    float xi, yi, zi;
    switch (maxc) {
    case 1:                     // intersect with x=0
        xi = 0;
        yi = (d2*n1.z() - d1*n2.z()) /  u.x();
        zi = (d1*n2.y() - d2*n1.y()) /  u.x();
        break;
    case 2:                     // intersect with y=0
        xi = (d1*n2.z() - d2*n1.z()) /  u.y();
        yi = 0;
        zi = (d2*n1.x() - d1*n2.x()) /  u.y();
        break;
    case 3:                     // intersect with z=0
        xi = (d2*n1.y() - d1*n2.y()) /  u.z();
        yi = (d1*n2.x() - d2*n1.x()) /  u.z();
        zi = 0;
    }
    P0 = osg::Vec3f(xi, yi, zi);
    return 2;
}

Intersection between plane and a ray/line

Application example: the most common example is ray casting at a virtual plane, i.e., when we want to get an intersection of the ray cast from mouse coordinates with certain plane on the scene, e.g., drawing a line on a plane.

A plane can be described by a set of points for which

\[(P-C)\cdot\vec{n}=0\]

Where \(\vec{n}\) is a normal vector to the plane, and \(C\) is an arbitrary point on the plane.

Plane and line/ray intersection

The line segment, or a ray, or a line is represented by two points in 3D, \(N\) and \(F\) (as in near and far points). Our task is to find an intersection between the line segment \(NF\) and the plane.

Generally speaking, if we deal with a line (not a line segment, which is restricted by two points, or a ray, which is restricted by one point), it is either parallel to a given plane in 3D, or intersects it at some point. We can check whether the line and plane are parallel by testing if \(\vec{n}\cdot\vec{FN}=0\), where \(\vec{FN}=F-N\) is the line direction vector (pointing from \(N\) to \(F\)): the condition means the direction is perpendicular to the plane normal \(\vec{n}\). If this is true, no intersection can be found since the line is parallel to the plane.

If the line and the plane are not parallel, they have an intersection point \(P\) which can be found as [2]:

\[x = \frac{(C-N)\cdot\vec{n}}{\vec{FN}\cdot\vec{n}} \] \[P = \vec{FN}x + N\]
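Before involving osg::Plane, the formula itself can be verified with a small dependency-free sketch (the Vec3 alias and function names here are mine):

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<double, 3>;

static double dot(const Vec3& a, const Vec3& b)
{ return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }

// Intersection of the line through N and F with the plane (C, n).
// Returns false when the line is (nearly) parallel to the plane.
bool linePlaneIntersection(const Vec3& C, const Vec3& n,
                           const Vec3& N, const Vec3& F, Vec3& P)
{
    Vec3 dir = { F[0]-N[0], F[1]-N[1], F[2]-N[2] }; // FN = F - N
    double denom = dot(dir, n);
    if (std::fabs(denom) < 1e-12)
        return false; // line parallel to the plane
    Vec3 CN = { C[0]-N[0], C[1]-N[1], C[2]-N[2] };
    double x = dot(CN, n) / denom;   // parameter along the line
    P = { N[0] + dir[0]*x, N[1] + dir[1]*x, N[2] + dir[2]*x };
    return true;
}
```

For example, the segment from \((0,0,1)\) to \((0,0,-1)\) crosses the plane \(z=0\) at the origin.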

When we convert the above formulas into OpenSceneGraph code, we can take advantage of the osg::Plane class and its implemented methods that check whether a specific ray intersects a plane, or is located above/below it. Of course, if we want to calculate the intersection between an infinite line and a plane, then that part must be omitted.

/*! A method to calculate an intersection between a plane and line or ray. 
* \param plane is the input plane,
* \param center is the arbitrary point that belongs to the plane,
* \param nearPoint is one of the two points that belongs to the input line/ray,
* \param farPoint is the second point that belongs to the input line/ray,
* \param P is the result point,
* \param isLine indicates whether we deal with line (true) or line segment (false) intersections.
* \return true if the intersection was found, false - otherwise. 
*/
bool getRayPlaneIntersection(const osg::Plane &plane, const osg::Vec3f &center, 
                             const osg::Vec3f &nearPoint, const osg::Vec3f &farPoint, 
                             osg::Vec3f &P, 
                             bool isLine)
{
    if (!plane.valid())
        return false;

    // if it is ray segment, check whether it intersects at all
    if (!isLine) {
        std::vector<osg::Vec3f> ray(2);
        ray[0] = nearPoint;
        ray[1] = farPoint;
        if (plane.intersect(ray)) { // 1 or -1 means no intersection
            std::cout << "Ray lies above or below the plane.\n";
            return false;
        }
    }

    osg::Vec3f dir = farPoint-nearPoint;
    // check if the line is parallel to the plane (near-zero dot product)
    if (std::fabs(plane.dotProductNormal(dir)) < 1.e-6f){
        std::cout << "The line is parallel to the plane.\n";
        return false;
    }

    // if the near point lies on the plane, x = 0 below and
    // the intersection is the near point itself
    if (std::fabs(plane.dotProductNormal(center-nearPoint)) < 1.e-6f){
        P = nearPoint;
        return true;
    }

    double x = plane.dotProductNormal(center-nearPoint) / plane.dotProductNormal(dir);
    P = dir * x + nearPoint;

    return true;
}

Skew lines geometry: shortest distance and projection of one skew line onto another

Application example: dragging a rectangle along its normal by using a ray cast from the mouse position. The 3D ray cast and the rectangle’s normal are skew lines in 3D, and the new position of the rectangle is estimated as a 3D projection of one skew line onto another.

In one of my previous tutorials I already referred to the geometry of skew lines when demonstrating how to improve the line intersector. There we demonstrated how to calculate the shortest distance between two skew lines. This algorithm is an extension of the shortest distance algorithm, since it allows calculation of the projection coordinates.

Assume we are given a line/ray (e.g., a result of the ray casting algorithm) \(l_1\) passing through a point \(P_1\). Another line \(l_2\) is given with a corresponding point \(P_2\).

Skew lines geometry

Now we want to perform a projection of the line \(l_2\) onto \(l_1\), i.e., we want to calculate 3D coordinates of the point \(X_1\) (or inversely of \(X_2\), if needed).

Let \(\vec{d} = P_2 - P_1\) be the vector between the two given points (this choice matches the code below). Let \(\vec{u_1}\) and \(\vec{u_2}\) be unit direction vectors of the given lines. A direction orthogonal to both \(l_1\) and \(l_2\) can be found by the cross product \(\vec{u_3} = \vec{u_1}\times \vec{u_2}\). Now if we project \(\vec{d}\) onto \(\vec{u_3}\), we obtain a scalar whose magnitude is the shortest distance between the two skew lines [3]:

\[d = \frac{\lvert\vec{d}\cdot(\vec{u_1}\times\vec{u_2})\rvert}{|\vec{u_1} \times \vec{u_2} |}\]

Note: since the skew lines are not parallel, \(\vec{u_1}\times\vec{u_2}\neq \vec{0}\).

We want to calculate the positions of \(X_1\) and \(X_2\) - the closest points on the lines. Let \(\vec{k}=\vec{X_1X_2}\), and let \(r_i\) be the unique numbers such that \(X_i = P_i + r_i\vec{u_i}\). Given that \(\vec{k}\) is orthogonal to both lines, taking its dot product with \(\vec{u_1}\) and with \(\vec{u_2}\) yields the following system of linear equations [3] (derivation omitted):

\[\vec{u_1}\cdot\vec{u_1}r_1 - \vec{u_1}\cdot\vec{u_2}r_2 - \vec{u_1}\cdot\vec{d} = 0\] \[\vec{u_1}\cdot\vec{u_2}r_1 - \vec{u_2}\cdot\vec{u_2}r_2 - \vec{u_2}\cdot\vec{d} = 0\]

for \(r_1\) and \(r_2\). From the above equations we can obtain \(X_i\). E.g., the derivation steps for the point \(X_1\) will be:

  1. Let \(a_1 = \vec{u_1}\cdot\vec{u_1}\), \(b_1 = \vec{u_1}\cdot\vec{u_2}\) and \(c_1 = \vec{u_1}\cdot\vec{d}\).
  2. Let \(a_2 = \vec{u_1}\cdot\vec{u_2}\), \(b_2 = \vec{u_2}\cdot\vec{u_2}\) and \(c_2 = \vec{u_2}\cdot\vec{d}\).
  3. Calculate \(r_1 = \frac{c_2-\dfrac{b_2c_1}{b_1}}{a_2-\dfrac{b_2a_1}{b_1}}\).
  4. Calculate \(X_1 = P_1 + r_1\vec{u_1}\).
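The four steps above can be verified in dependency-free C++ before moving to OpenSceneGraph (a sketch; the Vec3 alias and function name are mine). For the x-axis and the line through \((0,1,1)\) with direction \((1,1,0)\), the closest point on the x-axis is \((-1,0,0)\):

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<double, 3>;

static double dot(const Vec3& a, const Vec3& b)
{ return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }

// Closest point X1 on the line (P1, u1) to the line (P2, u2),
// following steps 1-4 above with d = P2 - P1.
// Precondition: the lines are not parallel and u1.u2 != 0
// (the division by b1 fails for exactly perpendicular lines,
// which is why the code below guards against a near-zero b1).
Vec3 closestPointOnFirstLine(const Vec3& P1, const Vec3& u1,
                             const Vec3& P2, const Vec3& u2)
{
    Vec3 d = { P2[0]-P1[0], P2[1]-P1[1], P2[2]-P1[2] };
    double a1 = dot(u1, u1), b1 = dot(u1, u2), c1 = dot(u1, d);
    double a2 = dot(u1, u2), b2 = dot(u2, u2), c2 = dot(u2, d);
    double r1 = (c2 - b2*c1/b1) / (a2 - b2*a1/b1);
    return { P1[0] + r1*u1[0], P1[1] + r1*u1[1], P1[2] + r1*u1[2] };
}
```

Since only dot products of the direction vectors appear, the formula also works with non-unit directions.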

Given the above formula, it is straightforward to implement both algorithms using OpenSceneGraph.

The shortest distance code:

/*! An algorithm to calculate the shortest distance between two skew lines. 
* \param P1 is the first point of the first line,
* \param P12 is the second point on the first line,
* \param P2 is the first point on the second line,
* \param P22 is the second point on the second line.
* \return the shortest distance
*/
double getSkewLinesDistance(const osg::Vec3d &P1, const osg::Vec3d &P12, 
                            const osg::Vec3d &P2, const osg::Vec3d &P22)
{
    osg::Vec3d u1 = P12-P1;
    osg::Vec3d u2 = P22-P2;
    osg::Vec3d u3 = u1^u2;
    if (u3.length() == 0) {
        // the lines are parallel: return the point-to-line distance instead
        u1.normalize();
        osg::Vec3d w = P2 - P1;
        return (w - u1*(w*u1)).length();
    }
    u3.normalize();
    osg::Vec3d dir = P1 - P2;
    return std::fabs((dir*u3)); // u3 is already normalized
}

The projection algorithm:

/*! A method to project one skew line onto another. 
* \param P1 is the first point that belongs to the first skew line,
* \param P12 is the second point that belongs to first skew line,
* \param P2 is the first point that belongs to second skew line,
* \param P22 is the second point that belongs to second skew line,
* \param X1 is the result projection point of line P2P22 onto line P1P12. 
* \return true if such point exists, false - otherwise.
*/
bool getSkewLinesProjection(const osg::Vec3f &P1, const osg::Vec3f &P12, 
                            const osg::Vec3f &P2, const osg::Vec3f &P22, 
                            osg::Vec3f &X1)
{
    osg::Vec3f d = P2 - P1;
    osg::Vec3f u1 = P12-P1;
    u1.normalize();
    osg::Vec3f u2 = P22 - P2;
    u2.normalize();
    osg::Vec3f u3 = u1^u2;

    double EPSILON = 0.00001;
    if (std::fabs(u3.x())<=EPSILON && 
        std::fabs(u3.y())<=EPSILON && 
        std::fabs(u3.z())<=EPSILON){
        std::cout << "The rays are almost parallel.\n";
        return false;
    }

    // X1 and X2 are the closest points on lines
    // we want to find X1 (lies on u1)
    // solving the linear equation in r1 and r2: Xi = Pi + ri*ui
    // we are only interested in X1 so we only solve for r1.
    float a1 = u1*u1, b1 = u1*u2, c1 = u1*d;
    float a2 = u1*u2, b2 = u2*u2, c2 = u2*d;
    if (!(std::fabs(b1) > EPSILON)){
        std::cout << "Denominator is close to zero.\n";
        return false;
    }
    if (std::fabs(std::fabs(a2) - 1.f) < EPSILON){
        std::cout << "Lines are parallel.\n";
        return false;
    }

    double r1 = (c2 - b2*c1/b1)/(a2-b2*a1/b1);
    X1 = P1 + u1*r1;

    return true;
}

Intersection between two lines in 3D using skew lines geometry

In 3D, two lines are very unlikely to intersect exactly. However, we can use the skew lines geometry algorithms (the shortest distance and the projection points calculation) in order to easily extract a 3D intersection point of two lines. This algorithm is also useful when we want to extract an approximate intersection point, i.e., when the lines do not strictly intersect but pass within a very small Euclidean distance of each other, which can occur due to floating-point computations. For this purpose, we can separate the algorithm into two parts: first, when there is a precise intersection, and second, when the intersection is approximate.

The steps of the algorithm are as follows:

  1. Treat the two lines as skew lines.
  2. Calculate the shortest distance between the two skew lines.
  3. If the distance is zero, the lines intersect precisely; find the intersection using dot and cross products.
  4. If the intersection is not precise, extract the intersection point as average between two projections of both of the skew lines.

As mentioned above, the formula for the precise intersection can be found using dot and cross products of vectors [4]. Let \(\alpha\) and \(\beta\) be two 3D lines which are given by points \(C\) and \(D\) and direction vectors \(\vec{e}\) and \(\vec{f}\) correspondingly.

Two lines intersection in 3D

Let \(\vec{g} = \vec{CD} = D - C\) be the vector from point \(C\) to point \(D\).

Note: if either \(\lvert \vec{f}\times\vec{g}\rvert\) or \(\lvert\vec{f}\times\vec{e}\rvert\) is zero, then the lines are parallel and have no intersection point.

In order to calculate the final intersection point \(P\), we have to derive a scaling factor \(s\) which satisfies \(P=C\pm s\vec{e}\), where the sign depends on the directions of the vectors \(\vec{f}\) and \(\vec{e}\). The scaling factor itself can be derived using the lengths of cross products: \(s=\frac{\lvert\vec{f}\times\vec{g}\rvert}{\lvert\vec{f}\times\vec{e}\rvert}\). Putting it all together results in:

\[P=C\pm \frac{\lvert \vec{f}\times\vec{g} \rvert}{\lvert \vec{f}\times\vec{e} \rvert}\vec{e}\]

where the sign is defined: if \(\vec{f}\times\vec{g}\) and \(\vec{f}\times\vec{e}\) point in the same direction, the sign is \(+\), otherwise it is \(-\).

Now we can provide OpenSceneGraph-based implementation for two lines intersection in 3D:

/*! An algorithm to calculate an (approximate) intersection of two lines in 3D.
* \param La1 is the first point on the first line,
* \param La2 is the second point on the first line,
* \param Lb1 is the first point on the second line,
* \param Lb2 is the second point on the second line,
* \param intersection is the result intersection, if it can be found,
* \return true if the intersection can be found, false - otherwise.
*/
bool getLinesIntersection(const osg::Vec3f &La1, const osg::Vec3f &La2, 
                            const osg::Vec3f &Lb1, const osg::Vec3f &Lb2, 
                            osg::Vec3f &intersection)
{
    // first check if lines have an exact intersection point
    // do it by checking if the shortest distance is exactly 0
    float distance = getSkewLinesDistance(La1, La2, Lb1, Lb2);
    if (distance < 1.e-6f){ // treat near-zero distance as an exact intersection
        std::cout << "3d lines have exact intersection point\n";
        osg::Vec3f C = La2;
        osg::Vec3f D = Lb2;
        osg::Vec3f e = La1-La2;
        osg::Vec3f f = Lb1-Lb2;
        osg::Vec3f g = D-C;
        if ((f^g).length()==0 || (f^e).length()==0){
            std::cout << "Lines have no intersection, are they parallel?\n";
            return false;
        }

        osg::Vec3f fgn = f^g; 
        fgn.normalize();
        
        osg::Vec3f fen = f^e; 
        fen.normalize();
        
        // do the two cross products point in the same direction?
        int di = (fgn * fen > 0.f) ? 1 : -1;

        intersection = C + e*di*( (f^g).length() / (f^e).length() );
        return true;
    }

    // try to calculate the approximate intersection point
    osg::Vec3f X1, X2;
    bool firstIsDone = getSkewLinesProjection(La1, La2, Lb1, Lb2, X1);
    bool secondIsDone = getSkewLinesProjection(Lb1, Lb2, La1, La2, X2);
    
    if (!firstIsDone || !secondIsDone){
        std::cout << "Could not obtain projection point.\n";
        return false;
    }

    intersection = (X1 + X2)/2.f;
    return true;
}

Afterword

Currently I’m trying to figure out what could be a small demo program that would demonstrate the usage of all the presented algorithms. In the future I might link this page to the coded demo, so stay tuned! Meanwhile, I was wondering if there could be any other 3D geometry algorithms to add to the list. So if you have ideas, do not hesitate to let me know in the comment section or by contacting me directly. It is always great to be able to improve the posts from user feedback.

OpenSceneGraph intersectors example: line, point and virtual plane (2017-01-26)

Goal

When I was building a CAD-like program, one of the essential needs was for different types of intersectors. The purpose was to be able to select different elements such as points and lines, and drag them in a special manner, e.g., so that a dragged rectangle point remains in the same plane as the rectangle.

The tiny demo program I created demonstrates how to use exactly the aforementioned intersectors. In short, I provide three types of intersectors:

  1. Line intersector - i.e., the user is able to select lines.
  2. Point intersector - i.e., the user is able to select points.
  3. Virtual plane intersector - i.e., the ray cast always intersects with a virtual plane to define a virtual intersection point.

This post’s main goal is to describe the created demo. All the parts of the demo are based on previous tutorials which provide in-depth details, if necessary.

Intersectors figure

Intersectors

Line intersector

Line intersection is based on finding the shortest distance between the raycast and all the line elements on the scene. For more details on how the line intersector works, feel free to check part 1 and part 2 of the corresponding tutorials.

Within the demo code, the line intersection is in action whenever you see the wire’s color turn magenta.

Point intersector

The point intersection is based on finding the shortest distance between the raycast and all the point elements, and then checking whether that distance is within a threshold. This part is heavily based on Chapter 10 of the OpenSceneGraph Cookbook. In the article you can find all the necessary details on how the method works.

Within the demo code, the point intersection is in action whenever you see the point’s color turn bright green. That indicates the selected point is within the threshold distance from the ray cast from the camera view through the mouse position.
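The threshold test at the core of the point intersector can be sketched in a few lines of plain C++ (this is not the demo's actual code; all names are mine):

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<double, 3>;

static double dot(const Vec3& a, const Vec3& b)
{ return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }

// Distance from point Q to the ray cast from origin O in direction dir.
double pointToRayDistance(const Vec3& Q, const Vec3& O, const Vec3& dir)
{
    Vec3 w = { Q[0]-O[0], Q[1]-O[1], Q[2]-O[2] };
    double t = dot(w, dir) / dot(dir, dir); // projection parameter on the ray
    if (t < 0.0) t = 0.0;                   // behind the ray origin: clamp
    Vec3 closest = { O[0] + dir[0]*t, O[1] + dir[1]*t, O[2] + dir[2]*t };
    Vec3 diff = { Q[0]-closest[0], Q[1]-closest[1], Q[2]-closest[2] };
    return std::sqrt(dot(diff, diff));
}

// A scene point is "picked" when it lies within the threshold of the ray.
bool isPicked(const Vec3& Q, const Vec3& O, const Vec3& dir, double threshold)
{
    return pointToRayDistance(Q, O, dir) < threshold;
}
```

In the real demo, O and dir come from unprojecting the mouse position through the camera, and the threshold is tuned to the scene scale.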

Virtual plane intersector

The virtual plane intersector is based on finding an intersection point between the ray that was cast from the mouse and a virtual plane in 3D. An example application of such an intersector is drawing on a plane in 3D. For the demo, we use it to ensure the dragged point elements stay within the same plane as the rectangle. More precisely, the intersector helps to find the new position of the wire corner by finding an intersection between the mouse ray and the virtual plane of the rectangle.

Within the demo, the intersector is used whenever the mouse drags a point. In this case, it is indicated by yellow color.

GLSL shader for fog imitation (2016-11-28)

Context

The provided code snippet demonstrates the basic idea behind the fog imitation. The chosen fog model is a linear function which means the fog factor increases linearly with the distance from the current camera view. The fog effect is added through a fragment shader. The GLSL shaders are provided together with the OpenSceneGraph basic code that performs a simple polygon drawing. The shaders can be used with any other OpenGL-based code provided the necessary uniforms are passed to the shaders.

Simple scene

For the scene, we will add two polygons located in two different 3D planes. We will make them adjacent to one another so that we can test occlusion colors:

vertices->push_back(osg::Vec3f(0,0,0));
vertices->push_back(osg::Vec3f(0,0,1));
vertices->push_back(osg::Vec3f(1,0,1));
vertices->push_back(osg::Vec3f(1,0,0));
vertices->push_back(osg::Vec3f(1,1,0));
vertices->push_back(osg::Vec3f(0.5,5,0));

We also provide different colors for each of the vertices and make sure the color binding is set to BIND_PER_VERTEX:

colors->push_back(osg::Vec4f(0.1, 0.9, 0.1, 1));
colors->push_back(osg::Vec4f(0.2, 0.1, 0.9, 1));
colors->push_back(osg::Vec4f(0.7, 0.9, 0.1, 1));
colors->push_back(osg::Vec4f(0.9, 0.2, 0.9, 1));
colors->push_back(osg::Vec4f(0.9, 0.2, 0.9, 1));
colors->push_back(osg::Vec4f(0.9, 0.9, 0.1, 1));

// ...
geom->setColorArray(colors, osg::Array::BIND_PER_VERTEX);

Necessary uniforms

Just like with one of the previous shaders for drawing lines in 3D, we will need the ModelViewProjectionMatrix in order to obtain gl_Position. Using OpenSceneGraph, we wrap the matrix in a callback which is updated whenever the current camera position changes:

struct ModelViewProjectionMatrixCallback: public osg::Uniform::Callback
{
    ModelViewProjectionMatrixCallback(osg::Camera* camera) :
            _camera(camera) {
    }

    virtual void operator()(osg::Uniform* uniform, osg::NodeVisitor* nv) {
        osg::Matrixd viewMatrix = _camera->getViewMatrix();
        osg::Matrixd modelMatrix = osg::computeLocalToWorld(nv->getNodePath());
        osg::Matrixd modelViewProjectionMatrix = modelMatrix * viewMatrix * _camera->getProjectionMatrix();
        uniform->set(modelViewProjectionMatrix);
    }

    osg::Camera* _camera;
};

Another variable that we will need is the camera’s position in 3D space at any time. Once again, we use OpenSceneGraph’s callback system to extract camera’s eye and pass it as a uniform:

struct CameraEyeCallback: public osg::Uniform::Callback
{
    CameraEyeCallback(osg::Camera* camera) :
            _camera(camera) {
    }

    virtual void operator()(osg::Uniform* uniform, osg::NodeVisitor* /*nv*/) {
        osg::Vec3f eye, center, up;
        _camera->getViewMatrixAsLookAt(eye, center, up);
        osg::Vec4f eye_vec = osg::Vec4f(eye.x(), eye.y(), eye.z(), 1);
        uniform->set(eye_vec);
    }
    osg::Camera* _camera;
};

Both uniforms are added to the state set (osg::StateSet) of the root scene node:

osg::Uniform* modelViewProjectionMatrix = new osg::Uniform(osg::Uniform::FLOAT_MAT4, "ModelViewProjectionMatrix");
modelViewProjectionMatrix->setUpdateCallback(new ModelViewProjectionMatrixCallback(camera));
state->addUniform(modelViewProjectionMatrix);

osg::Uniform* cameraEye = new osg::Uniform(osg::Uniform::FLOAT_VEC4, "CameraEye");
cameraEye->setUpdateCallback(new CameraEyeCallback(camera));
state->addUniform(cameraEye);

GLSL Shaders

For our purpose, we will only need to use vertex and fragment shaders. The vertex shader sets up the correct gl_Position, and the fragment shader is where we will introduce color changes in order to imitate foggy environment.

Vertex shader

The vertex shader is standard for GLSL version 3.3:

#version 330

uniform mat4 ModelViewProjectionMatrix;

layout(location = 0) in vec4 Vertex;
layout(location = 1) in vec4 Color;

out VertexData{
    vec4 mColor;
    vec4 mVertex;
} VertexOut;

void main(void)
{
    VertexOut.mColor = Color;
    VertexOut.mVertex = Vertex;
    gl_Position = ModelViewProjectionMatrix * Vertex;
}

As an output, we make sure to pass the color and the 3D coordinates of each vertex. The latter will be used when calculating the distance between the camera’s eye and the vertex in order to assign the fog color.

Fragment shader

We derive the fog color as a mix between the passed color of geometry and background color. In the fragment shader, we also have to pass FogColor as a uniform:

state->addUniform(new osg::Uniform("FogColor", FOG_COLOR));

Now we can calculate the fog color based on the fragment’s location in 3D space. First, we calculate the Euclidean distance \(E()\) between the camera eye \(C_e\) and a 3D vertex \(V_i\):

\[ d = E(C_e, V_i) \]

We plug the distance into a linear function to get the \(\alpha\) coefficient. The \(\alpha\) coefficient will then be used in GLSL’s mix(x, y, alpha) function, which computes \( x(1-\alpha) + y\alpha \). We derive \(\alpha\) by:

\[ \alpha = 1 - \frac{F_{max} - d}{F_{max} - F_{min}}\]

where \(F_{min}\) and \(F_{max}\) are the minimum and maximum distances between which the fog gradient exists. That is, closer than the minimum distance the geometry (or its part) is fully visible, while beyond the maximum distance the geometry is not visible at all.

For simplicity, we set up the fog thresholds inside the fragment shader; but they can also be passed as uniforms. The snippet for the fragment shader is thus:

#version 330

uniform vec4 CameraEye;
uniform vec4 FogColor;

in VertexData{
    vec4 mColor;
    vec4 mVertex;
} VertexIn;

float getFogFactor(float d)
{
    const float FogMax = 20.0;
    const float FogMin = 10.0;

    if (d>=FogMax) return 1;
    if (d<=FogMin) return 0;

    return 1 - (FogMax - d) / (FogMax - FogMin);
}

void main(void)
{
    vec4 V = VertexIn.mVertex;
    float d = distance(CameraEye, V);
    float alpha = getFogFactor(d);

    gl_FragColor = mix(VertexIn.mColor, FogColor, alpha);
}

Screenshots

Here are some screenshots of the resulting fog imitation:

Fog imitation

Note: the same scene is displayed, but at different distances from the camera point (a zoom-out was performed).

Code snippet

For a slightly more complex example, check shader-3dcurve. The fogging effect is incorporated into the fragment shader of the curve shader.

]]>
Victoria Rudakova[email protected]
Qt3D minimal example using CMake2016-11-14T00:00:00+00:002016-11-14T00:00:00+00:00https://vicrucann.github.io/tutorials/qt3d-cmake

Overview

With Qt5.7, the Qt3D module is now a part of the Qt library, and it is a fully functional scene graph module. This post provides a minimal example: a window with a simple scene graph (a torus), built using CMake instead of QMake.

Note: the below example requires Qt5.7 version as a minimum Qt version.

CMakeLists.txt file

cmake_minimum_required(VERSION 2.8.11)
project(qt3DExample)

set(CMAKE_INCLUDE_CURRENT_DIR ON)
set(CMAKE_AUTOMOC ON)

# include necessary qt3d modules
find_package(Qt5 REQUIRED COMPONENTS Core Gui Widgets 3DCore 3DExtras 3DRender 3DInput)

set(SOURCES
    main.cpp
    )

add_executable(${PROJECT_NAME} ${SOURCES})

# link the qt3d libraries
target_link_libraries(${PROJECT_NAME}
    Qt5::Core
    Qt5::Gui
    Qt5::Widgets
    Qt5::3DCore
    Qt5::3DExtras
    Qt5::3DRender
    Qt5::3DInput
    )

main.cpp file

As a scene graph, we will place a single torus on the scene and attach some basic scene graph components to it: a transform, a material, and a mesh geometry. This example is a simplified version of the Qt 3D: Simple C++ Example.

#include <QGuiApplication>

#include <Qt3DCore/QEntity>
#include <Qt3DCore/QTransform>
#include <Qt3DCore/QAspectEngine>

#include <Qt3DRender/qrenderaspect.h>
#include <Qt3DRender/QCamera>
#include <Qt3DRender/QMaterial>

#include <Qt3DExtras/Qt3DWindow>
#include <Qt3DExtras/QTorusMesh>
#include <Qt3DExtras/QOrbitCameraController>
#include <Qt3DExtras/QPhongMaterial>

Qt3DCore::QEntity* createTestScene()
{
    Qt3DCore::QEntity* root = new Qt3DCore::QEntity;
    Qt3DCore::QEntity* torus = new Qt3DCore::QEntity(root);

    Qt3DExtras::QTorusMesh* mesh = new Qt3DExtras::QTorusMesh;
    mesh->setRadius(5);
    mesh->setMinorRadius(1);
    mesh->setRings(100);
    mesh->setSlices(20);

    Qt3DCore::QTransform* transform = new Qt3DCore::QTransform;
//    transform->setScale3D(QVector3D(1.5, 1, 0.5));
    transform->setRotation(QQuaternion::fromAxisAndAngle(QVector3D(1,0,0), 45.f ));

    Qt3DRender::QMaterial* material = new Qt3DExtras::QPhongMaterial(root);

    torus->addComponent(mesh);
    torus->addComponent(transform);
    torus->addComponent(material);

    return root;
}

int main(int argc, char* argv[])
{
    QGuiApplication app(argc, argv);
    Qt3DExtras::Qt3DWindow view;
    Qt3DCore::QEntity* scene = createTestScene();

    // camera
    Qt3DRender::QCamera *camera = view.camera();
    camera->lens()->setPerspectiveProjection(45.0f, 16.0f/9.0f, 0.1f, 1000.0f);
    camera->setPosition(QVector3D(0, 0, 40.0f));
    camera->setViewCenter(QVector3D(0, 0, 0));

    // manipulator
    Qt3DExtras::QOrbitCameraController* manipulator = new Qt3DExtras::QOrbitCameraController(scene);
    manipulator->setLinearSpeed(50.f);
    manipulator->setLookSpeed(180.f);
    manipulator->setCamera(camera);
    
    view.setRootEntity(scene);
    view.show();

    return app.exec();
}

Example screenshot

Qt 3D Torus example

]]>
Victoria Rudakova[email protected]
GLSL shader that draws a Bezier line given four control points2016-10-21T00:00:00+00:002016-10-21T00:00:00+00:00https://vicrucann.github.io/tutorials/bezier-shader

Context

This post is a continuation of one of the previous examples on how to draw thick and smooth lines in 3D space. Now we want to be able to draw not just a straight line, but a curve. As an example, the curve can be represented by a set of Bezier curves obtained by a curve fitting algorithm. So the main purpose of this post is to provide an example code snippet of a GLSL shader that is able to:

  1. Draw thick and smooth lines in 3D by turning GL_LINE_STRIP_ADJACENCY into triangular strip
  2. Sample the curve data from the given control points

This post shows how to extend the code from point 1 to draw curves in 3D instead of lines and polylines.

Drawing a Bezier curve

A cubic Bezier curve is defined by two endpoints and two control points. Therefore, in order to pass a Bezier curve (or a set of curves) to the shader, we have to provide all four control points per curve.

The modification of the given shader is straightforward given the cubic Bezier curve formula:

\[B(t)=(1-t)^3P_0+3(1-t)^2tP_1+3(1-t)t^2P_2+t^3P_3\]

where \(P_i\) is one of the four control points of the given Bezier curve.

We incorporate the given formula into the functions to use inside our GLSL code:

vec4 toBezier(float delta, int i, vec4 P0, vec4 P1, vec4 P2, vec4 P3)
{
    float t = delta * float(i);
    float t2 = t * t;
    float one_minus_t = 1.0 - t;
    float one_minus_t2 = one_minus_t * one_minus_t;
    return (P0 * one_minus_t2 * one_minus_t + P1 * 3.0 * t * one_minus_t2 + P2 * 3.0 * t2 * one_minus_t + P3 * t2 * t);
}

Most of the code stays intact, with the exception of the loop part where we go through all the passed vertices. Now we have to take into account how many segments there should be in the curve. The introduced variable nSegments can be passed to the shader by means of uniforms or set to a constant value inside the shader. The pseudo-code of the loop of the main function will look like the following:

for (int i=0; i<=nSegments; ++i){
    // sample the curve from the control points
    Points[i] = toBezier(delta, i, ...);

    // interpolate the colors
    colors[i] = ... ;

    // transform to the screen coordinate space
    points[i] = toScreenSpace(Points[i]);

    // extract z-values so that the drawing order remains correct
    zValues[i] = toZValue(Points[i]);
}

// finally send all the info to the drawing procedure
drawSegment(points, colors, zValues);

A few words on color interpolation. The main idea is to determine which Bezier segment the current point belongs to, and then interpolate between the colors of that segment's two endpoints, based on the position of the point between those endpoints. Refer to the source code for a concrete example.

Codes

The shader-3dcurve github repository contains a minimalistic OpenSceneGraph scene and the GLSL shaders that generate two curves located in different 3D planes:

image image
]]>
Victoria Rudakova[email protected]
Tiny library that performs curve fitting based on Schneider’s algorithm2016-09-23T00:00:00+00:002016-09-23T00:00:00+00:00https://vicrucann.github.io/tutorials/curve-fitting-c++
image image
Screenshots of drawn line (left) and curve fitted (right). Note how some details and noise are lost due to the larger fitting threshold.

Overview

For one of my applications, I’ve been searching for a lightweight C++ library that would allow me to perform curve fitting, that is, given a set of points, fit them with a set of adjacent curves. An example application is when a user draws on the screen and curve fitting smooths the output; such algorithms are often used within drawing and sketching applications. After failing to find an easy-to-use and modern library, I gave up and ended up writing such a “library” myself.

The algorithm of the library class is based on the Philip J. Schneider paper titled An algorithm for automatically fitting digitized curves, published in Graphics gems in 1990. The main idea of the algorithm: given a set of points that belong to a single path, we iteratively try to fit a single curve within a given error threshold. If that is not possible, the point set is split and the process is repeated on each part. For more details, refer to the original paper.

As usual, the code can be found on my github account - CurveFitting.

Implementation details

The curve fitting code is a template class PathFitter which must be sub-classed in order to use the fitting algorithm. In the provided example, I used OpenSceneGraph library for visualization and also used OSG data types such as Vec3Array and Vec3f for the base class templates. The OSG vectors already provide basic vector functionality such as dot product, normalization and vector length. It is possible to provide any other custom non-OSG based class, but the aforementioned vector functionality must be implemented by the user.

To run an existing example using the OSG data types, we can use the OsgPathFitter class, which is ready to use and provides an example of subclassing the base class. More details on implementation and sub-classing are provided in the README of the project repository.

]]>
Victoria Rudakova[email protected]
How to draw thick and smooth 3D lines in OpenSceneGraph / OpenGL2016-08-26T00:00:00+00:002016-08-26T00:00:00+00:00https://vicrucann.github.io/tutorials/osg-shader-3dlines

Context

This tutorial is an expanded version of an answer to my stackoverflow question. To summarize the goal: we want to be able to draw lines in 3D that satisfy the following conditions:

  • There is no visible break between adjacent segments of a polyline, which occurs when we use a default OpenGL geometry mode such as GL_LINE_STRIP_ADJACENCY.
  • The lines have a 2D look, meaning the line width does not depend on the distance from the camera. Think of a CAD application, where lines have the same thickness regardless of the viewpoint.
  • It is possible to draw lines thicker than the default maximum thickness. For example, in tests on my machine I could not exceed a thickness of 10.f.

The default OpenGL drawing of line strip geometry does not allow rendering smooth (non-broken) polylines, nor rendering them with a custom width beyond the default limit:

Line strip geometry polyline

Main principle

One of the ways to solve the problem is to represent each line segment as a set of triangles. Adjacent triangles (or quads) are drawn without any gaps between them. It is possible to draw those geometries by using GL_TRIANGLE_STRIP. In this case we have to deal with two problems:

  1. In 3D, a set of triangles looks like a ribbon, i.e., it may look like a solid line from a certain viewpoint, but the line-like look is lost when the viewpoint changes.
  2. The line width depends on the camera view.

To address problem 1, we have to make sure the geometry always faces the camera, i.e., recompute the geometry every time the viewpoint changes. For problem 2 the solution is similar: re-adjust the ribbon width whenever the viewport changes.

A very effective way to achieve the desired effect is to use GLSL shaders. Assuming familiarity with GLSL, we will move directly to the implementation details.

Implementation details

The presented code is heavily based on the Cinder library discussion thread, and the main principle of the triangle coordinate calculation is taken from there as well. In this part I will only provide some details on how to port the shader code into an OpenSceneGraph program.

Shaders

Here we will provide brief description of each shader.

Vertex shader

The vertex shader is what transforms the 3D world coordinates into screen coordinates. Simply speaking, this is where we deal with the lines always facing the camera. In order to implement it, we have to use the model-view-projection (MVP) matrix, which is updated on every view change.

Each vertex is then transformed as:

gl_Position = ModelViewProjectionMatrix * Vertex;

Geometry shader

The geometry shader’s main goal is to take each line segment (represented by lines_adjacency) and turn it into a strip of triangles with enough filling on each side so that consecutive line segments connect without a gap. The position of each triangle vertex is calculated relative to the viewport of the widget that displays the scene, which allows the lines to keep a constant thickness regardless of their location in the 3D world. Refer to the source code for more details on the shader implementation.

Fragment shader

The fragment shader is a simple pass-through shader. It takes the incoming color and assigns it to each fragment:

gl_FragColor = VertexData.mColor;

For debugging purposes, I set the color in the shader to green, so as to verify that all the previous steps of the shader program completed successfully.

Callbacks

We need to provide two uniforms: one for the MVP matrix and one for the viewport. When using OSG, the best way to do this is with callbacks; in this case we derive from osg::Uniform::Callback. Below are the code snippets for each of the callbacks:

struct ModelViewProjectionMatrixCallback: public osg::Uniform::Callback
{
    ModelViewProjectionMatrixCallback(osg::Camera* camera) :
            _camera(camera) {
    }

    virtual void operator()(osg::Uniform* uniform, osg::NodeVisitor* nv) {
        osg::Matrixd viewMatrix = _camera->getViewMatrix();
        osg::Matrixd modelMatrix = osg::computeLocalToWorld(nv->getNodePath());
        osg::Matrixd modelViewProjectionMatrix = modelMatrix * viewMatrix * _camera->getProjectionMatrix();
        uniform->set(modelViewProjectionMatrix);
    }

    osg::Camera* _camera;
};

Of course, we need to pass a pointer to the camera attached to the viewer that displays the scene. In a similar way we define the callback for the viewport:

struct ViewportCallback: public osg::Uniform::Callback
{
    ViewportCallback(osg::Camera* camera) :
            _camera(camera) {
    }

    virtual void operator()(osg::Uniform* uniform, osg::NodeVisitor* /*nv*/) {
        const osg::Viewport* viewport = _camera->getViewport();
        osg::Vec2f viewportVector = osg::Vec2f(viewport->width(), viewport->height());
        uniform->set(viewportVector);
    }

    osg::Camera* _camera;
};

Shader program

By following the OSG tutorials on how to set up and use shaders within an OSG program, we create an osg::Program instance and attach the created shaders to it. Now, given a set of vertices of type GL_LINES_ADJACENCY_EXT, we also need to set up the vertex and color attributes so that they are correctly used from within the shaders. This is how it can be done in OpenSceneGraph:

geometry->setVertexAttribArray(0, vertices, osg::Array::BIND_PER_VERTEX);
geometry->setVertexAttribArray(1, colors, osg::Array::BIND_PER_VERTEX);

After that we need to add the necessary uniforms, including the MVP matrix and the viewport, and finally connect the shader program to the state set of the geometry.

Note: in order to avoid an aliased look of the shader-drawn lines, we have to enable multi-sampling, e.g.:

osg::DisplaySettings::instance()->setNumMultiSamples(4);

Results

Some screenshots of the resulting lines. The red line is drawn using the OpenGL default GL_LINE_STRIP, while the greenish line is drawn using the shader program. Note how the connection at the anchor point does not look broken compared to the red line. For this case we turned multi-sampling on.

Smooth connection

A demonstration of the ability to produce much thicker lines. Not only is the connection smoother, but the line width can be set to any value. For this test we turned multi-sampling off, just to demonstrate the visual difference.

Thicker line

Another, more general example of two lines drawn by different methods, side by side:

General comparison

Codes

This tutorial has skipped many implementation details, which is why it is useful to refer to the source code for a fully functional example; see the corresponding github repo. Note that the presented code includes some additional elements from the 3D curves tutorial.

]]>
Victoria Rudakova[email protected]
Coverity and Travis CI integration set up for a project with Qt and OpenSceneGraph dependencies2016-08-12T00:00:00+00:002016-08-12T00:00:00+00:00https://vicrucann.github.io/tutorials/qtosg-coverity

Overview

This tutorial is a continuation of part 1. Here I will show how to submit the files compiled by Travis CI for analysis by Coverity Scan.

While the official Travis CI Integration guide provides all the necessary information on how to perform it step by step, I will concentrate only on the parts that need special attention, or things that caused me trouble.

To recap, these are the generic steps that need to be followed for the initial set up:

  • Create a new github branch called coverity_scan which will be analyzed by Coverity whenever it is pushed to github.
  • Create an account at https://scan.coverity.com by signing up using your github account.
  • Create file .travis.yml as was discussed in part 1 of the tutorial.
  • Merge the changes from the master branch into the coverity_scan branch.
  • Paste the generated Coverity settings (project settings, secure key, etc.) into the .travis.yml file.

After doing the above steps, we are now ready to do final edits of the .travis.yml file.

Changes of .travis.yml

Most of the yml file will remain the same, and we only need to specify what are the build and pre-build commands of the Coverity.

For the pre-build part, we need to specify the compiler type and version; this avoids a warning when no files are emitted for the Coverity analysis. For the build part, we use the make command, the same way as when we build for Travis CI. As a result, this is how the coverity_scan addon looks:

addons:
  apt:
    packages:
      - cmake
      - g++-4.8

  coverity_scan:
    project:
      name: "vicrucann/QtOSG-hello"
      description: "Build submitted via Travis CI of Qt + OpenSceneGraph application"
    notification_email: [email protected]
    build_command_prepend: "cov-configure --comptype gcc --compiler gcc-4.8"
    build_command: "make VERBOSE=1"
    branch_pattern: coverity_scan

The further steps of .travis.yml remain the same: before_install, install and before_script. For the script part, now that the files are submitted to Coverity Scan via its own build command, we do not need to build again. To avoid re-running the make command, we check whether Coverity set the COVERITY_SCAN_BRANCH flag, and if so, we exit:

script:
  - if [[ "${COVERITY_SCAN_BRANCH}" == 1 ]];
      then
        echo "Don't build on coverity_scan branch.";
        exit 0;
    fi
  - make

Project example

I used the same QtOSG-hello example as in part 1 of the tutorial. Check the coverity_scan branch for the .travis.yml file. After I pushed my coverity_scan branch to github, it triggered Coverity Scan to perform the analysis.

Unfortunately, Coverity Scan does not allow scans for test projects, so I could not keep the project in my Coverity Scan account. As proof of concept, I only have this screenshot:

Coverity passed

One of my bigger projects relies on Coverity Scan for defect search, so I put a link for that project here too:

Coverity Status

]]>
Victoria Rudakova[email protected]
How to obtain OpenGL version from within OpenSceneGraph2016-08-05T00:00:00+00:002016-08-05T00:00:00+00:00https://vicrucann.github.io/tutorials/osg-version-opengl

Introduction

When writing a shader for one of my projects, I encountered a need to determine the OpenGL version and whether GLSL is supported on the machine in use. Assume I have some geometry and there are two ways to render it: a simplified version and a fancy version. The simplified version uses some default primitive, e.g., GL_LINE_STRIP_ADJACENCY. The fancy version requires a certain minimal OpenGL version in order to use the shaders I just wrote.

Since in OpenSceneGraph we normally do not deal with OpenGL commands directly, I had to find a way to determine the supported OpenGL version through the OSG library.

Tester class for supported OpenGL version

The OSG examples already provide an idea of how to request certain OpenGL constants, for instance the OsgShaderTerrain example. I took the class derived from osg::GraphicsOperation and modified it so that it returns the OpenGL version. Below is an example of how such a tester class can be implemented:

class TestSupportOperation : public osg::GraphicsOperation
{
public:
    TestSupportOperation()
        : osg::Referenced(true)
        , osg::GraphicsOperation("TestSupportOperation", false)
        , m_supported(true)
        , m_errorMsg()
        , m_version(0.0)
    {}

    virtual void operator() (osg::GraphicsContext* gc)
    {
        OpenThreads::ScopedLock<OpenThreads::Mutex> lock(m_mutex);
        osg::GLExtensions* gl2ext = gc->getState()->get<osg::GLExtensions>();

        if( gl2ext ){
            if( !gl2ext->isGlslSupported )
            {
                m_supported = false;
                m_errorMsg = "ERROR: GLSL not supported by OpenGL driver.";
            }
            else
                m_version = gl2ext->glVersion;
        }
        else{
            m_supported = false;
            m_errorMsg = "ERROR: GLSL not supported.";
        }
    }

    OpenThreads::Mutex  m_mutex;
    bool                m_supported;
    std::string         m_errorMsg;
    float               m_version;
};

Now, when using the TestSupportOperation class, we can easily obtain the OpenGL version by referring to the class’s public variable: tester->m_version.

A very simplified case usage (empty scene) is presented below:

int main(int, char**)
{
    osgViewer::Viewer viewer;
    viewer.setUpViewInWindow(100,100,1024,960);

    // openGL version:
    osg::ref_ptr<TestSupportOperation> tester = new TestSupportOperation;
    viewer.setRealizeOperation(tester.get());
    viewer.realize();

    if (tester->m_supported)
        std::cout << "GLVersion=" << tester->m_version << std::endl;
    else
        std::cout << tester->m_errorMsg << std::endl;

    return viewer.run();
}
]]>
Victoria Rudakova[email protected]