Why are we using "volatile" keyword when checking rcond?

Someone wrote a question in a comment about code in dMatrix.cc which I would also like to understand. The code segment is:

      if (calc_cond)
        {
          char norm = '1';
          uplo = 'U';
          dia = 'N';

          Array<double> z (dim_vector (3 * nc, 1));
          double *pz = z.fortran_vec ();
          Array<F77_INT> iz (dim_vector (nc, 1));
          F77_INT *piz = iz.fortran_vec ();

          F77_XFCN (dtrcon, DTRCON, (F77_CONST_CHAR_ARG2 (&norm, 1),
                                     F77_CONST_CHAR_ARG2 (&uplo, 1),
                                     F77_CONST_CHAR_ARG2 (&dia, 1),
                                     nr, tmp_data, nr, rcon,
                                     pz, piz, tmp_info
                                     F77_CHAR_ARG_LEN (1)
                                     F77_CHAR_ARG_LEN (1)
                                     F77_CHAR_ARG_LEN (1)));

          info = tmp_info;

          if (info != 0)
            info = -2;

          // FIXME: Why calculate this, rather than just compare to 0.0?
          volatile double rcond_plus_one = rcon + 1.0;

          if (rcond_plus_one == 1.0 || octave::math::isnan (rcon))
            {
              info = -2;

              if (sing_handler)
                sing_handler (rcon);
              else
                octave::warn_singular_matrix (rcon);
            }
        }

The variable rcon is initialized to 1.0 before entering this code block.

My initial guess is that the programmer is trying to signal to the C++ compiler that it can’t cache the value of rcon, which is calculated in an external library. But is the arithmetic operation rcon + 1.0 additionally required? Even if it is, why not just add 0.0 to leave the value unchanged? Or is this yet another hack to force the compiler not to optimize A + 0 to A without performing the operation?

It seems that once this topic is settled we really need to add a documentation note about why this structure is used.

The check 1.0 == (foo + 1.0) is usually done only when abs(foo) can be smaller than the epsilon of the corresponding data type. It is roughly equivalent to "if (abs(foo) < eps)" but without calling the abs function. It’s not clear whether it’s any faster, though. [Edit: the check (foo + 1.0) == 1.0 doesn’t require hard-coding the value of eps or changing it for different types. It’s copy-paste friendly.]

As to why volatile, your explanation sounds right.

Octave has an amazingly long history :male_detective:

Joking aside, @jwe, do you remember why you added this code on August 8th, 1993? I think it was a Sunday, and I was probably still in kindergarten at the time :sweat_smile:

  • On June 6th, 1994, Matrix.cc was renamed to dMatrix.cc, the name it still has today.
  • On July 20th, 1994, the variable was declared volatile.
  • On March 31st, 1995, tmp_rcond was renamed to rcond_plus_one and has remained untouched ever since, only carried along through Octave’s history.

In the original context of 1994

-  if (rcond + 1.0 == 1.0)
+  volatile double tmp_rcond = rcond;
+  if (tmp_rcond + 1.0 == 1.0)

declaring tmp_rcond as volatile makes sense to me, as aggressive compiler optimizations (probably more so in the early days of C++) might skip the proper re-evaluation of the variable at every use and perform it only once. But why the comparison is written with + 1.0 is not fully clear to me either. The LAPACK documentation mentions nothing about it. Maybe comparisons to 0.0 were considered “dangerous” at the time of writing? :thinking:

Indeed. I think @arungiridhar 's supposition that this code is designed to work around condition numbers which are within eps of being singular is correct. The warning message that is issued later is “matrix is singular to machine precision”, which is, technically, different from “matrix is singular”.

The history part is important because I think this would be coded in a different way in more modern C++ that has access to template code. Octave has dMatrix.cc for double-value matrices and fMatrix.cc for float-value matrices in separate files where the code is largely repeated. I think the code could be moved up a level to a generic Matrix.cc which then would use a comparison like

if (rcond < std::abs (std::numeric_limits<T>::epsilon ()))

which is clearer about what is happening.

But, that is a long-term project for the development branch. I have my immediate answer.

I can say that even for code from the early 90s, Octave was future-proofed enough that it still builds and runs today. I myself started coding C++ in 1996, not too long after Octave was started, well before ISO standardization and before STL. I recall there used to be a lot of “volatile register int ax” sort of declarations with weird optimizations and random sections of inline x86 assembler. Most of my code from that era will not even compile today, so I’m thankful that Octave stuck to standard patterns.

Yes, the test was intended to check whether the absolute value of rcond was smaller than eps. But at that time there was no standard way to get eps, and the typical way of making this check (the one shown in the LINPACK manual’s examples, for instance) was to write

IF (1.0 + RCOND .EQ. 1.0) GOTO ...

but then “smart” compilers might convert this to

IF (RCOND .NE. 0.0) GOTO ...

which is not the intent of the original expression. The volatile declaration prevents such optimizations on expressions involving the variables declared volatile.

I agree with @rik that we could make better use of templates for this code now but that’s a long term and somewhat low-priority project.