Bug 32834 - bad result using mdeterm
Summary: bad result using mdeterm
Status: NEW
Alias: None
Product: LibreOffice
Classification: Unclassified
Component: Spreadsheet
Version: 3.3.0 RC2
Hardware: All
OS: All
Importance: low enhancement
Assignee: Not Assigned
QA Contact:
URL:
Whiteboard:
Keywords:
Depends on:
Blocks: 32866
Reported: 2011-01-04 14:57 UTC by profesw
Modified: 2013-01-26 17:56 UTC
CC List: 2 users

See Also:



Description profesw 2011-01-04 14:57:27 UTC
When I enter the matrix
1 2 3
4 5 6
7 8 9

and use MDETERM(A1:C3), the result should be 0. Instead it is -9.51712667007776E-016.

So when I use MINVERSA(A1:C3), Calc shows an incorrect answer.

I am using Ubuntu 10.10, Gateway Centrino (32-bit), LibreOffice 3.3.0
OOO330m18 (Build:4)
tag libreoffice-3.3.0.2, Ubuntu package 1:3.3.0~rc2-3maverick1
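
For illustration, the same behavior can be reproduced outside Calc; a minimal sketch in Python, assuming numpy is installed (numpy's det() also goes through an LU decomposition, and the exact magnitude of the residue varies by platform and library):

  # Sketch only: a singular integer matrix yields a tiny nonzero
  # residue from the floating-point LU decomposition instead of 0.
  import numpy as np

  A = np.array([[1.0, 2.0, 3.0],
                [4.0, 5.0, 6.0],
                [7.0, 8.0, 9.0]])
  print(np.linalg.det(A))  # e.g. 6.66133814775094e-16, not 0.0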
Comment 1 Rainer Bielefeld Retired 2011-01-04 21:49:12 UTC
More or less [Reproducible] with "LibreOffice 3.3.0 RC2 - WIN XP German UI [OOO330m18 (build 3.3.0.2)]". I tried as per the report and got the result "6,66133814775094E-016", which differs from the reporter's result. I have to believe that the result should be "0"; OOo 3.1.1 (on another PC) also returned "0".

With all cells = "1" I get the correct result "0" (I read about this case in <http://www.openoffice.org/issues/show_bug.cgi?id=58177>).

Probably not a problem for many users, but a wrong calculation is serious.
Comment 2 Rainer Bielefeld Retired 2011-01-04 22:08:27 UTC
Sorry, wrong OS in Comment 1; I tested with Win7 64-bit.
Comment 3 Kohei Yoshida (inactive) 2011-01-04 22:10:38 UTC
At least we are not doing any worse than Gnumeric and Excel 2007.  Calculations with numbers represented as binary floating point tend to be weak when the result is very close to zero; this is due to the limitations of the PC's FPU.  Any attempt to work around the hardware limitation will either be 1) very limited in coverage, or 2) severely affect performance.
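
A minimal illustration of the near-zero weakness, in plain Python (any IEEE 754 double arithmetic behaves the same way):

  # 0.1, 0.2 and 0.3 have no exact binary representation, so a result
  # that is mathematically 0 comes out as a tiny residue instead.
  print(0.1 + 0.2 - 0.3)   # 5.551115123125783e-17, not 0.0
  print(0.1 + 0.2 == 0.3)  # False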

Serious statistical software does its computation in software that takes over the FPU's work, so it runs much slower (since it doesn't rely on the hardware for the calculations) but is more accurate.  We make some attempt to cover common cases, but we can't cover all of them, or the application would slow down significantly.
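
A sketch of the software-arithmetic idea, using Python's fractions module for exact rationals; det_exact is a hypothetical helper for illustration, not Calc code:

  from fractions import Fraction

  def det_exact(m):
      # Gaussian elimination over exact rationals: much slower than the
      # FPU, but every pivot that should be zero really is zero.
      m = [[Fraction(x) for x in row] for row in m]
      n, sign, det = len(m), 1, Fraction(1)
      for k in range(n):
          p = next((r for r in range(k, n) if m[r][k] != 0), None)
          if p is None:
              return Fraction(0)        # no nonzero pivot: singular
          if p != k:
              m[k], m[p] = m[p], m[k]   # row swap flips the sign
              sign = -sign
          det *= m[k][k]
          for r in range(k + 1, n):
              f = m[r][k] / m[k][k]
              for c in range(k, n):
                  m[r][c] -= f * m[k][c]
      return sign * det

  print(det_exact([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))  # prints 0, exactly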

So, I have to set this to lower priority.
Comment 4 Kohei Yoshida (inactive) 2011-01-04 22:13:12 UTC
Perhaps Regina can shed some light on this and tell us whether there is any way to improve the numerical accuracy of this function, or none at all.
Comment 5 Regina Henschel 2011-01-05 04:07:34 UTC
The calculation is done via an LU decomposition, inside which the elements are scaled. They therefore acquire decimal places, which leads to the known problems. The problem is not solved by switching to a QR decomposition; I have already tried that. There is also something called "single value decomposition" which might be better, but I don't know enough about it.

To get better results for integer elements, it would be possible to branch on the column or row count and calculate the determinant directly for up to 3 columns (rule of Sarrus), or perhaps 4 (expansion into four 3x3 Sarrus determinants). But for more columns the number of required determinants grows rapidly, so it would become a performance problem.
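
A minimal sketch of the 3-column special case (hypothetical code, not a patch); the rule of Sarrus involves no scaling or division, so for integer elements the result is exact:

  def det3_sarrus(m):
      # Rule of Sarrus: six products and five additions, all in integer
      # arithmetic when the matrix elements are integers.
      (a, b, c), (d, e, f), (g, h, i) = m
      return a*e*i + b*f*g + c*d*h - c*e*g - a*f*h - b*d*i

  print(det3_sarrus([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))  # prints 0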

Calculating the matrix inverse directly is simple for 2 columns (= 1 determinant). For 3 columns you already need 10 determinants, which would still be feasible. But again, for more columns it becomes a performance problem.
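
A sketch of that direct 3-column inverse (hypothetical code): the nine cofactors are 2x2 determinants, and the 3x3 determinant itself makes ten:

  from fractions import Fraction

  def inv3(m):
      # Cofactor (i, j): signed determinant of the 2x2 minor.
      def cof(i, j):
          (a, b), (c, d) = [[m[r][s] for s in range(3) if s != j]
                            for r in range(3) if r != i]
          return (-1) ** (i + j) * (a * d - b * c)
      # The 3x3 determinant, expanded along the first row.
      det = sum(m[0][j] * cof(0, j) for j in range(3))
      if det == 0:
          raise ZeroDivisionError("matrix is singular")
      # Inverse = transposed cofactor matrix (adjugate) divided by det;
      # with Fraction entries everything stays exact.
      return [[cof(j, i) / det for j in range(3)] for i in range(3)]

  M = [[Fraction(x) for x in row] for row in ([2, 0, 0], [0, 3, 0], [0, 0, 4])]
  print(inv3(M))  # diagonal 1/2, 1/3, 1/4, exact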

Introducing such a case distinction would make a good "Easy Hacks" task.

Another approach is to invalidate the result in such cases. You can find some thoughts on this in the comment "Possible checks for ill-condition:" in ScInterpreter::ScMatInv() in interpr5.cxx
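
For illustration, one generic heuristic of that kind (a sketch only, assuming numpy; not the actual interpr5.cxx check) estimates the condition number and refuses the inverse when it exhausts double precision:

  import numpy as np

  def checked_inverse(A):
      inv = np.linalg.inv(A)
      # cond(A) ~ ||A|| * ||inv(A)||; once cond * eps reaches 1, the
      # result has no trustworthy significant digits left.
      cond = np.linalg.norm(A, np.inf) * np.linalg.norm(inv, np.inf)
      if cond * np.finfo(float).eps >= 1.0:
          raise ValueError("matrix is numerically singular")
      return inv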
Comment 6 Kohei Yoshida (inactive) 2011-01-05 05:41:24 UTC
Thanks Regina for your insight. :-)
Comment 7 Regina Henschel 2011-01-05 08:15:59 UTC
typo "singular value decomposition"

