Created attachment 43399 [details]
I found that the difference is always a multiple of a constant value, and this multiple is between 0 and 63.

Strange conversion on a timestamp field imported from an AS/400 database via ODBC. The same wrong conversion occurs in Calc and in Base. Other programs that use the same ODBC driver (e.g. Excel) perform the correct conversion.

=================================
create table aa6411 (
  keyfield int not null primary key,
  ts timestamp not null,
  comment char(30)
)
==================================
delete from aa6411

insert into aa6411 values( 1, '1990-03-02-08.30.00.100000', 'copied from V5R2 manual' )
insert into aa6411 values( 2, '2008-10-01-10.41.54.090000', 'danros66 got time 10:42' )
insert into aa6411 values( 3, '2008-10-01-10.41.52.305824', 'danros66 got time 10:51' )
insert into aa6411 values( 4, '2008-10-01-10.41.54.306208', 'danros66 got time 10:44' )
==================================
select * from aa6411

+------------+----------------------------+
|  KEYFIELD  | TS                         |
+------------+----------------------------+
|          1 | 1990-03-02 08:30:00.100000 |
|          2 | 2008-10-01 10:41:54.090000 |
|          3 | 2008-10-01 10:41:52.305824 |
|          4 | 2008-10-01 10:41:54.306208 |
+------------+----------------------------+
==================================
Same select from Base:

KEYFIELD  TS
--------  --------------
1         02/03/90 08:39
2         01/10/08 10:42
3         01/10/08 10:51
4         01/10/08 10:44

In Calc, I opened a new workbook, pressed F4, drilled down, and dragged the query into the worksheet, getting the same incorrect result.
There was an old OOo bug that miscalculated dates via ODBC in this way; perhaps it has come back.

There are quite a few ODBC error reports now, with various drivers, for setups that used to work with previous versions of OOo but no longer work with version 3.3.x of OOo / LibO. So far I have seen:

Firebird
IBM DB2
MySQL
Omnis Tiger

and another one whose name I can't remember right now. So why is it that they are all suddenly broken?

Alex
The conversion error, as far as I can recall from developments in the OOo project, is possibly down to the automatic management of data types that was introduced during 3.3 development. I seem to recall that Base now uses Calc data types (at least partly) when deciding how to interpret data.

This may be what is causing the problem, in which case it would have to be looked at by someone in the Calc module.

Alex
I'm no expert here, but...

http://cgit.freedesktop.org/libreoffice/core/tree/connectivity/source/inc/odbc/OTools.hxx#n178

converts util::DateTime to an SQL timestamp. The fractions in util::DateTime are in hundredths of a second; they get multiplied by 10000000 (ODBC_FRACTION_UNITS_PER_HSECOND), which looks right: the docs say that SQL timestamp fractions are in billionths of a second, so that makes sense to me.

Meanwhile,

http://cgit.freedesktop.org/libreoffice/core/tree/connectivity/source/drivers/odbcbase/OResultSet.cxx#n656

and

http://cgit.freedesktop.org/libreoffice/core/tree/connectivity/source/drivers/odbcbase/ODatabaseMetaDataResultSet.cxx#n474

appear to convert from SQL timestamps to util::DateTime, and multiply the SQL fractional seconds by 1000. I don't understand that.
Created attachment 50033 [details] I wonder if this makes a blind bit of difference ?
Created attachment 51813 [details]
OResultSet::getTimestamp to get it as string first

First, I have tried Caolán's patch from 2011-08-08 against both IBM i and PostgreSQL, and it appears to fix the problem.

Second... "Data Management: SQL Call Level Interface", X/Open Company Ltd., 1995, in section "4.8.4 Transfers and Conversions > Transfers with Conversion to/from String" <https://www2.opengroup.org/ogsys/publications/viewDocument.html?publicationid=11216&documentid=10812>, page 62, says "C programs must specify SQL_CHAR, forcing string conversion, for SQL data types ... TIMESTAMP ... because there is no direct equivalent for DECIMAL and NUMERIC in C."

I have made this work with the same two databases. I attach a patch for this, but I do not suggest that it would be good to do it this ugly way.
The error is apparent with an ODBC connection to a PostgreSQL table, so I removed reference to db2 from the summary.
lionel: I reckon you're our "database guy" :-), what do you think? #5 suggests #4 works, which sounds promising?

I *presume* the stuff in the SQL Call Level Interface spec is true for the basic CLI, but that we care here about ODBC, which layers extra goodies on top of that stuff.
Let's take my simpler patch for now, so I committed it as http://cgit.freedesktop.org/libreoffice/core/commit/?id=9bb187006a42ee8b0dcb6fe47407752c7873f41b and Lionel can sanity-check it if he gets the time.
Short version: Caolán's fix is the right one, at least for ODBC 3.x.

Long version: Indeed, if one is restricted to the X/Open SQL CLI, one has to go through string conversion and parsing to get timestamp values. But the code discussed here is ODBC, and ODBC provides extensions to X/Open SQL CLI, among which is a proper binary struct for dates, times and timestamps. See SQL_C_TYPE_TIMESTAMP in http://msdn.microsoft.com/en-us/library/ms714556%28VS.85%29.aspx (Microsoft Open Database Connectivity (ODBC) > ODBC Programmer's Reference > ODBC Appendixes > Appendix D: Data Types > C Data Types), and compare the table in section 4.8.4 of X/Open SQL CLI to http://msdn.microsoft.com/en-us/library/windows/desktop/ms712387%28v=vs.85%29.aspx (Microsoft Open Database Connectivity (ODBC) > ODBC Programmer's Reference > ODBC Appendixes > Appendix D: Data Types > Converting Data from SQL to C Data Types > SQL to C: Timestamp).

ODBC provides a conversion from SQL_TYPE_TIMESTAMP to (among others) SQL_C_TYPE_TIMESTAMP, which is what the current code uses and which is IMHO the sanest choice. And indeed, as Caolán wrote, this is billionths of a second and needs to be converted to hundredths of a second, so multiplying it by 1000 makes no sense.

This is confirmed by the above-mentioned ODBC specification for ODBC 3.x and SQL_C_TYPE_TIMESTAMP; I cannot find positive confirmation that it is the same for ODBC 2.x and SQL_C_TIMESTAMP, but my guess is that it is. Anyway, even if ODBC 2.x put, say, thousandths or millionths of a second there, multiplying by 1000 would still make no sense (it should have been a division by 10 or by 10000), so the code was wrong there in any case.

This error seems to have been there since at least 2000, so I kinda wonder how it can have worked well before.