Hi, we've got a monster of a problem after upgrading our production DB servers from SQL 2005 SP2 32-bit to SQL 2005 SP2 64-bit.
The gist of the problem is that we receive the following error:
Msg 682, Level 22, State 148, Line 29
Internal error. Buffer provided to read column value is too small. Run DBCC CHECKDB to check for any corruption.
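Since the error text itself suggests checking for corruption, a minimal consistency check (the database name here is a placeholder, not ours) to rule that out on both sides would be:

```sql
-- Hypothetical database name; run on both the SQL 2005 and SQL 2000 servers.
DBCC CHECKDB ('YourDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS;
```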
We are executing T-SQL UPDATE code across linked servers:
the code runs on SQL 2005 SP2 CU7 x64 and updates a table on SQL 2000 SP4 x86.
Copy of the test code:
SET balance = balance
FROM dbo.tblInvoiceSummary AS wis WITH (NOLOCK)
INNER JOIN dbo.tblInvoiceCharges AS wic WITH (NOLOCK)
ON wic.whse_code = wis.whse_code
AND wic.invoice_no = wis.invoice_no
INNER JOIN dbo.tblWarehouse AS w WITH (NOLOCK)
ON w.ups_user_no = wis.whse_code
WHERE (w.group_code = 'xxxx' or wis.whse_code = 929929)
AND wis.invoice_posted_batch_no Is Null
AND wis.customer_no not in (500,510)
AND IsNull(wis.invoice_voided_batch_no, 0) <> IsNull(wis.invoice_batch_no, 1)
AND custno = library.dbo.padnumber(cast(wis.customer_no as varchar(6)),6)
and id_col = 1000
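A possible workaround (a sketch only: the linked server name LINKED2000, the remote table dbo.tblRemoteTarget, and the key columns are assumptions, not names from our environment) is to stage the qualifying rows locally and then push each update to the SQL 2000 server with EXEC ... AT, so the remote engine performs the write rather than the cross-version linked server path that raises Msg 682. This requires RPC Out enabled on the linked server.

```sql
-- Sketch only: LINKED2000, dbo.tblRemoteTarget, and the column list are
-- hypothetical placeholders. Requires RPC Out enabled on the linked server.
DECLARE @whse int, @inv int, @bal money;

-- Stage the qualifying rows on the local (2005) side first.
SELECT wis.whse_code, wis.invoice_no, wis.balance
INTO #pending
FROM dbo.tblInvoiceSummary AS wis WITH (NOLOCK)
WHERE wis.invoice_posted_batch_no IS NULL;   -- plus the remaining joins/filters above

DECLARE cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT whse_code, invoice_no, balance FROM #pending;
OPEN cur;
FETCH NEXT FROM cur INTO @whse, @inv, @bal;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- EXEC ... AT runs the statement on the SQL 2000 server itself, so the
    -- write never flows through the 64-bit provider's column read path.
    EXEC ('UPDATE dbo.tblRemoteTarget SET balance = ? WHERE whse_code = ? AND invoice_no = ?',
          @bal, @whse, @inv) AT LINKED2000;
    FETCH NEXT FROM cur INTO @whse, @inv, @bal;
END
CLOSE cur;
DEALLOCATE cur;
```

Row-at-a-time will obviously be slower than a single set-based UPDATE, but it should still beat executing the statements by hand.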
All tables have a clustered index.
It is important to note that the error does not occur when the query qualifies no rows; it fires only when at least one row is returned.
We have confirmed this error occurs from several different SQL 2005 SP2 64-bit servers against several different SQL 2000 SP3 and SP4 servers. It occurs on SQL 2005 SP2 64-bit with Cumulative Update 0, CU2, and CU7.
We have opened a case with Microsoft, and they have no idea. They are attempting to reproduce the issue in their labs, but we get the feeling they have seen the error before and are uninterested in working on it because SQL 2000 is involved. Google searches turn up some references to this error, but those cases were all resolved by Cumulative Update 2, which has not resolved ours.
We are stuck with these SQL 2000 servers because they are the back end for our corporate accounting package, Sage AccPac Pro Series. Believe me, we wish they were gone, but so it goes with third-party constraints.
We are able to work around the issue by executing the updates by hand, but as you can imagine this has taken our batch processing from minutes to 3+ hours.
If anyone has any ideas, man we could really use them.