go-mssqldb
Support for 'money' type in bulk import
The money type was not supported for bulk insert. I took the simple way out and implemented it as an int64, so you need to pass an int64 (which fits our use case very well, since we have a Money type that wraps an int64).
I figured out the permutation of bytes by looking at decodeMoney in types.go.
It appears the build is OK except for one SQL Server version that wasn't available. I don't know how to restart it (if it is a transient problem), and I assume I can't fix the CI pipeline.
The error in the pipeline is:
Start-Service : Service 'SQL Server (SQL2014) (MSSQL$SQL2014)' cannot be started due to the following error: Cannot start service MSSQL$SQL2014 on computer '.'
...so it doesn't seem to indicate an actual compatibility problem, just a failed CI run.
Codecov Report
Merging #430 into master will increase coverage by 0.19%. The diff coverage is 82.14%.
@@ Coverage Diff @@
## master #430 +/- ##
==========================================
+ Coverage 68.09% 68.28% +0.19%
==========================================
Files 21 21
Lines 5093 5121 +28
==========================================
+ Hits 3468 3497 +29
+ Misses 1412 1410 -2
- Partials 213 214 +1
| Impacted Files | Coverage Δ | |
|---|---|---|
| bulkcopy.go | 57.83% <82.14%> (+1.52%) | :arrow_up: |
| mssql_go110.go | 69.23% <0%> (-3.19%) | :arrow_down: |
| mssql.go | 79.71% <0%> (-0.29%) | :arrow_down: |
| tds.go | 66.73% <0%> (ø) | :arrow_up: |
| types.go | 77.81% <0%> (+0.64%) | :arrow_up: |
Continue to review full report at Codecov.
Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 4e0d7dc...3c8d9ce. Read the comment docs.
Looks like it passed after retry. I will try to look at this CR today.
I think it would be pretty non-intuitive to have an implicit conversion from 12345 to 1.2345. I would like to hear other opinions on that.
I tried to support mssqldb.Money{...}, but it isn't passed through database/sql to the driver.
Strings?...
I am thinking perhaps one needs an API other than database/sql for bulk import: a way of somehow getting hold of the driver Conn from the *sql.Tx. But I could not find one.
It is the same problem in the open numeric/decimal bulk insert PR.
Using float64 should at least be out of the question, IMO. (I do not wish to use float64 for numeric either.)
So: either a) int64, b) strings, or c) do not go through database/sql?
I parse it as a string and repack it, using the github.com/shopspring/decimal package.
	case typeMoney, typeMoney4, typeMoneyN:
		// Earlier attempt (commented out): coerce every input through float64.
		// switch v := val.(type) {
		// case int:
		// 	value = float64(v)
		// case int8:
		// 	value = float64(v)
		// case int16:
		// 	value = float64(v)
		// case int32:
		// 	value = float64(v)
		// case int64:
		// 	value = float64(v)
		// case float32:
		// 	value = float64(v)
		// case float64:
		// 	value = v
		// case string:
		// 	value, err = strconv.ParseFloat(v, 64)
		// 	if err != nil {
		// 		err = fmt.Errorf("mssql: invalid string for money column: %s", v)
		// 		return
		// 	}
		// default:
		// 	err = fmt.Errorf("mssql: invalid type for money column: %s", v)
		// 	return
		// }
		var value float64
		if col.ti.Size == 4 {
			res.ti.Size = 4
			res.buffer = make([]byte, 4)
			var i32 uint32
			switch v := val.(type) {
			case string:
				if strings.Contains(v, ".") {
					// smallmoney stores value*10000: pad or truncate the
					// fraction to exactly four digits before parsing.
					intStr := strings.Split(v, ".")
					frac := intStr[1]
					if len(frac) > 4 {
						frac = frac[:4]
					}
					for len(frac) < 4 {
						frac += "0"
					}
					i64, _ := strconv.ParseInt(intStr[0]+frac, 10, 64)
					i32 = uint32(i64)
				}
				// note: strings without a decimal point (and non-string
				// values) currently fall through and encode as zero
			}
			binary.LittleEndian.PutUint32(res.buffer, i32)
		} else if col.ti.Size == 8 {
			res.ti.Size = 8
			res.buffer = make([]byte, 8)
			var high uint32
			var low uint32
			switch v := val.(type) {
			case int:
				value = float64(v)
				intValue := int64((value + 0.000000000000001) * 10000)
				high = uint32(intValue >> 32)
				low = uint32(intValue) // low 32 bits; uint32 truncation masks for us
			case int8:
				value = float64(v)
				intValue := int64((value + 0.000000000000001) * 10000)
				high = uint32(intValue >> 32)
				low = uint32(intValue)
			case int16:
				value = float64(v)
				intValue := int64((value + 0.000000000000001) * 10000)
				high = uint32(intValue >> 32)
				low = uint32(intValue)
			case int32:
				value = float64(v)
				intValue := int64((value + 0.000000000000001) * 10000)
				high = uint32(intValue >> 32)
				low = uint32(intValue)
			case int64:
				value = float64(v)
				intValue := int64((value + 0.000000000000001) * 10000)
				high = uint32(intValue >> 32)
				low = uint32(intValue)
			case string:
				var intValue int64
				if strings.Contains(v, ".") {
					// same four-digit scaling as above, done on the string
					// so no precision is lost to float64
					intStr := strings.Split(v, ".")
					frac := intStr[1]
					if len(frac) > 4 {
						frac = frac[:4]
					}
					for len(frac) < 4 {
						frac += "0"
					}
					intValue, _ = strconv.ParseInt(intStr[0]+frac, 10, 64)
				} else {
					value, _ = strconv.ParseFloat(v, 64)
					intValue = int64((value + 0.000000000000001) * 10000)
				}
				high = uint32(intValue >> 32)
				low = uint32(intValue)
			}
			binary.LittleEndian.PutUint32(res.buffer[0:4], high)
			binary.LittleEndian.PutUint32(res.buffer[4:8], low)
		} else {
			err = fmt.Errorf("mssql: invalid size of money column")
		}
I am super late on this review. I'll try to get to this tomorrow.
I like the code, I agree with the objection raised. I need to figure out decimals, period.
I made a proposal that would help this effort: https://github.com/golang/go/issues/30870
I would like to handle money/decimals by checking for either the *apd.Decimal type or the DecimalDecompose type.
I don't want to do a pre-multiplied int type.
I would have done that if I could figure out how. Seems like database/sql will only pass primitive types down to the driver; I couldn't find a way around that when I wrote this.
In go1.13 database/sql will pass down and accept the decimal decompose interface. I'd like to structure future decimal uses around that if possible.
What is the current status regarding decimal and database/sql? I tried to Google a bit, but it wasn't easy to find the answer. Is it now possible for us to re-implement this in a more standard way and submit it upstream?
~To answer my own question: Nothing seems to be in the way now; database/sql will pass through values that implement Decompose/Compose. I will see about re-implementing this PR soonish then.~
No, I think I got that wrong. IIUC, the support currently present in Go may be reverted in the future, since it is experimental and the proposal has not been accepted.