Update docs regarding Decimal scientific notation
- Include a link to the documentation section or the example
https://github.com/oracle/python-cx_Oracle/blob/master/doc/src/user_guide/sql_execution.rst#fetched-number-precision
- Describe the confusion
Python's decimal.Decimal defaults to scientific notation in its string representation of very small values. Since the docs suggest using Decimal to retain numeric precision, I think it makes sense to also mention this behaviour, because depending on the developer's use case it may or may not be a problem.
Python 3.7.4 (default, Jul 9 2019, 18:13:23)
[Clang 10.0.1 (clang-1001.0.46.4)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from decimal import Decimal
>>> print(Decimal('0.000001'))
0.000001
>>> print(Decimal('0.0000001'))
1E-7
>>> print("{:f}".format(Decimal('0.0000001')))
0.0000001
- Suggest changes that would help
Add a code snippet and explanation similar to the above to the corresponding documentation section.
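For instance, something along these lines could sit next to the existing example in that section. This is only a rough sketch: it assumes an already-open connection named connection and a hypothetical table t with a NUMBER column small_value; the output type handler part mirrors what the linked section already shows.

import decimal
import cx_Oracle

def number_to_decimal(cursor, name, default_type, size, precision, scale):
    # fetch NUMBER columns as decimal.Decimal to retain precision,
    # as recommended in the linked documentation section
    if default_type == cx_Oracle.NUMBER:
        return cursor.var(decimal.Decimal, arraysize=cursor.arraysize)

connection.outputtypehandler = number_to_decimal

cursor = connection.cursor()
cursor.execute("select small_value from t")
for (value,) in cursor:
    # str(value) may use scientific notation (e.g. 1E-7);
    # formatting with 'f' gives a fixed-point string instead
    print("{:f}".format(value))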
I don't know why the Bug label was added; it's not a bug. It should be labeled under Documentation Improvements instead.
I believe that has to do with the display of the value, not with how it is represented internally. We can add a note to the documentation, however, to clarify this.
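To illustrate the difference (plain Python, no database involved): the value itself is identical, only the default string form uses the exponent.

>>> from decimal import Decimal
>>> Decimal('0.0000001') == Decimal('1E-7')
True
>>> str(Decimal('1E-7'))
'1E-7'
>>> format(Decimal('1E-7'), 'f')
'0.0000001'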
Sorry, yes, I actually meant to refer to the string representation.
Thanks, a note would be great!
Bit more background. When I extract data from a database (which wasn't Oracle until recently), e.g. to load into other systems, I rarely fiddle with how the data is written out, because Python and/or the DB client takes care of it. In this case, using Decimal may set the developer up for an unexpected failure, but I can also see why it's a great solution for retaining precision. I just noticed there was an attempt to make Decimal the default, which was reverted for the same reason :)
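As an example of the kind of failure I mean (a hypothetical export path, not tied to any particular driver): a naive CSV export relies on str() and writes the scientific form, while an explicit format does not.

import csv
import io
from decimal import Decimal

rows = [(1, Decimal('0.0000001'))]   # hypothetical fetched rows

buf = io.StringIO()
writer = csv.writer(buf)
for row_id, value in rows:
    writer.writerow([row_id, value])               # csv calls str(), writes "1E-7"
    writer.writerow([row_id, format(value, 'f')])  # writes "0.0000001"

print(buf.getvalue())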