Monday, 9 November 2015

Double Versus Decimal in C#

Double is useful for scientific computations (such as computing spatial coordinates). Decimal is useful for financial computations and values that are “man-made” rather than the result of real-world measurements. Here’s a summary of the differences.
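To see the representational difference in action, here is a minimal console sketch (class and variable names are illustrative). 0.1 and 0.2 have no exact base-2 representation, so double rounds them, while decimal stores the digits exactly:

    using System;

    class RepresentationDemo
    {
        static void Main()
        {
            // 0.1 and 0.2 cannot be represented exactly in base 2,
            // so the double sum picks up a tiny rounding error.
            double d = 0.1 + 0.2;
            Console.WriteLine(d == 0.3);         // False
            Console.WriteLine(d.ToString("R"));  // 0.30000000000000004

            // Decimal stores digits in base 10, so 0.1m is exact.
            decimal m = 0.1m + 0.2m;
            Console.WriteLine(m == 0.3m);        // True
            Console.WriteLine(m);                // 0.3
        }
    }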
  

Category                     Double                       Decimal
Internal Representation      Base 2                       Base 10
Precision (decimal digits)   15-16 significant figures    28-29 significant figures
Range                        ±(~10^-324 to ~10^308)       ±(~10^-28 to ~10^28)
Special Values               +0, -0, +∞, -∞, and NaN      None
Speed                        Native to processor          Non-native to processor
                                                          (about 10 times slower than double)
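The special-values row matters in practice: dividing a double by zero quietly yields infinity or NaN, while decimal has no such values and throws instead. A small sketch (variable names are illustrative):

    using System;

    class SpecialValuesDemo
    {
        static void Main()
        {
            double zero = 0.0;
            Console.WriteLine(1.0 / zero);   // prints Infinity (or ∞ on newer
                                             // runtimes); no exception thrown
            Console.WriteLine(double.IsNaN(zero / zero));  // True

            decimal mZero = 0m;
            try
            {
                Console.WriteLine(1m / mZero);  // decimal has no infinity...
            }
            catch (DivideByZeroException)
            {
                Console.WriteLine("decimal throws instead");  // ...so this runs
            }
        }
    }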

Most business applications should probably use decimal rather than float or double. The rule of thumb: man-made values such as currency are usually better represented in decimal floating point, while real-world measurements belong in binary floating point.
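For example, here is a hypothetical price-summing loop: adding a 10-cent item ten times drifts away from 1.00 in double but stays exact in decimal.

    using System;

    class CurrencyDemo
    {
        static void Main()
        {
            double dTotal = 0.0;
            decimal mTotal = 0m;

            // Add ten 10-cent items to each running total.
            for (int i = 0; i < 10; i++)
            {
                dTotal += 0.1;   // base-2 rounding error accumulates
                mTotal += 0.1m;  // exact in base 10
            }

            Console.WriteLine(dTotal == 1.0);         // False
            Console.WriteLine(dTotal.ToString("R"));  // 0.9999999999999999
            Console.WriteLine(mTotal == 1.0m);        // True
        }
    }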
 
Hope this helps.
 
--
Happy Coding
Gopinath

 
