nupic.core-legacy

Change default dtypes to 32bit

Open · scottpurdy opened this issue · 4 comments

Many locations use the Real, Int, and UInt types, which currently default to 64 bits. The size can also be 32 bits; it is determined by the preprocessor variables NTA_BIG_INTEGER and NTA_DOUBLE_PRECISION.
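
For readers not familiar with the setup, the selection looks roughly like the sketch below. The typedef names mirror the issue description and the macros are the ones named above, but the exact definitions, header layout, and how the macros are tested (#ifdef vs. value checks) are assumptions, not the actual nupic.core code:

```cpp
#include <cstdint>

#ifdef NTA_DOUBLE_PRECISION
typedef double Real;          // 64-bit floating point
#else
typedef float Real;           // 32-bit floating point
#endif

#ifdef NTA_BIG_INTEGER
typedef std::int64_t  Int;    // 64-bit signed integer
typedef std::uint64_t UInt;   // 64-bit unsigned integer
#else
typedef std::int32_t  Int;    // 32-bit signed integer
typedef std::uint32_t UInt;   // 32-bit unsigned integer
#endif
```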

This has some problems:

  • When reading the code it is not obvious how many bits are being used; the aliases add a layer of indirection
  • Many places that only need 32 bits use twice that with no benefit
  • The preprocessor conditionals complicate the code
  • Serialization must use 64 bits if we want to maintain precision across save/load

I propose that we:

  • Change algorithms to use explicit precision (32 bit rather than 64 in most, if not all, cases) for all attributes (see the sketch after this list)
  • I'm not sure whether we should keep the unspecified versions or not; curious what @subutai thinks. I suppose they might make sense for looping over a range of integers where the precision doesn't matter and you just want whatever matches the platform. I think the original intent was to allow for high-precision computations if an application needed them, but I don't see how that would actually help anything as currently implemented.
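
As a hypothetical illustration of the first bullet, an algorithm attribute would go from the build-flag-dependent aliases to explicit 32-bit types. The class and member names here are made up for illustration and are not from nupic.core:

```cpp
#include <cstdint>
#include <vector>

class SomeAlgorithm
{
public:
  // Before: widths depend on NTA_BIG_INTEGER / NTA_DOUBLE_PRECISION
  // UInt numColumns_;
  // std::vector<Real> permanences_;

  // After: widths are explicit and independent of build flags
  std::uint32_t numColumns_;
  std::vector<float> permanences_;  // i.e. a Real32-style type
};
```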

@subutai - can you provide some historical context?

CC @chetan51 @oxtopus

scottpurdy · Feb 19 '15

Hmm, I'm not sure I remember all the details. I like the idea of using explicit precision in as many places as possible. I don't know why anyone would want the unspecified versions around - we could make the default Int explicitly 32 bit. Sometimes 32-bit operations are actually slower than 64-bit operations on a 64-bit platform, but that is really rare. In those cases we could explicitly specify 64 bit and maybe guard the loop with architecture #define's for PI support.
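
A minimal sketch of the "guard the loop with architecture #define's" idea; the macro name NTA_ARCH_64 is hypothetical and stands in for whatever flag the build system would define on 64-bit targets:

```cpp
#include <cstdint>

#ifdef NTA_ARCH_64
typedef std::uint64_t LoopCounter;  // use a wider counter where it happens to be faster
#else
typedef std::uint32_t LoopCounter;
#endif

inline float sumFirstN(const float* values, LoopCounter n)
{
  float sum = 0.0f;
  for (LoopCounter i = 0; i < n; ++i)
    sum += values[i];
  return sum;
}
```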

subutai · Feb 24 '15

@subutai - thanks for the info. I like your suggestion of using explicit precision everywhere unless we want to optimize for speed and don't care about precision. Perhaps rather than scattering #define's throughout the code, we could make the unspecified types platform-specific and use them where we don't care about precision and just want speed?
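
A sketch of that split, assuming the aliases are defined once in a shared header; the names are illustrative rather than the actual nupic.core definitions:

```cpp
#include <cstddef>
#include <cstdint>

// Unspecified aliases: platform-width, used only where precision is
// irrelevant and speed is the concern (e.g. loop counters).
typedef std::size_t    UInt;
typedef std::ptrdiff_t Int;

// Explicit-width aliases: used everywhere the number of bits matters
// (attributes, serialization, arithmetic on data values).
typedef std::uint32_t  UInt32;
typedef float          Real32;
```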

scottpurdy · Feb 24 '15

@subutai - one example where it would be nice to have a type that matches the platform bitness is here: https://github.com/numenta/nupic.core/pull/343/files#diff-5fc4121d5c281de47186c9f30e425bdbR3782

The optimization wants to do arithmetic on a pointer, so it needs an integer type that matches the platform's pointer width.
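
The linked code isn't reproduced here, but the general pattern is the one standard C++ already covers with std::uintptr_t (an integer wide enough to hold a pointer) and std::ptrdiff_t (the platform-width result of subtracting pointers), as in this illustrative sketch:

```cpp
#include <cstddef>
#include <cstdint>

// Checking alignment requires an integer wide enough to hold a pointer value;
// a fixed 32-bit type would truncate addresses on a 64-bit build.
inline bool isAligned(const void* p, std::size_t alignment)
{
  return reinterpret_cast<std::uintptr_t>(p) % alignment == 0;
}

// Pointer differences are already platform-width by definition.
inline std::ptrdiff_t elementsBetween(const float* first, const float* last)
{
  return last - first;
}
```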

scottpurdy · Feb 24 '15

OK, though wouldn't it be strange for the unspecified type to be platform-specific? I guess that's how C is anyway.

subutai · Feb 24 '15