Fix issue with sorting when simulating more than 2 billion neurons
When using Boost's integer_sort, we were sorting the Sources by converting their node ids to int, which on most platforms is a 32-bit type. For global node ids that do not fit into 32 bits (> 2,147,483,647), the sorting therefore fails.
Would it not be better to use decltype to deduce the return type?
With decltype( boost::get< 0 >( s ).get_node_id() ) you don't need to care about the precision of the type returned by the function; it will always be correct.
I had been wondering if that was possible, but I also sometimes feel that these new C++ constructs can make code more difficult to read. A plain size_t is much easier to parse and interpret than a 40-character decltype(...).
@heplesser (cc @JanVogelsang) I get your point, but the return type of Source::get_node_id might change in the future, or we might use the sort on different types; therefore, I suggest the following:
template < typename T >
inline uint64_t
operator()( boost::tuples::tuple< nest::Source&, T& > s, unsigned offset )
{
  using RetType = decltype( boost::get< 0 >( s ).get_node_id() );
  constexpr auto lower_bound = std::numeric_limits< uint64_t >::lowest() <= std::numeric_limits< RetType >::lowest();
  constexpr auto upper_bound = std::numeric_limits< uint64_t >::max() >= std::numeric_limits< RetType >::max();
  static_assert( lower_bound && upper_bound, "Lossy conversion" );
  return boost::get< 0 >( s ).get_node_id() >> offset;
}