All three of the double-exponential integrators are thread safe as long as BOOST_MATH_NO_ATOMIC_INT is not set. Since the integrators store a large amount of data that is fairly expensive to compute, it is recommended that these objects are stored and reused as much as possible.
Internally, all three of the double-exponential integrators use the same caching strategy: they allocate all the vectors needed to store the maximum permitted levels, but populate only the first few levels when constructed. This means a minimal amount of memory is actually allocated when the integrator is first constructed; already-populated levels can be accessed via a lock-free atomic read, and only populating new levels requires taking a lock.
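
For example, the following sketch (not taken from the library's own examples; the integrand and the number of threads are arbitrary choices) constructs a single tanh_sinh<double> integrator and shares it between several threads:

    #include <boost/math/quadrature/tanh_sinh.hpp>
    #include <cmath>
    #include <cstddef>
    #include <iostream>
    #include <thread>
    #include <vector>

    int main()
    {
        // Construct once and reuse: the abscissa/weight tables are the
        // expensive part, and they are shared by every call to integrate().
        boost::math::quadrature::tanh_sinh<double> integrator;

        auto f = [](double x) { return std::exp(-x * x); };

        std::vector<double> results(4);
        std::vector<std::thread> workers;
        for (std::size_t i = 0; i < results.size(); ++i)
        {
            // Concurrent calls are safe provided BOOST_MATH_NO_ATOMIC_INT is
            // not defined: reads of already-populated levels are lock-free,
            // and any newly required levels are populated under a lock.
            workers.emplace_back([&integrator, &results, f, i]() {
                results[i] = integrator.integrate(f, 0.0, 1.0);
            });
        }
        for (auto& t : workers)
            t.join();

        for (double r : results)
            std::cout << r << '\n';
    }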
In addition, the three built-in floating-point types (plus __float128 when available) have their first 7 levels pre-computed: this is generally sufficient for the vast majority of integrals - even at quad precision - and means that integrators for these types are relatively cheap to construct.
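
For other types - multiprecision types, for example - the nodes and weights must be computed at runtime, so construction is correspondingly more expensive and reuse matters all the more. The sketch below is illustrative only (the choice of cpp_bin_float_50, the integrand, and the function-local static are assumptions, not library requirements): it keeps a single integrator alive across calls.

    #include <boost/math/quadrature/tanh_sinh.hpp>
    #include <boost/multiprecision/cpp_bin_float.hpp>

    using mp_t = boost::multiprecision::cpp_bin_float_50;

    // Hypothetical helper: integrate exp(-x*x) over [a, b] at 50 decimal
    // digits, constructing the (expensive) integrator only once.
    mp_t integrate_gaussian(mp_t a, mp_t b)
    {
        // Constructed on the first call and reused thereafter; construction
        // is thread safe (C++11 magic statics), and so are concurrent calls
        // to integrate(), as described above.
        static boost::math::quadrature::tanh_sinh<mp_t> integrator;
        auto f = [](const mp_t& x) -> mp_t { return exp(-x * x); };
        return integrator.integrate(f, a, b);
    }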