fix: Do not mutate external TensorInfo in import_memory()#1270
Conversation
gunes-arm
left a comment
Sorry, these were comments from last week. I forgot to post. Feel free to take it with another reviewer. If not, I'll prioritize this when I get back.
Force-pushed from fb2d829 to 70794ba
TensorInfo  _info_owned{};            /**< Tensor's metadata. */
TensorInfo *_info_external{nullptr};  /**< External Tensor's metadata */
size_t      _alignment{};             /**< Tensor's alignment in bytes */
bool        _owns_info{true};         /**< True when allocator owns metadata; false for soft_init(). */
I think this field is redundant. Elsewhere, _info_external is checked against nullptr to decide whether the info is owned. If we need a convenient accessor, a member function named owns_info() that performs the _info_external != nullptr check seems the best approach.
For example, info() is already resolved based on this check, not on the member _owns_info:
TensorInfo &ITensorAllocator::info()
{
return (_info_external != nullptr) ? *_info_external : _info_owned;
}
I'm confused, do you want us to remove _owns_info?
What I'm challenging is why we need it, given that this logic already exists.
Good point, it was just for clarity I suppose. Happy to remove it and just use `_info_external != nullptr`.
…soft_init

When a TensorInfo is shared with a Tensor via allocator()->soft_init(), the allocator stores a reference to the caller-provided TensorInfo. If import_memory() is later called, the allocator currently sets TensorInfo::is_resizable(false). Because the TensorInfo is shared, this mutates the caller-owned metadata. Subsequent operations that expect the TensorInfo to remain resizable (e.g. extend_padding()) can fail.

This is observable in multi-threaded or reused inference paths (for example multi-threaded CpuGemmConv2d), where the same TensorInfo is reused across runs. In such cases import_memory() unexpectedly changes the caller-visible TensorInfo state.

Fix this by distinguishing between allocator-owned and external TensorInfo instances:
- Track ownership with `_owns_info`.
- Track imported buffers with `_is_imported`.
- `soft_init()` marks the TensorInfo as externally owned.
- `import_memory()` only mutates `TensorInfo::is_resizable()` when the allocator owns the TensorInfo.
- When referencing external TensorInfo, allocator logic relies on the internal `_is_imported` state instead of mutating the caller metadata.

This preserves existing behaviour for allocator-owned TensorInfo while avoiding unexpected mutations of caller-provided metadata. The change is minimal and does not modify public APIs.

Test:
- Add/enable UNIT test `ImportMemoryDoesNotMutateExternalInfo`.
- Verify that `shared_info.is_resizable()` remains true after import_memory() and that `extend_padding()` succeeds.

Change-Id: Iac9d5807789a8ed74205cdee098b36d0f5269d59
Signed-off-by: Pablo Marquez Tello <pablo.tello@arm.com>
Force-pushed from 70794ba to 36ba422
ARM_COMPUTE_EXPECT(shared_info.is_resizable(), framework::LogLevel::ERRORS);

// extend_padding should succeed (not throw).
ARM_COMPUTE_EXPECT_NO_THROW(shared_info.extend_padding(PaddingSize(1, 1, 1, 1)), framework::LogLevel::ERRORS);
I have some doubts about allowing the user to extend padding after they import memory. We were disallowing it, but after this change we're allowing it.
For example, suppose shared_info has been used as a soft initializer for view_tensor, and view_tensor's buffer has been imported from out_tensor, which has a fixed size of 16 x 4 x 4 x sizeof(data_type), e.g. 512 bytes for FP16.
Once shared_info's padding is extended, view_tensor's strides change. Any subsequent use of view_tensor, such as printing it, filling it with values, or feeding it to a function, will then go wrong: it can segfault, or at the least compute incorrect results, because accesses will run out of bounds of the imported buffer due to the padding.