Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/19013

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (2 Unrelated Failures)

As of commit 9ee60b4 with merge base 2d53535:

BROKEN TRUNK - The following jobs failed but were present on the merge base.
👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This PR needs a
5e6601b to 9ee60b4
```cpp
    tensor_value->num_dims(),
    tensor_value->dims()->size());
}
```
Ah, this was recently added, but if we use `tensor_value->dims()->size()` directly then we also don't need the check on `tensor_value->num_dims()`.
Pull request overview
This PR updates the XNNPACK flatbuffer runtime deserialization path to stop relying on the serialized `num_dims` field and instead derive tensor rank directly from the `dims` vector length, reducing redundancy and avoiding mismatches between `num_dims` and `dims`.
Changes:
- Removes `num_dims`/`dims` consistency checks in `defineTensor` and builds rank from `dims()->size()`.
- Passes `dims_data.size()` to XNNPACK `xnn_define_*` APIs instead of `tensor_value->num_dims()`.
- Updates debug logging format to print `size_t` correctly.
```diff
  /*datatype=*/datatype,
  /*zero_point=*/zero_point,
  /*scale=*/scale_data,
- /*num_dims=*/tensor_value->num_dims(),
+ /*num_dims=*/dims_data.size(),
  /*channel_dim=*/qparams->channel_dim(),
```
The PerChannelGroupQuant path still dereferences `tensor_value->dims()->Get(0/1)` earlier in this case block without any null / length validation. Since the earlier `num_dims`/`dims` consistency checks were removed, a malformed (or malicious) flatbuffer with `dims == nullptr` (or a shorter-than-expected `dims` vector) can now crash here instead of returning `InvalidProgram`. Please add an `ET_CHECK_OR_RETURN_ERROR(dims_data.size() >= 2, InvalidProgram, ...)` (and use `dims_data[0]`/`dims_data[1]`) before computing `output_channels`/`input_channels` so this remains robust against corrupted inputs.
There's some redundancy here where `num_dims` is serialized into the flatbuffer. Use the size from the `dims` array directly; if we use `num_dims`, we have to check that it matches up with the `dims` array size.
Hopefully this prevents some security issues down the line.