Remove num dim usages in xnnpack flatbuffer #19013

Open

lucylq wants to merge 1 commit into main from lfq.remove-num-dim-usages

Conversation

lucylq (Contributor) commented Apr 20, 2026

There is some redundancy here: num_dims is serialized into the flatbuffer even though the dims array already encodes the rank. Use the size of the dims array directly; if we keep using num_dims, we also have to check that it matches the dims array size.

Hopefully this prevents some security issues down the line.
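A minimal sketch of the idea, for illustration only (the template, helper name, and comments are mine; only the `dims()` / `num_dims()` accessors come from the flatbuffer schema discussed here):

```cpp
#include <cstddef>
#include <vector>

// Illustrative sketch, not the actual runtime code: derive the tensor rank
// from the deserialized dims vector itself instead of trusting a separately
// serialized num_dims field. `DimsVector` stands in for the
// flatbuffers-generated vector type.
template <typename DimsVector>
std::vector<size_t> dimsToSizeT(const DimsVector* dims) {
  std::vector<size_t> out;
  if (dims == nullptr) {
    // A missing dims vector should be rejected by the caller (e.g. as
    // InvalidProgram); an empty result signals that here.
    return out;
  }
  out.reserve(dims->size());
  for (const auto d : *dims) {
    out.push_back(static_cast<size_t>(d));
  }
  // out.size() now plays the role the serialized num_dims used to play.
  return out;
}
```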


pytorch-bot Bot commented Apr 20, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/19013

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (2 Unrelated Failures)

As of commit 9ee60b4 with merge base 2d53535:

BROKEN TRUNK - The following jobs failed but were present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

meta-cla Bot added the CLA Signed label (managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed) on Apr 20, 2026
@github-actions bot commented:

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

lucylq force-pushed the lfq.remove-num-dim-usages branch from 5e6601b to 9ee60b4 on April 21, 2026 20:58
tensor_value->num_dims(),
tensor_value->dims()->size());
}

Contributor Author (lucylq) commented:

Ah, this was recently added, but if we use tensor_value->dims()->size() directly then we also don't need the check against tensor_value->num_dims().

@lucylq lucylq marked this pull request as ready for review April 21, 2026 23:11
@lucylq lucylq requested a review from digantdesai as a code owner April 21, 2026 23:11
Copilot AI review requested due to automatic review settings April 21, 2026 23:11
Copilot AI left a comment

Pull request overview

This PR updates the XNNPACK flatbuffer runtime deserialization path to stop relying on the serialized num_dims field and instead derive tensor rank directly from the dims vector length, reducing redundancy and avoiding mismatches between num_dims and dims.

Changes:

  • Removes num_dims/dims consistency checks in defineTensor and builds rank from dims()->size().
  • Passes dims_data.size() to XNNPACK xnn_define_* APIs instead of tensor_value->num_dims() (see the sketch after this list).
  • Updates debug logging format to print size_t correctly.
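For the second bullet, a hedged sketch of the call-site pattern (the xnn_define_tensor_value signature is XNNPACK's public API; subgraph_ptr, buffer_ptr, external_id, flags, and id are assumed from the surrounding defineTensor logic and are illustrative here):

```cpp
// Rank is taken from the deserialized dims vector rather than the
// serialized num_dims field.
xnn_status status = xnn_define_tensor_value(
    /*subgraph=*/subgraph_ptr,
    /*datatype=*/datatype,
    /*num_dims=*/dims_data.size(), // was tensor_value->num_dims()
    /*dims=*/dims_data.data(),
    /*data=*/buffer_ptr,
    /*external_id=*/external_id,
    /*flags=*/flags,
    /*id_out=*/&id);
```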


Comment on lines 584 to 588

      /*datatype=*/datatype,
      /*zero_point=*/zero_point,
      /*scale=*/scale_data,
-     /*num_dims=*/tensor_value->num_dims(),
+     /*num_dims=*/dims_data.size(),
      /*channel_dim=*/qparams->channel_dim(),
Copilot AI commented Apr 21, 2026

PerChannelGroupQuant path still dereferences tensor_value->dims()->Get(0/1) earlier in this case block without any null / length validation. Since the earlier num_dims/dims consistency checks were removed, a malformed (or malicious) flatbuffer with dims == nullptr (or a shorter-than-expected dims vector) can now crash here instead of returning InvalidProgram. Please add an ET_CHECK_OR_RETURN_ERROR(dims_data.size() >= 2, InvalidProgram, ...) (and use dims_data[0]/dims_data[1]) before computing output_channels/input_channels so this remains robust against corrupted inputs.
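A hedged sketch of the suggested guard (the error message and local variable names are illustrative; ET_CHECK_OR_RETURN_ERROR and InvalidProgram are the existing ExecuTorch error-handling idiom named above):

```cpp
// Validate the deserialized dims before indexing into them for the
// per-channel group quantization path.
ET_CHECK_OR_RETURN_ERROR(
    dims_data.size() >= 2,
    InvalidProgram,
    "Per-channel group quantized tensor expects at least 2 dims, got %zu",
    dims_data.size());
const size_t output_channels = dims_data[0];
const size_t input_channels = dims_data[1];
```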


Labels

CLA Signed, security-fix

Projects

None yet


2 participants