NXP backend: added support for aten.conv_transpose1d and refactored convolution_converter #19004
Conversation
See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/19004
Note: Links to docs will display an error until the docs builds have been completed.
As of commit 8568135 with merge base 063f9c9: ❌ 11 Awaiting Approval, 1 New Failure.
AWAITING APPROVAL - The following workflows need approval before CI can run.
NEW FAILURE - The following job has failed.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@pytorchbot label "release notes: nxp"
@pytorchbot label "module: nxp"
Pull request overview
Adds NXP backend support for aten.conv_transpose1d by moving 1D-to-2D convolution lowering out of the IR converter and into a dedicated ATen graph rewrite pass, plus adjusts quantization handling for grouped transposed convolutions.
Changes:
- Introduce `ConvertConv1dToConv2dPass` to rewrite `aten.conv1d`/`aten.conv_transpose1d` into 2D equivalents via unsqueeze / conv2d (or conv_transpose2d) / squeeze.
- Remove 1D-convolution handling from the TFLite `convolution_converter` and enable the new pass in the default Neutron ATen pass pipeline.
- Update quantizer patterns/utilities to correctly derive bias qparams for grouped `conv_transpose2d` and fix per-channel axis handling; add comprehensive tests for the new pass.
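The unsqueeze/conv2d/squeeze rewrite the pass performs can be illustrated with a minimal sketch (not the NXP pass itself, just the underlying equivalence): a 1D convolution becomes a 2D convolution with a height-1 spatial dimension.

```python
import torch
import torch.nn.functional as F

# Sketch of the 1D -> 2D rewrite: insert a dummy height dimension of 1
# into both the input and the weight, run conv2d, then squeeze it out.
x = torch.randn(1, 4, 16)  # (N, C_in, L)
w = torch.randn(8, 4, 3)   # (C_out, C_in, K)

out_1d = F.conv1d(x, w, stride=2)

x2 = x.unsqueeze(2)        # (N, C_in, 1, L)
w2 = w.unsqueeze(2)        # (C_out, C_in, 1, K)
out_2d = F.conv2d(x2, w2, stride=(1, 2)).squeeze(2)

assert torch.allclose(out_1d, out_2d, atol=1e-5)
```

The same trick applies to `conv_transpose1d`/`conv_transpose2d`, with the transposed weight layout `(C_in, C_out/groups, K)` unsqueezed the same way.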
Reviewed changes
Copilot reviewed 9 out of 9 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| backends/nxp/aten_passes/convert_1d_conv_to_2d.py | New ATen pass converting 1D conv/transposed conv to 2D form with shape/meta propagation. |
| backends/nxp/aten_passes/neutron_aten_pass_manager.py | Registers the new pass in the default Neutron ATen pass sequence. |
| backends/nxp/backend/ir/converter/node_converters/ops_converters/convolution_converter.py | Removes 1D convolution conversion logic; now expects only 2D weights rank. |
| backends/nxp/quantizer/utils.py | Adds helper to “pad”/repeat weight scales when deriving bias qparams for grouped transposed conv. |
| backends/nxp/quantizer/patterns.py | Drops 1D conv patterns; updates ConvTranspose2d quantization (bias qparams + correct per-channel axis). |
| backends/nxp/quantizer/neutron_quantizer.py | Removes the Conv1dPattern registration (since 1D conv is rewritten earlier). |
| backends/nxp/tests/test_convert_1d_conv_to_2d.py | New test suite covering conv1d + conv_transpose1d rewrite and full pipeline delegation. |
| backends/nxp/tests/models.py | Updates Conv1d test module API and adds ConvTranspose1d + runtime-weight conv1d models for testing. |
| backends/nxp/tests/ir/converter/node_converter/test_conv_converter.py | Removes prior conv1d conversion tests (superseded by new pass tests). |
```diff
 class Conv1dModule(torch.nn.Module):
     def __init__(
         self,
+        bias: bool = True,
+        dilation: Union[int, tuple[int, int]] = 1,
         in_channels: int = 4,
+        kernel_size: Union[int, tuple[int, int]] = 3,
         out_channels: int = 8,
+        padding: Union[str, int, Collection[int]] = 0,
-        kernel_size: Union[int, tuple[int, int]] = 3,
         stride: Union[int, tuple[int, int]] = 2,
-        group: int = 1,
-        padding: Union[str, int, tuple[int]] = 0,
-        dilation: Union[int, tuple[int, int]] = 1,
+        groups: int = 1,
-        bias: bool = True,
     ):
```
`Conv1dModule`'s constructor now takes `groups` instead of the previously used `group`. This is inconsistent with `Conv2dModule`/`Conv3dModule` (which still use `group`) and breaks existing call sites (e.g. backends/nxp/tests/test_split_group_convolution.py passes `group=...`). Consider keeping `group` for consistency/backward compatibility (or accepting both `group` and `groups`) and updating all callers accordingly.
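One backward-compatible option suggested here can be sketched as follows; this is a hypothetical shape for the test module, not the PR's actual code, with the parameter list trimmed for brevity:

```python
import torch

class Conv1dModule(torch.nn.Module):
    # Sketch: accept both the new `groups` keyword and the legacy `group`
    # spelling so existing call sites (group=...) keep working.
    def __init__(self, in_channels=4, out_channels=8, kernel_size=3,
                 groups=1, group=None, bias=True):
        super().__init__()
        if group is not None:  # legacy keyword takes effect when supplied
            groups = group
        self.conv = torch.nn.Conv1d(
            in_channels, out_channels, kernel_size, groups=groups, bias=bias
        )

    def forward(self, x):
        return self.conv(x)

# Both spellings configure the same underlying Conv1d.
m_old = Conv1dModule(group=2)
m_new = Conv1dModule(groups=2)
assert m_old.conv.groups == m_new.conv.groups == 2
```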
```diff
@@ -33,13 +34,64 @@ def __init__(
             padding=padding,
             dilation=dilation,
             bias=bias,
-            groups=group,
+            groups=groups,
         )

     def forward(self, x):
         return self.conv(x)


+class ConvTranspose1dModule(torch.nn.Module):
+    def __init__(
+        self,
+        in_channels: int = 4,
+        out_channels: int = 8,
+        kernel_size: Union[int, tuple[int, int]] = 3,
+        stride: Union[int, tuple[int, int]] = 1,
+        padding: Union[int, tuple[int]] = 0,
+        output_padding: Union[int, tuple[int]] = 0,
+        groups: int = 1,
+        bias: bool = True,
+        dilation: Union[int, tuple[int, int]] = 1,
+    ):
```
The type hints for 1D convolution parameters use `tuple[int, int]` (e.g. for `kernel_size`, `stride`, `dilation`), but `Conv1d`/`ConvTranspose1d` accept either an int or a 1-element tuple. Using a 2-tuple type here is misleading; consider changing these annotations to `int | tuple[int]` (or `int | tuple[int, ...]` if you want to allow variable-length tuples) to match the actual API.
```python
        return node.meta["val"].shape if hasattr(node, "meta") else node.shape

    @staticmethod
    def _get_node_dtype(node: Node):
        return node.meta["val"].dtype if hasattr(node, "meta") else node.dtype
```
`_get_node_shape`/`_get_node_dtype` check `hasattr(node, "meta")`, but `torch.fx.Node` always has a `meta` dict; if `meta["val"]` is missing this will raise a `KeyError` and the fallback to `node.shape`/`node.dtype` will never be used. Consider checking for `"val" in node.meta` (or `node.meta.get("val")`) and only indexing when present.
Suggested change:
```diff
-        return node.meta["val"].shape if hasattr(node, "meta") else node.shape
+        meta_val = node.meta.get("val") if hasattr(node, "meta") else None
+        return meta_val.shape if meta_val is not None else node.shape

     @staticmethod
     def _get_node_dtype(node: Node):
-        return node.meta["val"].dtype if hasattr(node, "meta") else node.dtype
+        meta_val = node.meta.get("val") if hasattr(node, "meta") else None
+        return meta_val.dtype if meta_val is not None else node.dtype
```
```python
        # insert back the bias node argument (= None) if it was taken out earlier
        node_args = fake_node_args if has_b_node else fake_node_args + [None]
```
In `_create_some_conv_2d_node`, the local `node_args = fake_node_args if has_b_node else fake_node_args + [None]` is assigned but never used. This looks like leftover code from a prior version; removing it (or using it consistently) would reduce confusion when maintaining this pass.
Suggested change:
```diff
-        # insert back the bias node argument (= None) if it was taken out earlier
-        node_args = fake_node_args if has_b_node else fake_node_args + [None]
+        # scalar_args already preserves the original bias position when bias is None
```
Summary
Added support for `aten.conv_transpose1d` by moving the 1D-convolution lowering out of `convolution_converter` into the brand-new `convert_1d_conv_to_2d` ATen pass, and extending it.

Test plan
Tests can be run manually with:
```
pytest -c /dev/null backends/nxp/tests/
```

cc @robert-kalmar @JakeStevens @digantdesai @MartinPavella