
NXP backend: added support for aten.conv_transpose1d and refactored convolution_converter #19004

Draft

novak-vaclav wants to merge 1 commit into pytorch:main from
nxp-upstream:feature/EIEX-681-add-transposed-conv-1d-support

Conversation

@novak-vaclav (Contributor) commented Apr 20, 2026

Summary

Added support for aten.conv_transpose1d by moving the 1D-convolution lowering out of convolution_converter into a brand new convert_1d_conv_to_2d ATen pass, and extending it to cover transposed convolutions.

Test plan

Tests can be run manually with: pytest -c /dev/null backends/nxp/tests/

cc @robert-kalmar @JakeStevens @digantdesai @MartinPavella

Copilot AI review requested due to automatic review settings April 20, 2026 16:34
@pytorch-bot Bot commented Apr 20, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/19004

Note: Links to docs will display an error until the docs builds have been completed.

❌ 11 Awaiting Approval, 1 New Failure

As of commit 8568135 with merge base 063f9c9:

AWAITING APPROVAL - The following workflows need approval before CI can run:

NEW FAILURE - The following job has failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla Bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Apr 20, 2026
@novak-vaclav (Contributor, Author) commented:

@pytorchbot label "release notes: nxp"

@pytorch-bot pytorch-bot Bot added the release notes: nxp Changes to the NXP Neutron backend delegate label Apr 20, 2026
@novak-vaclav (Contributor, Author) commented:

@pytorchbot label "module: nxp"

@pytorch-bot pytorch-bot Bot added the module: nxp Issues related to NXP Neutron NPU delegation and code under backends/nxp/ label Apr 20, 2026
@novak-vaclav novak-vaclav marked this pull request as draft April 20, 2026 16:37
Copilot AI left a comment


Pull request overview

Adds NXP backend support for aten.conv_transpose1d by moving 1D-to-2D convolution lowering out of the IR converter and into a dedicated ATen graph rewrite pass, plus adjusts quantization handling for grouped transposed convolutions.

Changes:

  • Introduce ConvertConv1dToConv2dPass to rewrite aten.conv1d / aten.conv_transpose1d into 2D equivalents via unsqueeze / conv2d (or conv_transpose2d) / squeeze.
  • Remove 1D-convolution handling from the TFLite convolution_converter and enable the new pass in the default Neutron ATen pass pipeline.
  • Update quantizer patterns/utilities to correctly derive bias qparams for grouped conv_transpose2d and fix per-channel axis handling; add comprehensive tests for the new pass.
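The identity the rewrite relies on can be sketched in plain Python (a toy valid cross-correlation, not the PR's implementation): a 1D convolution equals a 2D convolution applied to the input with a dummy trailing spatial axis and a (k, 1) kernel, which is exactly what the unsqueeze/conv2d/squeeze sequence exploits.

```python
def conv1d(x, w):
    """Toy valid 1D cross-correlation over Python lists."""
    k = len(w)
    return [sum(x[i + j] * w[j] for j in range(k)) for i in range(len(x) - k + 1)]

def conv2d(x, w):
    """Toy valid 2D cross-correlation over nested lists."""
    kh, kw = len(w), len(w[0])
    h, ww = len(x), len(x[0])
    return [
        [sum(x[i + a][j + b] * w[a][b] for a in range(kh) for b in range(kw))
         for j in range(ww - kw + 1)]
        for i in range(h - kh + 1)
    ]

def conv1d_via_2d(x, w):
    # "unsqueeze": add a dummy width axis -> input (L, 1), kernel (k, 1)
    x2 = [[v] for v in x]
    w2 = [[v] for v in w]
    y2 = conv2d(x2, w2)            # shape (L - k + 1, 1)
    return [row[0] for row in y2]  # "squeeze" the dummy axis back out

x = [1.0, 2.0, 3.0, 4.0, 5.0]
w = [1.0, 0.0, -1.0]
assert conv1d(x, w) == conv1d_via_2d(x, w)  # both give [-2.0, -2.0, -2.0]
```

The real pass performs the same move on FX graph nodes (and the analogous one for conv_transpose1d), then delegates to the existing 2D converters.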

Reviewed changes

Copilot reviewed 9 out of 9 changed files in this pull request and generated 4 comments.

Summary per file:

  • backends/nxp/aten_passes/convert_1d_conv_to_2d.py: New ATen pass converting 1D conv/transposed conv to 2D form with shape/meta propagation.
  • backends/nxp/aten_passes/neutron_aten_pass_manager.py: Registers the new pass in the default Neutron ATen pass sequence.
  • backends/nxp/backend/ir/converter/node_converters/ops_converters/convolution_converter.py: Removes 1D-convolution conversion logic; now expects only 2D weight rank.
  • backends/nxp/quantizer/utils.py: Adds a helper to "pad"/repeat weight scales when deriving bias qparams for grouped transposed conv.
  • backends/nxp/quantizer/patterns.py: Drops 1D conv patterns; updates ConvTranspose2d quantization (bias qparams + correct per-channel axis).
  • backends/nxp/quantizer/neutron_quantizer.py: Removes the Conv1dPattern registration (since 1D conv is rewritten earlier).
  • backends/nxp/tests/test_convert_1d_conv_to_2d.py: New test suite covering the conv1d + conv_transpose1d rewrite and full pipeline delegation.
  • backends/nxp/tests/models.py: Updates the Conv1d test module API and adds ConvTranspose1d + runtime-weight conv1d models for testing.
  • backends/nxp/tests/ir/converter/node_converter/test_conv_converter.py: Removes prior conv1d conversion tests (superseded by the new pass tests).


Comment on lines 15 to 26

```diff
 class Conv1dModule(torch.nn.Module):
     def __init__(
         self,
+        bias: bool = True,
+        dilation: Union[int, tuple[int, int]] = 1,
         in_channels: int = 4,
+        kernel_size: Union[int, tuple[int, int]] = 3,
         out_channels: int = 8,
+        padding: Union[str, int, Collection[int]] = 0,
-        kernel_size: Union[int, tuple[int, int]] = 3,
         stride: Union[int, tuple[int, int]] = 2,
-        group: int = 1,
-        padding: Union[str, int, tuple[int]] = 0,
-        dilation: Union[int, tuple[int, int]] = 1,
+        groups: int = 1,
-        bias: bool = True,
     ):
```
Copilot AI commented Apr 20, 2026

Conv1dModule's constructor now takes groups instead of the previously used group. This is inconsistent with Conv2dModule/Conv3dModule (which still use group) and breaks existing call sites (e.g. backends/nxp/tests/test_split_group_convolution.py passes group=...). Consider keeping group for consistency/backward compatibility (or accepting both group and groups) and updating all callers accordingly.

Copilot uses AI. Check for mistakes.
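One backward-compatible option along the lines the review suggests can be sketched as below; the class name and defaults mirror the test module, but this is a hypothetical sketch, not code from the PR.

```python
# Hypothetical sketch: accept both the legacy `group` spelling and the
# new `groups` keyword, so existing call sites keep working.
class Conv1dModuleArgs:
    def __init__(self, in_channels=4, out_channels=8, groups=1, group=None):
        # prefer `groups`, but honour a caller still passing `group=`
        self.groups = group if group is not None else groups

assert Conv1dModuleArgs(group=2).groups == 2
assert Conv1dModuleArgs(groups=4).groups == 4
assert Conv1dModuleArgs().groups == 1
```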
Comment on lines 20 to +56

```diff
@@ -33,13 +34,64 @@ def __init__(
             padding=padding,
             dilation=dilation,
             bias=bias,
-            groups=group,
+            groups=groups,
         )

     def forward(self, x):
         return self.conv(x)


+class ConvTranspose1dModule(torch.nn.Module):
+    def __init__(
+        self,
+        in_channels: int = 4,
+        out_channels: int = 8,
+        kernel_size: Union[int, tuple[int, int]] = 3,
+        stride: Union[int, tuple[int, int]] = 1,
+        padding: Union[int, tuple[int]] = 0,
+        output_padding: Union[int, tuple[int]] = 0,
+        groups: int = 1,
+        bias: bool = True,
+        dilation: Union[int, tuple[int, int]] = 1,
+    ):
```
Copilot AI commented Apr 20, 2026

The type hints for 1D convolution parameters use tuple[int, int] (e.g. for kernel_size, stride, dilation), but Conv1d/ConvTranspose1d accept either an int or a 1-element tuple. Using a 2-tuple type here is misleading; consider changing these annotations to int | tuple[int] (or int | tuple[int, ...] if you want to allow variable length tuples) to match the actual API.

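The distinction the review draws can be made concrete with a small sketch: 1D conv parameters are an int or a 1-tuple, never a 2-tuple. `normalize_1d` is a hypothetical helper, named here only to illustrate the suggested `int | tuple[int]` shape; it is not part of the PR.

```python
from typing import Union

def normalize_1d(v: Union[int, tuple]) -> tuple:
    """Return a 1-tuple for an int; pass a tuple through unchanged."""
    return (v,) if isinstance(v, int) else v

assert normalize_1d(3) == (3,)     # int is expanded to a 1-tuple
assert normalize_1d((3,)) == (3,)  # 1-tuple passes through
```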
Comment on lines +67 to +71

```python
        return node.meta["val"].shape if hasattr(node, "meta") else node.shape

    @staticmethod
    def _get_node_dtype(node: Node):
        return node.meta["val"].dtype if hasattr(node, "meta") else node.dtype
```
Copilot AI commented Apr 20, 2026

_get_node_shape/_get_node_dtype check hasattr(node, "meta"), but torch.fx.Node always has a meta dict; if meta["val"] is missing this will raise a KeyError and the fallback to node.shape/node.dtype will never be used. Consider checking for "val" in node.meta (or node.meta.get("val")) and only indexing when present.

Suggested change

```diff
-        return node.meta["val"].shape if hasattr(node, "meta") else node.shape
+        meta_val = node.meta.get("val") if hasattr(node, "meta") else None
+        return meta_val.shape if meta_val is not None else node.shape

     @staticmethod
     def _get_node_dtype(node: Node):
-        return node.meta["val"].dtype if hasattr(node, "meta") else node.dtype
+        meta_val = node.meta.get("val") if hasattr(node, "meta") else None
+        return meta_val.dtype if meta_val is not None else node.dtype
```

Comment on lines +95 to +96

```python
        # insert back the bias node argument (= None) if it was taken out earlier
        node_args = fake_node_args if has_b_node else fake_node_args + [None]
```
Copilot AI commented Apr 20, 2026

In _create_some_conv_2d_node, the local node_args = fake_node_args if has_b_node else fake_node_args + [None] is assigned but never used. This looks like leftover code from a prior version; removing it (or using it consistently) would reduce confusion when maintaining this pass.

Suggested change

```diff
-        # insert back the bias node argument (= None) if it was taken out earlier
-        node_args = fake_node_args if has_b_node else fake_node_args + [None]
+        # scalar_args already preserves the original bias position when bias is None
```
@MartinPavella MartinPavella self-requested a review April 21, 2026 06:50

Labels

  • CLA Signed (managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed)
  • module: nxp (issues related to NXP Neutron NPU delegation and code under backends/nxp/)
  • release notes: nxp (changes to the NXP Neutron backend delegate)

2 participants