python.data-structure

dictionary

Note

Tags: python.data-structure

Support Level: SUPPORTED

Original source code:

import torch



def dictionary(x, y):
    """
    Dictionary structures are inlined and flattened during tracing.
    """
    elements = {}
    elements["x2"] = x * x
    y = y * elements["x2"]
    return {"y": y}

Result:

ExportedProgram:
    class GraphModule(torch.nn.Module):
        def forward(self, arg0_1: f32[3, 2], arg1_1: i64[]):
            #
            sym_size_int = torch.ops.aten.sym_size.int(arg0_1, 0)
            sym_size_int_1 = torch.ops.aten.sym_size.int(arg0_1, 1)
            eq = sym_size_int_1 == 2;  sym_size_int_1 = None
            scalar_tensor_default: f32[] = torch.ops.aten.scalar_tensor.default(eq);  eq = None
            _assert_async_msg = torch.ops.aten._assert_async.msg(scalar_tensor_default, 'Input arg0_1.shape[1] is specialized at 2');  scalar_tensor_default = None
            eq_1 = sym_size_int == 3;  sym_size_int = None
            scalar_tensor_default_1: f32[] = torch.ops.aten.scalar_tensor.default(eq_1);  eq_1 = None
            _assert_async_msg_1 = torch.ops.aten._assert_async.msg(scalar_tensor_default_1, 'Input arg0_1.shape[0] is specialized at 3');  scalar_tensor_default_1 = None
            mul_tensor: f32[3, 2] = torch.ops.aten.mul.Tensor(arg0_1, arg0_1);  arg0_1 = None
            mul_tensor_1: f32[3, 2] = torch.ops.aten.mul.Tensor(arg1_1, mul_tensor);  arg1_1 = mul_tensor = None
            return (mul_tensor_1,)

Graph Signature: ExportGraphSignature(parameters=[], buffers=[], user_inputs=['arg0_1', 'arg1_1'], user_outputs=['mul_tensor_1'], inputs_to_parameters={}, inputs_to_buffers={}, buffers_to_mutate={}, backward_signature=None, assertion_dep_token=None)
Symbol to range: {}
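
To see the flattening in action, the following sketch (not part of ExportDB) exports the function and prints the resulting program. It assumes a torch.export API that takes an nn.Module, and uses inputs matching the f32[3, 2] / i64[] shapes in the trace above; the DictionaryModule wrapper is hypothetical:

import torch

# Hypothetical nn.Module wrapper around the dictionary example above.
class DictionaryModule(torch.nn.Module):
    def forward(self, x, y):
        elements = {}
        elements["x2"] = x * x   # dict entries are just traced tensors
        y = y * elements["x2"]
        return {"y": y}          # the dict output is pytree-flattened

ep = torch.export.export(DictionaryModule(), (torch.randn(3, 2), torch.tensor(4)))
print(ep)  # the printed graph holds only tensor ops; no dict survives tracing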

fn_with_kwargs

Note

Tags: python.data-structure

Support Level: NOT_SUPPORTED_YET

Original source code:

import torch



def fn_with_kwargs(pos0, tuple0, *myargs, mykw0=None, **mykwargs):
    """
    Keyword arguments are not supported at the moment.
    """
    out = pos0
    for arg in tuple0:
        out *= arg
    for arg in myargs:
        out *= arg
    out *= mykw0
    out *= mykwargs["input0"] * mykwargs["input1"]
    return out

Result:

Unsupported: Kwargs to torch.export is not supported
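
Until keyword arguments are supported directly (newer torch.export releases accept a separate kwargs argument to export), one possible workaround is to freeze the keyword arguments into a positional-only wrapper before exporting. This is a sketch under that assumption; the KwargsFrozen wrapper and its argument layout are made up for illustration:

import torch

# Hypothetical wrapper: binds every keyword argument positionally so that
# torch.export only ever sees positional tensor inputs.
class KwargsFrozen(torch.nn.Module):
    def forward(self, pos0, t0, t1, extra0, kw0, input0, input1):
        # pos0 is cloned because fn_with_kwargs multiplies into it in place.
        return fn_with_kwargs(pos0.clone(), (t0, t1), extra0,
                              mykw0=kw0, input0=input0, input1=input1)

args = tuple(torch.randn(4) for _ in range(7))
ep = torch.export.export(KwargsFrozen(), args)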

list_contains

Note

Tags: python.assert, torch.dynamic-shape, python.data-structure

Support Level: SUPPORTED

Original source code:

import torch



def list_contains(x):
    """
    List containment can be checked against a dynamic shape or against constants.
    """
    assert x.size(-1) in [6, 2]
    assert x.size(0) not in [4, 5, 6]
    assert "monkey" not in ["cow", "pig"]
    return x + x

Result:

ExportedProgram:
    class GraphModule(torch.nn.Module):
        def forward(self, arg0_1: f32[3, 2]):
            #
            sym_size_int = torch.ops.aten.sym_size.int(arg0_1, 0)
            sym_size_int_1 = torch.ops.aten.sym_size.int(arg0_1, 1)
            eq = sym_size_int_1 == 2;  sym_size_int_1 = None
            scalar_tensor_default: f32[] = torch.ops.aten.scalar_tensor.default(eq);  eq = None
            _assert_async_msg = torch.ops.aten._assert_async.msg(scalar_tensor_default, 'Input arg0_1.shape[1] is specialized at 2');  scalar_tensor_default = None
            eq_1 = sym_size_int == 3;  sym_size_int = None
            scalar_tensor_default_1: f32[] = torch.ops.aten.scalar_tensor.default(eq_1);  eq_1 = None
            _assert_async_msg_1 = torch.ops.aten._assert_async.msg(scalar_tensor_default_1, 'Input arg0_1.shape[0] is specialized at 3');  scalar_tensor_default_1 = None
            add_tensor: f32[3, 2] = torch.ops.aten.add.Tensor(arg0_1, arg0_1);  arg0_1 = None
            return (add_tensor,)

Graph Signature: ExportGraphSignature(parameters=[], buffers=[], user_inputs=['arg0_1'], user_outputs=['add_tensor'], inputs_to_parameters={}, inputs_to_buffers={}, buffers_to_mutate={}, backward_signature=None, assertion_dep_token=None)
Symbol to range: {}
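
The asserts above are resolved while tracing against the example input's (specialized) sizes, which is why the exported graph contains only the add plus the standard shape-specialization checks. A minimal sketch, assuming an nn.Module-based torch.export API:

import torch

class ListContains(torch.nn.Module):
    def forward(self, x):
        # Both checks are evaluated at trace time: for a (3, 2) input,
        # 2 is in [6, 2] and 3 is not in [4, 5, 6].
        assert x.size(-1) in [6, 2]
        assert x.size(0) not in [4, 5, 6]
        return x + x

ep = torch.export.export(ListContains(), (torch.randn(3, 2),))
print(ep)  # only the add remains; the asserts were resolved statically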

list_unpack

Note

Tags: python.control-flow, python.data-structure

Support Level: SUPPORTED

Original source code:

from typing import List

import torch



def list_unpack(args: List[torch.Tensor]):
    """
    Lists are treated as static constructs, so unpacking is erased after tracing.
    """
    x, *y = args
    return x + y[0]

Result:

ExportedProgram:
    class GraphModule(torch.nn.Module):
        def forward(self, arg0_1: f32[3, 2], arg1_1: i64[], arg2_1: i64[]):
            #
            sym_size_int = torch.ops.aten.sym_size.int(arg0_1, 0)
            sym_size_int_1 = torch.ops.aten.sym_size.int(arg0_1, 1)
            eq = sym_size_int_1 == 2;  sym_size_int_1 = None
            scalar_tensor_default: f32[] = torch.ops.aten.scalar_tensor.default(eq);  eq = None
            _assert_async_msg = torch.ops.aten._assert_async.msg(scalar_tensor_default, 'Input arg0_1.shape[1] is specialized at 2');  scalar_tensor_default = None
            eq_1 = sym_size_int == 3;  sym_size_int = None
            scalar_tensor_default_1: f32[] = torch.ops.aten.scalar_tensor.default(eq_1);  eq_1 = None
            _assert_async_msg_1 = torch.ops.aten._assert_async.msg(scalar_tensor_default_1, 'Input arg0_1.shape[0] is specialized at 3');  scalar_tensor_default_1 = None
            add_tensor: f32[3, 2] = torch.ops.aten.add.Tensor(arg0_1, arg1_1);  arg0_1 = arg1_1 = None
            return (add_tensor,)

Graph Signature: ExportGraphSignature(parameters=[], buffers=[], user_inputs=['arg0_1', 'arg1_1', 'arg2_1'], user_outputs=['add_tensor'], inputs_to_parameters={}, inputs_to_buffers={}, buffers_to_mutate={}, backward_signature=None, assertion_dep_token=None)
Symbol to range: {}
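
Because the list is a static construct, each element becomes its own graph input (arg0_1, arg1_1, arg2_1 above) and the unpacking itself leaves no trace. A minimal sketch, assuming an nn.Module-based torch.export API and inputs matching the traced shapes:

from typing import List

import torch

class ListUnpack(torch.nn.Module):
    def forward(self, args: List[torch.Tensor]):
        x, *y = args     # resolved at trace time; nothing is emitted for it
        return x + y[0]

# The list is pytree-flattened into three separate graph inputs.
inputs = ([torch.randn(3, 2), torch.tensor(4), torch.tensor(5)],)
ep = torch.export.export(ListUnpack(), inputs)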
