
Inference with dictionary of pytorch tensors #147

Closed · Answered by jonathan-booth
jonathan-booth asked this question in Q&A

Loading the model and modifying the input has worked. I'm now trying to modify the output as well, which is proving difficult because the output is a Dict[str, Tensor]. Does FTorch work with models that have been scripted with torch.jit.script? I think that's the only way I can save my modified model. I do the following:

class ModifiedModel(torch.nn.Module):
    def __init__(self, original_model):
        super().__init__()
        self.original_model = original_model

    def forward(self, new_input):
        # Pre-process the input, run the wrapped model, then post-process.
        processed_input = self.change_input(new_input)
        old_output = self.original_model(processed_input)
        new_output = self.change_output(old_output)
        return new_output

    …
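For what it's worth, torch.jit.script does handle modules whose forward returns a Dict[str, Tensor], provided the return type is annotated. Since FTorch's Fortran interface exchanges plain tensors rather than Python dicts, one common workaround is to script a wrapper that unpacks the dict into a tuple of tensors. The sketch below is illustrative only — DictOutputModel, TupleWrapper, and the keys "mean" and "scaled" are hypothetical names, not taken from this thread:

```python
from typing import Dict

import torch


class DictOutputModel(torch.nn.Module):
    # Stand-in for the original model: returns a Dict[str, Tensor].
    # The return annotation is what lets TorchScript type the dict.
    def forward(self, x: torch.Tensor) -> Dict[str, torch.Tensor]:
        return {"mean": x.mean(dim=0), "scaled": x * 2.0}


class TupleWrapper(torch.nn.Module):
    # Unpacks the dict into a tuple of tensors, which maps onto a
    # flat list of output tensors on the Fortran side.
    def __init__(self, model: torch.nn.Module):
        super().__init__()
        self.model = model

    def forward(self, x: torch.Tensor):
        out = self.model(x)
        return out["mean"], out["scaled"]


# Script the wrapper (not just the inner model) and save it.
scripted = torch.jit.script(TupleWrapper(DictOutputModel()))
mean, scaled = scripted(torch.ones(4, 3))
```

The saved scripted wrapper then exposes only tensor inputs and outputs, so no dict handling is needed at the calling side.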
