diff --git a/README.md b/README.md index 976bc63c..c0649bde 100644 --- a/README.md +++ b/README.md @@ -19,6 +19,7 @@ requests for features and clarifications are welcome. ## :rocket: [Guidebook](https://mklab-iti.github.io/JGNN/) +## :dart: [Javadoc](https://mklab-iti.github.io/JGNN/docs/javadoc/) ## :notebook: Citation diff --git a/docs/index.html b/docs/index.html index fe2455a4..12cce141 100644 --- a/docs/index.html +++ b/docs/index.html @@ -196,7 +196,7 @@

1. Setup

The simplest way to set up JGNN is to download it as a JAR package from the project's releases and add it to a Java project's dependencies. However, those working with Maven - or Gradle can also add JGNN's latest release as a dependency from the JitPack + or Gradle can also add JGNN's latest nightly release as a dependency from the JitPack repository. Follow the link below for full instructions.
download JGNN

@@ -213,7 +213,7 @@

1. Setup

<dependency> <groupId>com.github.MKLab-ITI</groupId> <artifactId>JGNN</artifactId> - <version>v1.3.20-nightly</version> + <version>v1.3.24-nightly</version> </dependency> </dependencies> @@ -221,8 +221,9 @@
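For Gradle users, a minimal sketch of the equivalent dependency declaration is shown below. This assumes the standard JitPack repository URL and the same coordinates as the Maven snippet above; verify against the release page before use.

```groovy
// build.gradle -- sketch only; coordinates mirror the Maven snippet above
repositories {
    maven { url 'https://jitpack.io' } // JitPack serves the nightly releases
}
dependencies {
    implementation 'com.github.MKLab-ITI:JGNN:v1.3.24-nightly'
}
```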

1. Setup

2. Quickstart

-

Here we demonstrate usage of JGNN for node classification, that is, the inductive learning - task of inferring node labels given a graph's structure, node features, and a few node labels. +

Here we demonstrate usage of JGNN for node classification. This is a transductive learning + task where node labels are predicted given a graph's structure, node features, and a few known + labels in the graph. Classifying graphs is also supported, but it is a harder task to explain and set up. GNN architectures for the chosen node classification task are typically written as message-passing mechanisms; these diffuse node representations across edges, where

2. Quickstart

  • classify - Adds a softmax layer tailored to classification. This also silently declares an input named nodes that represents a list of node indices where the outputs should be computed.
  • autosize - Automatically sizes matrix and vector dimensions filled by ?. This requires an example input, and here we provide a list of node identifiers, which we also make dataless (they have only the correct dimensions without allocating memory). This method also checks for integrity errors in the declared architecture, such as computational paths that do not lead to an output.
  • - The abode methods support a method chain paradigm where the modelBuilder instance is returned by each of - its methods to access the next one. Below we use this builder to implement the Graph Convolutional Network (GCN) + JGNN promotes method chains, where the modelBuilder instance is returned by each of + its methods to access the next one. Below we use this programming pattern to implement the Graph Convolutional Network (GCN) architecture [Kipf and Welling, 2017]. - Details on the symbolic parts of definitions are presented later but, for the time being, we point to usage - of the matrix and vector builtins to inline declarations of learnable parameter for - given dimensions and regularization. - The builder stores internally a constructed model, obtained through modelBuilder.getModel(). + Details on the symbolic parts of definitions are presented later but, for the time being, we point to + the matrix and vector Neuralang functions. These inline the declaration of learnable parameters with + given dimensions and regularization. The builder stores internally a constructed model, and the latter + can be retrieved through modelBuilder.getModel().
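Independently of the builder, the message-passing computation that GCN layers perform can be illustrated with a plain-Java sketch. This is not JGNN API but a hypothetical stand-alone class showing one propagation step h' = relu(Anorm h W), where Anorm is a normalized adjacency matrix:

```java
public class GcnStep {
    // One message-passing step: h' = relu(Anorm * h * W). Plain-Java
    // illustration only; in JGNN the same computation is declared
    // symbolically through a model builder.
    static double[][] propagate(double[][] anorm, double[][] h, double[][] w) {
        double[][] ah = multiply(anorm, h);   // diffuse features across edges
        double[][] ahw = multiply(ah, w);     // linear transformation
        for (double[] row : ahw)
            for (int j = 0; j < row.length; j++)
                row[j] = Math.max(0, row[j]); // relu activation
        return ahw;
    }

    // Naive dense matrix multiplication (rows of a times columns of b).
    static double[][] multiply(double[][] a, double[][] b) {
        double[][] c = new double[a.length][b[0].length];
        for (int i = 0; i < a.length; i++)
            for (int k = 0; k < b.length; k++)
                for (int j = 0; j < b[0].length; j++)
                    c[i][j] += a[i][k] * b[k][j];
        return c;
    }
}
```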

    long numSamples = dataset.samples().getSlice().size();
    @@ -294,13 +295,14 @@ 

    2. Quickstart

    accept a method chain notation to set their parameters, like the number of epochs, patience for early stopping, the employed optimizer, and loss functions. An example is presented below, where Adam optimization with learning rate 0.01 is performed, and a verbose - variation of a validation loss keeps track of training progress. To run a full training process, + variation of a validation loss prints the training progress. To run a full training process, the defined strategy is passed to the model alongside input data, corresponding output data, as well as training and validation slices.
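The role of the patience parameter in early stopping can be sketched independently of JGNN. The hypothetical helper below (not part of the library) returns the epoch at which training halts once the validation loss has stopped improving for a fixed number of consecutive epochs:

```java
import java.util.List;

public class EarlyStopping {
    // Returns the 0-based epoch at which training would stop: the first
    // epoch where the validation loss has not improved on the best value
    // for `patience` consecutive epochs, or the last epoch otherwise.
    static int stoppingEpoch(List<Double> valLosses, int patience) {
        double best = Double.POSITIVE_INFINITY;
        int sinceBest = 0;
        for (int epoch = 0; epoch < valLosses.size(); epoch++) {
            if (valLosses.get(epoch) < best) {
                best = valLosses.get(epoch); // new best: reset the counter
                sinceBest = 0;
            } else if (++sinceBest >= patience) {
                return epoch; // patience exhausted
            }
        }
        return valLosses.size() - 1; // ran out of epochs without stopping
    }
}
```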

    -

    Notice how, before training starts, a parameter initializer is also applied on the model for a cold +

    Notice how, before training starts, a parameter initializer is also applied to the model for cold start. Selecting an initializer is not part of training strategies to signify its model-dependent nature; dense layers should maintain the expected - input variances in the output before the first epoch, and therefore the initializer depends + input variances in the output before the first epoch, and therefore the initializer depends + on the type of activation functions. Moreover, the graph's adjacency matrix and node features are already declared as constants by the FastBuilder constructor, as node classification takes place on the same graph with fully known node features. Instead, what is considered as inputs and outputs in this case @@ -354,17 +356,50 @@
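As an illustration of variance preservation (a sketch of the general principle, not JGNN's Initializer API): Xavier/Glorot uniform initialization draws dense-layer weights from [-b, b] with b = sqrt(6 / (fanIn + fanOut)), so that output variance roughly matches input variance before the first epoch:

```java
public class XavierBound {
    // Bound of the Xavier/Glorot uniform distribution for a dense layer
    // with the given fan-in and fan-out; weights are sampled from [-b, b].
    static double bound(int fanIn, int fanOut) {
        return Math.sqrt(6.0 / (fanIn + fanOut));
    }
}
```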

    2. Quickstart

    3. GNN Builders

    -

    We already touched on the subject of GNN architecture builders when the Neuralang language - was first introduced in the introductory demonstration. To recap, there are different kinds of - builders, some of which do not implement all features of the language in favor of - simplifying parts of architecture definitions with hard-coded Java implementations. - Here we cover the base GNNBuilder class that can only parse simple expressions, - the FastBuilder class that introduces node classification boilerplate code, - and the Neuralang class that parses the full language, including function definitions - and handling configurations through expressions.

    +

    We already touched on the subject of GNN model builders in the quickstart section, + where we saw one of them in action. However, there exist different kinds of + builders that offer different kinds of convenience during model definition. + All builders support method chain notation. Currently implemented builders are: +

      +
    • GNNBuilder - Parses simple Neuralang expressions. Prefer using FastBuilder, + which offers a superset of this one's functionalities.
    • +
    • FastBuilder - Extends the GNNBuilder class with methods that inject + boilerplate code often used in node classification. Use this builder if you want to keep track + of the whole model definition in one place within Java code.
    • +
    • Neuralang - Extends the GNNBuilder class so that it can parse all aspects + of the Neuralang language, especially the expressions responsible for handling configuration. + Use this builder to maintain model definitions in one place (e.g., packed in one String + variable, or read from one file) to avoid weaving symbolic expressions in Java code.
    • +

    3.1. ModelBuilder

    - [Under construction] +

    This is the base model builder class that is extended by others. We describe it separately + because it offers a wide breadth of functionalities that other builders inherit. Before looking + at how to use it, we need to tackle the concept of models. These are effectively collections + of NNOperation instances, each of which is an operation with specified inputs and outputs. + Models can be manually written using Java code only. As an example, a simple model computing the expression + y=log(2*x+1) is implemented below. This is still readable for simple + expressions, but very cumbersome to handle once actual architectures are created - hence the need for + model builders. This guidebook does not go into details for the available types of NNOperation, + as they are rarely used directly in practice. To read about them, please visit JGNN's Javadoc. +

    + +
    Variable x = new Variable();
    +Constant c1 = new Constant(Tensor.fromDouble(1)); // holds the constant "1"
    +Constant c2 = new Constant(Tensor.fromDouble(2)); // holds the constant "2"
    +NNOperation mult = new Multiply()
    +	.addInput(x)
    +	.addInput(c2);
    +NNOperation add = new Add()
    +	.addInput(mult)
    +	.addInput(c1);
    +NNOperation y = new Log()
    +	.addInput(add);
    +Model model = new Model()
    +	.addInput(x)
    +	.addOutput(y);
    +	
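For reference, the same expression can be evaluated directly. The plain-Java sketch below (a hypothetical class, independent of JGNN) computes the forward value of y = log(2*x+1) and its derivative dy/dx = 2/(2*x+1), which the model's forward and backpropagation machinery produces automatically:

```java
public class ExpressionCheck {
    // Forward pass of y = log(2*x + 1).
    static double forward(double x) {
        return Math.log(2 * x + 1);
    }

    // Derivative dy/dx = 2 / (2*x + 1), by the chain rule.
    static double backward(double x) {
        return 2.0 / (2 * x + 1);
    }
}
```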
    + diff --git a/docs/javadoc/allclasses-index.html b/docs/javadoc/allclasses-index.html new file mode 100644 index 00000000..6e1bcb73 --- /dev/null +++ b/docs/javadoc/allclasses-index.html @@ -0,0 +1,472 @@ + + + + +All Classes and Interfaces + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    All Classes and Interfaces

    +
    +
    +
    +
    +
    +
    Class
    +
    Description
    + +
    +
    Accesses a column of a Matrix as if it were a dense Tensor.
    +
    + +
    +
    Accesses a row of a Matrix as if it were a dense Tensor.
    +
    + +
    +
    Wraps a base Tensor by traversing only its elements in a specified range (from begin, up to end-1).
    +
    + +
    +
    Implements an accuracy Loss of row-by-row comparisons.
    +
    + +
    +
    This class implements an Adam Optimizer as explained in the paper: + Kingma, Diederik P., and Jimmy Ba.
    +
    + +
    +
    Implements a NNOperation that adds its two inputs.
    +
    + +
    +
    Implements a NNOperation that creates a version of adjacency matrices + with column-wise attention involving neighbor similarity.
    +
    + +
    +
    Wraps an Optimizer by accumulating derivatives and calling + Optimizer.update(Tensor, Tensor) with the average derivative + after a fixed number of accumulations.
    +
    + +
    +
    Implements a binary cross-entropy Loss.
    + For more than one output dimensions use CategoricalCrossEntropy
    +
    + +
    +
    Implements a categorical cross-entropy Loss.
    + For binary classification of one output use BinaryCrossEntropy.
    +
    + +
    +
    Downloads and constructs the Citeseer node classification Dataset.
    +
    + +
    +
    Defines a matrix whose columns are all a copy of a Tensor.
    +
    + +
    +
    Implements a NNOperation that performs the operation 1-x for its simple input x.
    +
    + +
    +
    Implements a NNOperation that concatenates its two matrix inputs.
    +
    + +
    +
    Implements a NNOperation that holds a constant tensor.
    +
    + +
    +
    Downloads and constructs the Cora node classification Dataset.
    +
    + +
    +
    This class provides the backbone with which to define datasets.
    +
    + +
    +
    Implements a dense Matrix where all elements are stored in memory.
    +
    + +
    +
    This class provides a dense Tensor that wraps an array of doubles.
    +
    + +
    +
    Implements a square matrix whose diagonal elements are determined by the corresponding values of + an underlying tensor and off-diagonal elements are zero.
    +
    + +
    +
    This interface abstracts a probability distribution + that can be passed to Tensor.setToRandom(Distribution) + for random tensor initialization.
    +
    + +
    +
    Implements a NNOperation that converts its first argument to a ColumnRepetition matrix + with a number of columns equal to the second argument.
    +
    + +
     
    + +
     
    + +
    +
    Implements a NNOperation that performs an exponential transformation of its single input.
    +
    + +
    +
    Extends the capabilities of LayeredBuilder to use + for node classification.
    +
    + +
     
    + +
    +
    Implements a NNOperation that lists the first element of the 2D matrix element iterator.
    +
    + +
    +
    Implements a NNOperation that performs the equivalent of TensorFlow's gather operation.
    +
    + +
    +
    Implements a gradient descent Optimizer.
    +
    + +
    +
    Converts back-and-forth between objects and unique ids.
    +
    + +
    +
    Implements a NNOperation that just transfers its single input.
    +
    + +
    +
    This class defines an abstract interface for applying initializers to models.
    +
    + + + + + +
    +
    Implements a NNOperation that performs a L1 transformation of its single input + by row or column.
    +
    + +
    +
    Extends the capabilities of the ModelBuilder + with the ability to define multilayer (e.g.
    +
    + +
    +
    Implements a NNOperation that outputs the natural logarithm of its single input.
    +
    + +
    +
    Provides computation and (partial) derivation of popular activation functions + and cross-entropy loss functions.
    +
    + +
    +
    This class provides an abstract implementation of loss functions + to be used during Model training.
    +
    + +
    +
    Implements a NNOperation that performs a leaky relu operation, where the first argument is a tensor on which + it is applied and the second one should be a tensor wrapping a double value (consider initializing this as a + Constant holding a tensor generated with Tensor.fromDouble(double)) where + the wrapped value indicates the negative region's slope.
    +
    + +
     
    + +
     
    + +
    +
    Implements a NNOperation that multiplies its two matrix inputs.
    +
    + +
    +
    This class provides an abstract implementation of Matrix functionalities.
    +
    + +
    +
    Implements a NNOperation that performs row-wise or column-wise + maximum reduction on vector tensors or matrices.
    +
    + +
    +
    Implements a NNOperation that performs row-wise or column-wise + mean reduction on vector tensors or matrices.
    +
    + +
    +
    A memory management system for thread-safe allocation and release of arrays of doubles.
    +
    + +
     
    + +
    +
    This class is a way to organize NNOperation trees into trainable machine + learning models.
    +
    + +
    +
    This class and subclasses can be used to create Model instances + by automatically creating and managing NNOperation instances based on + textual descriptions.
    +
    + +
    +
    This is a helper class that automates the definition of training processes of Model instances + by defining the number of epochs, loss functions, number of batches and the ability to use ThreadPool + for parallelized batch computations.
    +
    + +
    +
    Implements a NNOperation that multiplies its two inputs element-by-element.
    +
    + +
     
    + +
    +
    Implements a NNOperation that performs an exponential transformation of + its single input, but only on the non-zero elements.
    +
    + +
    +
    This class defines an abstract neural network operation with forward and + backpropagation capabilities.
    +
    + +
    +
    Implements a Normal Distribution of given mean and standard deviation.
    +
    + +
    +
    Provides an interface for training tensors.
    +
    + +
    +
    Implements a NNOperation that holds and returns a parameter tensor.
    +
    + +
     
    + +
    +
    Downloads and constructs the Pubmed node classification Dataset.
    +
    + +
    +
    Implements an iterator that traverses a range (similar to Python's range(min, max) method).
    +
    + +
    +
    Implements an iterator that traverses a two-dimensional range (min, max) x (min2, max2).
    +
    + +
     
    + +
    +
    Wraps an Optimizer by applying the derivative of L2 loss + on every tensor during Optimizer.update(Tensor, Tensor).
    +
    + +
    +
    Implements a NNOperation that performs a relu transformation of its single input first introduced by + Hahnloser, Richard HR, Rahul Sarpeshkar, Misha A.
    +
    + +
    +
    Implements a NNOperation that converts its first argument to a ColumnRepetition matrix + with a number of columns equal to the second argument.
    +
    + +
    +
    Implements a Matrix whose elements are all equal.
    +
    + +
    +
    This class provides a Tensor whose elements are all equal.
    +
    + +
    +
    Implements a NNOperation that reshapes a matrix.
    +
    + +
    +
    Defines a matrix whose rows are all a copy of a Tensor.
    +
    + +
    +
    Implements a NNOperation that performs a sigmoid transformation of its single input.
    +
    + +
    +
    This class provides an interface with which to define data slices, + for instance to sample labels.
    +
    + +
    +
    Implements a NNOperation that performs row-wise or column-wise + softmax on vector tensors or matrices.
    +
    + +
     
    + +
     
    + +
    +
    A sparse Matrix that allocates memory only for non-zero elements.
    +
    + +
    Deprecated. +
    Under development.
    +
    + +
    +
    This class provides a sparse Tensor with many zero elements.
    +
    + +
    +
    Implements a NNOperation that performs row-wise or column-wise + sum reduction on vector tensors or matrices.
    +
    + +
    +
    Implements a NNOperation that performs a tanh transformation of its single input.
    +
    + +
    +
    This class provides a native java implementation of Tensor functionalities.
    +
    + +
    +
    This class provides thread execution pool utilities while keeping track of thread + identifiers for use by thread-specific NNOperation.
    +
    + +
    +
    Implements a NNOperation that lists the second element of the 2D matrix element iterator.
    +
    + +
    +
    Implements a NNOperation that performs matrix transposition.
    +
    + +
    +
    Generates a transposed version of a base matrix, with which it shares elements.
    +
    + +
    +
    Implements a Uniform Distribution of given bounds.
    +
    + +
    +
    Implements a NNOperation that represents Model inputs.
    +
    + +
    +
    This class describes a broad class of Initializer strategies, in which + dense neural layer initialization is controlled so that variance is mostly preserved from + inputs to outputs to avoid vanishing or exploding gradients in the first training + runs.
    +
    + +
    +
    Implements a dense Matrix where all elements are stored in memory.
    +
    + +
    +
    This class provides a dense Tensor that wraps an array of doubles.
    +
    + +
    +
    Implements a Loss that wraps other losses and outputs their value during training to an output stream + (to System.out by default).
    +
    + +
    +
    Wraps a list of tensors into a matrix with the tensors as columns.
    +
    + +
    +
    Wraps a list of tensors into a matrix with the tensors as rows.
    +
    + + + + +
    +
    +
    +
    +
    +
    + + diff --git a/docs/javadoc/allpackages-index.html b/docs/javadoc/allpackages-index.html new file mode 100644 index 00000000..b184a7fe --- /dev/null +++ b/docs/javadoc/allpackages-index.html @@ -0,0 +1,100 @@ + + + + +All Packages + + + + + + + + + + + + + + + + + + diff --git a/docs/javadoc/deprecated-list.html b/docs/javadoc/deprecated-list.html new file mode 100644 index 00000000..9504a209 --- /dev/null +++ b/docs/javadoc/deprecated-list.html @@ -0,0 +1,126 @@ + + + + +Deprecated List + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Deprecated API

    +

    Contents

    + +
    + + + +
    +
    +
    + + diff --git a/docs/javadoc/element-list b/docs/javadoc/element-list new file mode 100644 index 00000000..8783cac7 --- /dev/null +++ b/docs/javadoc/element-list @@ -0,0 +1,18 @@ +mklab.JGNN.adhoc +mklab.JGNN.adhoc.datasets +mklab.JGNN.adhoc.parsers +mklab.JGNN.core +mklab.JGNN.core.distribution +mklab.JGNN.core.empy +mklab.JGNN.core.matrix +mklab.JGNN.core.tensor +mklab.JGNN.core.util +mklab.JGNN.nn +mklab.JGNN.nn.activations +mklab.JGNN.nn.initializers +mklab.JGNN.nn.inputs +mklab.JGNN.nn.loss +mklab.JGNN.nn.loss.report +mklab.JGNN.nn.operations +mklab.JGNN.nn.optimizers +mklab.JGNN.nn.pooling diff --git a/docs/javadoc/help-doc.html b/docs/javadoc/help-doc.html new file mode 100644 index 00000000..bdd7d91b --- /dev/null +++ b/docs/javadoc/help-doc.html @@ -0,0 +1,188 @@ + + + + +API Help + + + + + + + + + + + + + + + +
    + +
    +
    +

    JavaDoc Help

    + +
    +
    +

    Navigation

    +Starting from the Overview page, you can browse the documentation using the links in each page, and in the navigation bar at the top of each page. The Index and Search box allow you to navigate to specific declarations and summary pages, including: All Packages, All Classes and Interfaces + +
    +
    +
    +

    Kinds of Pages

    +The following sections describe the different kinds of pages in this collection. +
    +

    Overview

    +

    The Overview page is the front page of this API document and provides a list of all packages with a summary for each. This page can also contain an overall description of the set of packages.

    +
    +
    +

    Package

    +

    Each package has a page that contains a list of its classes and interfaces, with a summary for each. These pages may contain the following categories:

    +
      +
    • Interfaces
    • +
    • Classes
    • +
    • Enum Classes
    • +
    • Exceptions
    • +
    • Errors
    • +
    • Annotation Interfaces
    • +
    +
    +
    +

    Class or Interface

    +

    Each class, interface, nested class and nested interface has its own separate page. Each of these pages has three sections consisting of a declaration and description, member summary tables, and detailed member descriptions. Entries in each of these sections are omitted if they are empty or not applicable.

    +
      +
    • Class Inheritance Diagram
    • +
    • Direct Subclasses
    • +
    • All Known Subinterfaces
    • +
    • All Known Implementing Classes
    • +
    • Class or Interface Declaration
    • +
    • Class or Interface Description
    • +
    +
    +
      +
    • Nested Class Summary
    • +
    • Enum Constant Summary
    • +
    • Field Summary
    • +
    • Property Summary
    • +
    • Constructor Summary
    • +
    • Method Summary
    • +
    • Required Element Summary
    • +
    • Optional Element Summary
    • +
    +
    +
      +
    • Enum Constant Details
    • +
    • Field Details
    • +
    • Property Details
    • +
    • Constructor Details
    • +
    • Method Details
    • +
    • Element Details
    • +
    +

    Note: Annotation interfaces have required and optional elements, but not methods. Only enum classes have enum constants. The components of a record class are displayed as part of the declaration of the record class. Properties are a feature of JavaFX.

    +

    The summary entries are alphabetical, while the detailed descriptions are in the order they appear in the source code. This preserves the logical groupings established by the programmer.

    +
    +
    +

    Other Files

    +

    Packages and modules may contain pages with additional information related to the declarations nearby.

    +
    +
    +

    Use

    +

    Each documented package, class and interface has its own Use page. This page describes what packages, classes, methods, constructors and fields use any part of the given class or package. Given a class or interface A, its Use page includes subclasses of A, fields declared as A, methods that return A, and methods and constructors with parameters of type A. You can access this page by first going to the package, class or interface, then clicking on the USE link in the navigation bar.

    +
    +
    +

    Tree (Class Hierarchy)

    +

    There is a Class Hierarchy page for all packages, plus a hierarchy for each package. Each hierarchy page contains a list of classes and a list of interfaces. Classes are organized by inheritance structure starting with java.lang.Object. Interfaces do not inherit from java.lang.Object.

    +
      +
    • When viewing the Overview page, clicking on TREE displays the hierarchy for all packages.
    • +
    • When viewing a particular package, class or interface page, clicking on TREE displays the hierarchy for only that package.
    • +
    +
    +
    +

    Deprecated API

    +

    The Deprecated API page lists all of the API that have been deprecated. A deprecated API is not recommended for use, generally due to shortcomings, and a replacement API is usually given. Deprecated APIs may be removed in future implementations.

    +
    +
    +

    All Packages

    +

    The All Packages page contains an alphabetic index of all packages contained in the documentation.

    +
    +
    +

    All Classes and Interfaces

    +

    The All Classes and Interfaces page contains an alphabetic index of all classes and interfaces contained in the documentation, including annotation interfaces, enum classes, and record classes.

    +
    +
    +

    Index

    +

    The Index contains an alphabetic index of all classes, interfaces, constructors, methods, and fields in the documentation, as well as summary pages such as All Packages, All Classes and Interfaces.

    +
    +
    +
    +This help file applies to API documentation generated by the standard doclet.
    +
    +
    + + diff --git a/docs/javadoc/index-files/index-1.html b/docs/javadoc/index-files/index-1.html new file mode 100644 index 00000000..edd5fcbc --- /dev/null +++ b/docs/javadoc/index-files/index-1.html @@ -0,0 +1,287 @@ + + + + +A-Index + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Index

    +
    +A B C D E F G H I K L M N O P R S T U V W X Z 
    All Classes and Interfaces|All Packages +

    A

    +
    +
    abs() - Method in class mklab.JGNN.core.Tensor
    +
    +
    Computes the absolute value of tensor elements.
    +
    +
    abs() - Method in class mklab.JGNN.core.tensor.DenseTensor
    +
     
    +
    accessCol(long) - Method in class mklab.JGNN.core.Matrix
    +
    +
    Retrieves the given column as a tensor.
    +
    +
    accessCol(long) - Method in class mklab.JGNN.core.matrix.WrapCols
    +
     
    +
    AccessCol - Class in mklab.JGNN.core.matrix
    +
    +
    Accesses a column of a Matrix as if it were a dense Tensor.
    +
    +
    AccessCol(Matrix, long) - Constructor for class mklab.JGNN.core.matrix.AccessCol
    +
    +
    Instantiates a see-through access of a matrix column.
    +
    +
    accessColumns() - Method in class mklab.JGNN.core.Matrix
    +
    +
    Organizes specific matrix columns to a list of tensors that share entries.
    +
    +
    accessColumns(long...) - Method in class mklab.JGNN.core.Matrix
    +
    +
    Organizes specific matrix columns to a list of tensors that share entries.
    +
    +
    accessColumns(Iterable<Long>) - Method in class mklab.JGNN.core.Matrix
    +
    +
    Organizes some matrix columns to a list of tensors that share entries.
    +
    +
    accessColumns(Tensor) - Method in class mklab.JGNN.core.Matrix
    +
    +
    Organizes matrix columns to a list of tensors that share entries.
    +
    +
    accessDim(long, String) - Method in class mklab.JGNN.core.Matrix
    +
    +
    Retrieves either the given row or column as a tensor.
    +
    +
    accessRow(long) - Method in class mklab.JGNN.core.Matrix
    +
    +
    Retrieves the given row as a tensor.
    +
    +
    accessRow(long) - Method in class mklab.JGNN.core.matrix.WrapRows
    +
     
    +
    AccessRow - Class in mklab.JGNN.core.matrix
    +
    +
    Accesses a row of a Matrix as if it were a dense Tensor.
    +
    +
    AccessRow(Matrix, long) - Constructor for class mklab.JGNN.core.matrix.AccessRow
    +
    +
    Instantiates a see-through access of a matrix row.
    +
    +
    accessRows() - Method in class mklab.JGNN.core.Matrix
    +
    +
    Organizes matrix rows to a list of tensors that share entries.
    +
    +
    accessRows(long...) - Method in class mklab.JGNN.core.Matrix
    +
    +
    Organizes specific matrix rows to a list of tensors that share entries.
    +
    +
    accessRows(Iterable<Long>) - Method in class mklab.JGNN.core.Matrix
    +
    +
    Organizes some matrix rows to a list of tensors that share entries.
    +
    +
    accessRows(Tensor) - Method in class mklab.JGNN.core.Matrix
    +
    +
    Organizes specific matrix rows to a list of tensors that share entries.
    +
    +
    accessSubtensor(long) - Method in class mklab.JGNN.core.Tensor
    +
    +
    Wraps a range of elements within a tensor + without allocating memory anew.
    +
    +
    accessSubtensor(long, long) - Method in class mklab.JGNN.core.Tensor
    +
    +
    Wraps a range of elements within a tensor + without allocating memory anew.
    +
    +
    AccessSubtensor - Class in mklab.JGNN.core.tensor
    +
    +
    Wraps a base Tensor by traversing only its elements in a specified range (from begin, up to end-1).
    +
    +
    AccessSubtensor(Tensor, long) - Constructor for class mklab.JGNN.core.tensor.AccessSubtensor
    +
    +
    Instantiates a see-through access of tensor elements.
    +
    +
    AccessSubtensor(Tensor, long, long) - Constructor for class mklab.JGNN.core.tensor.AccessSubtensor
    +
    +
    Instantiates a see-through access of tensor elements.
    +
    +
    Accuracy - Class in mklab.JGNN.nn.loss
    +
    +
    Implements an accuracy Loss of row-by-row comparisons.
    +
    +
    Accuracy() - Constructor for class mklab.JGNN.nn.loss.Accuracy
    +
    +
    Instantiates a row-by-row Accuracy loss.
    +
    +
    Adam - Class in mklab.JGNN.nn.optimizers
    +
    +
    This class implements an Adam Optimizer as explained in the paper: + Kingma, Diederik P., and Jimmy Ba.
    +
    +
    Adam() - Constructor for class mklab.JGNN.nn.optimizers.Adam
    +
    +
    Initializes an NDAdam instance of an Adam optimizer with the default parameters recommended by the papers.
    +
    +
    Adam(boolean, double) - Constructor for class mklab.JGNN.nn.optimizers.Adam
    +
    +
    Initializes an Adam optimizer with the default parameters recommended by the papers + but allows for the specification of the learning rate and whether NDAdam or simple Adam + is used.
    +
    +
    Adam(boolean, double, double, double) - Constructor for class mklab.JGNN.nn.optimizers.Adam
    +
    +
    Initializes an NDAdam instance of an Adam optimizer with custom parameters.
    +
    +
    Adam(boolean, double, double, double, double) - Constructor for class mklab.JGNN.nn.optimizers.Adam
    +
     
    +
    Adam(double) - Constructor for class mklab.JGNN.nn.optimizers.Adam
    +
    +
    Initializes an NDAdam instance of an Adam optimizer with the default parameters recommended by the papers + but allows for the specification of the learning rate.
    +
    +
    add(double) - Method in class mklab.JGNN.core.Tensor
    +
     
    +
    add(double) - Method in class mklab.JGNN.core.tensor.DenseTensor
    +
     
    +
    add(Tensor) - Method in class mklab.JGNN.core.Tensor
    +
     
    +
    add(Tensor) - Method in class mklab.JGNN.core.tensor.DenseTensor
    +
     
    +
    add(Tensor) - Method in class mklab.JGNN.core.tensor.VectorizedTensor
    +
     
    +
    Add - Class in mklab.JGNN.nn.operations
    +
    +
    Implements a NNOperation that adds its two inputs.
    +
    +
    Add() - Constructor for class mklab.JGNN.nn.operations.Add
    +
     
    +
    addInput(Variable) - Method in class mklab.JGNN.nn.Model
    +
    +
    Adds to the model's inputs the provided Variable.
    +
    +
    addInput(NNOperation) - Method in class mklab.JGNN.nn.inputs.Parameter
    +
     
    +
    addInput(NNOperation) - Method in class mklab.JGNN.nn.NNOperation
    +
     
    +
    addOutput(NNOperation) - Method in class mklab.JGNN.nn.Model
    +
    +
    Adds to the model's output the output of the provided operation.
    +
    +
    aggregate(LSTM) - Method in class mklab.JGNN.nn.operations.LSTM
    +
     
    +
    allocate(int, Object) - Static method in class mklab.JGNN.core.Memory
    +
     
    +
    apply(Model) - Method in class mklab.JGNN.nn.Initializer
    +
    +
    Applies the initializer to a given model's parameters.
    +
    +
    apply(Model) - Method in class mklab.JGNN.nn.initializers.VariancePreservingInitializer
    +
     
    +
    argmax() - Method in class mklab.JGNN.core.Tensor
    +
    +
    Computes the position of the maximum tensor element.
    +
    +
    argmin() - Method in class mklab.JGNN.core.Tensor
    +
    +
    Computes the position of the minimum tensor element.
    +
    +
    asColumn() - Method in class mklab.JGNN.core.Tensor
    +
    +
    Accesses the tensor through a single-column matrix with the tensor as the only column.
    +
    +
    asRow() - Method in class mklab.JGNN.core.Tensor
    +
    +
    Accesses the tensor through a single-row matrix with the tensor as the only row.
    +
    +
    assertBackwardValidity() - Method in class mklab.JGNN.adhoc.ModelBuilder
    +
    +
    Asserts that all components parsed into a call graph with + ModelBuilder.operation(String) are eventually used by at least one ModelBuilder.out(String) + component.
    +
    +
    assertFinite() - Method in class mklab.JGNN.core.Tensor
    +
    +
    Asserts that the tensor holds only finite values.
    +
    +
    assertMatching(Tensor) - Method in class mklab.JGNN.core.Tensor
    +
    +
    Asserts that the tensor's dimensions match with another tensor.
    +
    +
    assertSize(long) - Method in class mklab.JGNN.core.Tensor
    +
    +
    Asserts that the tensor's Tensor.size() matches the given size.
    +
    +
    assign(Tensor) - Method in class mklab.JGNN.core.Tensor
    +
    +
    Performs a sparse assignment.
    +
    +
    asTensor() - Method in class mklab.JGNN.core.Slice
    +
    +
    Creates a dense tensor holding the slice's identifiers.
    +
    +
    asTransposed() - Method in class mklab.JGNN.core.Matrix
    +
    +
    Creates a transposed version of the matrix that accesses the same elements (thus, editing one + edits the other) without allocating additional memory.
    +
    +
    asTransposed() - Method in class mklab.JGNN.core.matrix.TransposedMatrix
    +
     
    +
    Attention - Class in mklab.JGNN.nn.operations
    +
    +
    Implements a NNOperation that creates a version of adjacency matrices + with column-wise attention involving neighbor similarity.
    +
    +
    Attention() - Constructor for class mklab.JGNN.nn.operations.Attention
    +
     
    +
    autosize(List<Tensor>) - Method in class mklab.JGNN.adhoc.ModelBuilder
    +
     
    +
    autosize(Tensor...) - Method in class mklab.JGNN.adhoc.ModelBuilder
    +
     
    +
    +A B C D E F G H I K L M N O P R S T U V W X Z 
    All Classes and Interfaces|All Packages
    +
    +
    + + diff --git a/docs/javadoc/index-files/index-10.html b/docs/javadoc/index-files/index-10.html new file mode 100644 index 00000000..b53e46d9 --- /dev/null +++ b/docs/javadoc/index-files/index-10.html @@ -0,0 +1,75 @@ + + + + +K-Index + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Index

    +
    +A B C D E F G H I K L M N O P R S T U V W X Z 
    All Classes and Interfaces|All Packages +

    K

    +
    +
    KaimingNormal - Class in mklab.JGNN.nn.initializers
    +
    + +
    +
    KaimingNormal() - Constructor for class mklab.JGNN.nn.initializers.KaimingNormal
    +
     
    +
    KaimingUniform - Class in mklab.JGNN.nn.initializers
    +
    + +
    +
    KaimingUniform() - Constructor for class mklab.JGNN.nn.initializers.KaimingUniform
    +
     
    +
    +A B C D E F G H I K L M N O P R S T U V W X Z 
    All Classes and Interfaces|All Packages
    +
    +
    + + diff --git a/docs/javadoc/index-files/index-11.html b/docs/javadoc/index-files/index-11.html new file mode 100644 index 00000000..aad17943 --- /dev/null +++ b/docs/javadoc/index-files/index-11.html @@ -0,0 +1,148 @@ + + + + +L-Index + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Index

    +
    +A B C D E F G H I K L M N O P R S T U V W X Z 
    All Classes and Interfaces|All Packages +

    L

    +
    +
    L1 - Class in mklab.JGNN.nn.activations
    +
    +
    Implements a NNOperation that performs an L1 transformation of its single input + by row or column.
    +
    +
    L1() - Constructor for class mklab.JGNN.nn.activations.L1
    +
     
    +
    L1(boolean) - Constructor for class mklab.JGNN.nn.activations.L1
    +
     
    +
    labels() - Method in class mklab.JGNN.adhoc.Dataset
    +
    +
    Retrieves the dataset's sample labels in one-hot encoding.
    +
    +
    layer(String) - Method in class mklab.JGNN.adhoc.parsers.FastBuilder
    +
    +
    Applies a FastBuilder.operation(String) and increases the layer identifier count.
    +
    +
    layer(String) - Method in class mklab.JGNN.adhoc.parsers.LayeredBuilder
    +
    +
    Applies a LayeredBuilder.operation(String) and increases the layer identifier count.
    +
    +
    LayeredBuilder - Class in mklab.JGNN.adhoc.parsers
    +
    +
    Extends the capabilities of the ModelBuilder + with the ability to define multilayer (e.g.
    +
    +
    LayeredBuilder() - Constructor for class mklab.JGNN.adhoc.parsers.LayeredBuilder
    +
    +
    Instantiates a layered builder with input name h0.
    +
    +
    LayeredBuilder(String) - Constructor for class mklab.JGNN.adhoc.parsers.LayeredBuilder
    +
    +
    Instantiates a layered builder with the given symbol as an input name.
    +
    +
    layerRepeat(String, int) - Method in class mklab.JGNN.adhoc.parsers.FastBuilder
    +
    +
    Repeats a FastBuilder.layer(String) definition a number of times.
    +
    +
    layerRepeat(String, int) - Method in class mklab.JGNN.adhoc.parsers.LayeredBuilder
    +
    +
    Repeats a LayeredBuilder.layer(String) definition a number of times.
    +
    +
    load(Path) - Static method in class mklab.JGNN.adhoc.ModelBuilder
    +
     
    +
    log() - Method in class mklab.JGNN.core.tensor.DenseTensor
    +
     
    +
    log() - Method in class mklab.JGNN.core.Tensor
    +
    +
    Computes the logarithm of tensor elements.
    +
    +
    Log - Class in mklab.JGNN.nn.operations
    +
    +
    Implements a NNOperation that outputs the natural logarithm of its single input.
    +
    +
    Log() - Constructor for class mklab.JGNN.nn.operations.Log
    +
     
    +
    Loss - Class in mklab.JGNN.nn
    +
    +
    This class provides an abstract implementation of loss functions + to be used during Model training.
    +
    +
    Loss - Interface in mklab.JGNN.core.util
    +
    +
    Provides computation and (partial) derivation of popular activation functions + and cross-entropy loss functions.
    +
    +
    Loss() - Constructor for class mklab.JGNN.nn.Loss
    +
     
    +
    LRelu - Class in mklab.JGNN.nn.activations
    +
    +
    Implements a NNOperation that performs a leaky relu operation, where the first argument is a tensor on which + it is applied and the second one should be a tensor wrapping a double value (consider initializing this as a + Constant holding a tensor generated with Tensor.fromDouble(double)) where + the wrapped value indicates the negative region's slope.
    +
    +
    LRelu() - Constructor for class mklab.JGNN.nn.activations.LRelu
    +
     
    +
    LSTM - Class in mklab.JGNN.nn.operations
    +
     
    +
    LSTM(Optimizer, int, int) - Constructor for class mklab.JGNN.nn.operations.LSTM
    +
     
    +
    LSTM.LSTMState - Class in mklab.JGNN.nn.operations
    +
     
    +
    LSTMState(Tensor, Tensor) - Constructor for class mklab.JGNN.nn.operations.LSTM.LSTMState
    +
     
    +
    +A B C D E F G H I K L M N O P R S T U V W X Z 
    All Classes and Interfaces|All Packages
    +
    +
    + + diff --git a/docs/javadoc/index-files/index-12.html b/docs/javadoc/index-files/index-12.html new file mode 100644 index 00000000..2c2a2fbb --- /dev/null +++ b/docs/javadoc/index-files/index-12.html @@ -0,0 +1,208 @@ + + + + +M-Index + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Index

    +
    +A B C D E F G H I K L M N O P R S T U V W X Z 
    All Classes and Interfaces|All Packages +

    M

    +
    +
    matmul(Matrix) - Method in class mklab.JGNN.core.matrix.DenseMatrix
    +
     
    +
    matmul(Matrix) - Method in class mklab.JGNN.core.Matrix
    +
    +
    Performs the matrix multiplication of this*with and the recipient.
    +
    +
    matmul(Matrix) - Method in class mklab.JGNN.core.matrix.VectorizedMatrix
    +
     
    +
    matmul(Matrix, boolean, boolean) - Method in class mklab.JGNN.core.matrix.DenseMatrix
    +
     
    +
    matmul(Matrix, boolean, boolean) - Method in class mklab.JGNN.core.Matrix
    +
    +
    Can be used to perform fast computation of the matrix multiplications +
    this*with, +
    this.transposed()*with +
    this*with.transposed(), +
    this.transposed()*with.transposed() +
    while avoiding the overhead of calling + Matrix.transposed().
    +
    +
    matmul(Matrix, boolean, boolean) - Method in class mklab.JGNN.core.matrix.VectorizedMatrix
    +
     
    +
    MatMul - Class in mklab.JGNN.nn.operations
    +
    +
    Implements a NNOperation that multiplies its two matrix inputs.
    +
    +
    MatMul() - Constructor for class mklab.JGNN.nn.operations.MatMul
    +
     
    +
    Matrix - Class in mklab.JGNN.core
    +
    +
    This class provides an abstract implementation of Matrix functionalities.
    +
    +
    max() - Method in class mklab.JGNN.core.Tensor
    +
    +
    Computes the maximum tensor element.
    +
    +
    Max - Class in mklab.JGNN.nn.pooling
    +
    +
    Implements a NNOperation that performs row-wise or column-wise + maximum reduction on vector tensors or matrices.
    +
    +
    Max() - Constructor for class mklab.JGNN.nn.pooling.Max
    +
     
    +
    Max(boolean) - Constructor for class mklab.JGNN.nn.pooling.Max
    +
     
    +
    Mean - Class in mklab.JGNN.nn.pooling
    +
    +
    Implements a NNOperation that performs row-wise or column-wise + mean reduction on vector tensors or matrices.
    +
    +
    Mean() - Constructor for class mklab.JGNN.nn.pooling.Mean
    +
     
    +
    Mean(boolean) - Constructor for class mklab.JGNN.nn.pooling.Mean
    +
     
    +
    Memory - Class in mklab.JGNN.core
    +
    +
    A memory management system for thread-safe allocation and release of arrays of doubles.
    +
    +
    Memory() - Constructor for class mklab.JGNN.core.Memory
    +
     
    +
    Memory.Scope - Class in mklab.JGNN.core
    +
     
    +
    min() - Method in class mklab.JGNN.core.Tensor
    +
    +
    Computes the minimum tensor element.
    +
    +
    mklab.JGNN.adhoc - package mklab.JGNN.adhoc
    +
     
    +
    mklab.JGNN.adhoc.datasets - package mklab.JGNN.adhoc.datasets
    +
     
    +
    mklab.JGNN.adhoc.parsers - package mklab.JGNN.adhoc.parsers
    +
     
    +
    mklab.JGNN.core.distribution - package mklab.JGNN.core.distribution
    +
     
    +
    mklab.JGNN.core.empy - package mklab.JGNN.core.empy
    +
     
    +
    mklab.JGNN.core.matrix - package mklab.JGNN.core.matrix
    +
     
    +
    mklab.JGNN.core.tensor - package mklab.JGNN.core.tensor
    +
     
    +
    mklab.JGNN.nn - package mklab.JGNN.nn
    +
     
    +
    mklab.JGNN.nn.activations - package mklab.JGNN.nn.activations
    +
     
    +
    mklab.JGNN.nn.initializers - package mklab.JGNN.nn.initializers
    +
     
    +
    mklab.JGNN.nn.inputs - package mklab.JGNN.nn.inputs
    +
     
    +
    mklab.JGNN.nn.loss - package mklab.JGNN.nn.loss
    +
     
    +
    mklab.JGNN.nn.loss.report - package mklab.JGNN.nn.loss.report
    +
     
    +
    mklab.JGNN.nn.operations - package mklab.JGNN.nn.operations
    +
     
    +
    mklab.JGNN.nn.optimizers - package mklab.JGNN.nn.optimizers
    +
     
    +
    mklab.JGNN.nn.pooling - package mklab.JGNN.nn.pooling
    +
     
    +
    Model - Class in mklab.JGNN.nn
    +
    +
    This class is a way to organize NNOperation trees into trainable machine + learning models.
    +
    +
    Model() - Constructor for class mklab.JGNN.nn.Model
    +
    +
    Deprecated. +
    Prefer using model builders for symbolic model definitions.
    +
    +
    +
    ModelBuilder - Class in mklab.JGNN.adhoc
    +
    +
    This class and subclasses can be used to create Model instances + by automatically creating and managing NNOperation instances based on + textual descriptions.
    +
    +
    ModelBuilder() - Constructor for class mklab.JGNN.adhoc.ModelBuilder
    +
     
    +
    ModelBuilder(Model) - Constructor for class mklab.JGNN.adhoc.ModelBuilder
    +
     
    +
    ModelTraining - Class in mklab.JGNN.nn
    +
    +
    This is a helper class that automates the definition of training processes of Model instances + by defining the number of epochs, loss functions, number of batches and the ability to use ThreadPool + for parallelized batch computations.
    +
    +
    ModelTraining() - Constructor for class mklab.JGNN.nn.ModelTraining
    +
     
    +
    multiply(double) - Method in class mklab.JGNN.core.tensor.DenseTensor
    +
     
    +
    multiply(double) - Method in class mklab.JGNN.core.Tensor
    +
     
    +
    multiply(double) - Method in class mklab.JGNN.core.tensor.VectorizedTensor
    +
     
    +
    multiply(Tensor) - Method in class mklab.JGNN.core.tensor.DenseTensor
    +
     
    +
    multiply(Tensor) - Method in class mklab.JGNN.core.Tensor
    +
     
    +
    multiply(Tensor) - Method in class mklab.JGNN.core.tensor.VectorizedTensor
    +
     
    +
    Multiply - Class in mklab.JGNN.nn.operations
    +
    +
    Implements a NNOperation that multiplies its two inputs element-by-element.
    +
    +
    Multiply() - Constructor for class mklab.JGNN.nn.operations.Multiply
    +
     
    +
    +A B C D E F G H I K L M N O P R S T U V W X Z 
    All Classes and Interfaces|All Packages
    +
    +
    + + diff --git a/docs/javadoc/index-files/index-13.html b/docs/javadoc/index-files/index-13.html new file mode 100644 index 00000000..b675ede8 --- /dev/null +++ b/docs/javadoc/index-files/index-13.html @@ -0,0 +1,105 @@ + + + + +N-Index + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Index

    +
    +A B C D E F G H I K L M N O P R S T U V W X Z 
    All Classes and Interfaces|All Packages +

    N

    +
    +
    negative() - Method in class mklab.JGNN.core.tensor.DenseTensor
    +
     
    +
    negative() - Method in class mklab.JGNN.core.Tensor
    +
    +
    Computes the negative of tensor elements.
    +
    +
    Neuralang - Class in mklab.JGNN.adhoc.parsers
    +
     
    +
    Neuralang() - Constructor for class mklab.JGNN.adhoc.parsers.Neuralang
    +
     
    +
    NExp - Class in mklab.JGNN.nn.activations
    +
    +
    Implements a NNOperation that performs an exponential transformation of + its single input, but only on the non-zero elements.
    +
    +
    NExp() - Constructor for class mklab.JGNN.nn.activations.NExp
    +
     
    +
    next() - Method in class mklab.JGNN.core.util.Range
    +
     
    +
    next() - Method in class mklab.JGNN.core.util.Range2D
    +
     
    +
    NNOperation - Class in mklab.JGNN.nn
    +
    +
    This class defines an abstract neural network operation with forward and + backpropagation capabilities.
    +
    +
    norm() - Method in class mklab.JGNN.core.Tensor
    +
     
    +
    Normal - Class in mklab.JGNN.core.distribution
    +
    +
    Implements a Normal Distribution of given mean and standard deviation.
    +
    +
    Normal() - Constructor for class mklab.JGNN.core.distribution.Normal
    +
    +
    Instantiates a normal distribution with zero mean and standard deviation equal to 1.
    +
    +
    Normal(double, double) - Constructor for class mklab.JGNN.core.distribution.Normal
    +
    +
    Instantiates a normal distribution with a given mean and standard deviation.
    +
    +
    normalized() - Method in class mklab.JGNN.core.Tensor
    +
     
    +
    +A B C D E F G H I K L M N O P R S T U V W X Z 
    All Classes and Interfaces|All Packages
    +
    +
    + + diff --git a/docs/javadoc/index-files/index-14.html b/docs/javadoc/index-files/index-14.html new file mode 100644 index 00000000..3b3e4b39 --- /dev/null +++ b/docs/javadoc/index-files/index-14.html @@ -0,0 +1,89 @@ + + + + +O-Index + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Index

    +
    +A B C D E F G H I K L M N O P R S T U V W X Z 
    All Classes and Interfaces|All Packages +

    O

    +
    +
    onesMask() - Method in class mklab.JGNN.core.Matrix
    +
    +
    Produces a mask that indicates the non-zero elements of the matrix.
    +
    +
    operation(String) - Method in class mklab.JGNN.adhoc.ModelBuilder
    +
    +
    Parses one or more operations split by new line characters or ; + to add to the execution graph.
    +
    +
    operation(String) - Method in class mklab.JGNN.adhoc.parsers.FastBuilder
    +
     
    +
    operation(String) - Method in class mklab.JGNN.adhoc.parsers.LayeredBuilder
    +
     
    +
    Optimizer - Interface in mklab.JGNN.nn
    +
    +
    Provides an interface for training tensors.
    +
    +
    out(String) - Method in class mklab.JGNN.adhoc.ModelBuilder
    +
    +
    Declares the component with the given name an output of the + managed model.
    +
    +
    out(String) - Method in class mklab.JGNN.adhoc.parsers.LayeredBuilder
    +
     
    +
    output(Tensor, LSTM.LSTMState) - Method in class mklab.JGNN.nn.operations.LSTM
    +
     
    +
    +A B C D E F G H I K L M N O P R S T U V W X Z 
    All Classes and Interfaces|All Packages
    +
    +
    + + diff --git a/docs/javadoc/index-files/index-15.html b/docs/javadoc/index-files/index-15.html new file mode 100644 index 00000000..4723ca0d --- /dev/null +++ b/docs/javadoc/index-files/index-15.html @@ -0,0 +1,228 @@ + + + + +P-Index + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Index

    +
    +A B C D E F G H I K L M N O P R S T U V W X Z 
    All Classes and Interfaces|All Packages +

    P

    +
    +
    param(String, double, Tensor) - Method in class mklab.JGNN.adhoc.ModelBuilder
    +
    +
    Declares a learnable Parameter component with the given name, + L2 regularization, and initial value.
    +
    +
    param(String, double, Tensor) - Method in class mklab.JGNN.adhoc.parsers.FastBuilder
    +
     
    +
    param(String, double, Tensor) - Method in class mklab.JGNN.adhoc.parsers.LayeredBuilder
    +
     
    +
    param(String, Tensor) - Method in class mklab.JGNN.adhoc.ModelBuilder
    +
    +
    Declares a learnable mklab.JGNN.nn.inputs.Parameter component with the given name, + zero regularization, and initial value.
    +
    +
    param(String, Tensor) - Method in class mklab.JGNN.adhoc.parsers.FastBuilder
    +
     
    +
    param(String, Tensor) - Method in class mklab.JGNN.adhoc.parsers.LayeredBuilder
    +
     
    +
    Parameter - Class in mklab.JGNN.nn.inputs
    +
    +
    Implements a NNOperation that holds and returns a parameter tensor.
    +
    +
    Parameter(Tensor) - Constructor for class mklab.JGNN.nn.inputs.Parameter
    +
     
    +
    Parameter(Tensor, double) - Constructor for class mklab.JGNN.nn.inputs.Parameter
    +
     
    +
    parse(String) - Method in class mklab.JGNN.adhoc.parsers.Neuralang
    +
     
    +
    parse(Path) - Method in class mklab.JGNN.adhoc.parsers.Neuralang
    +
     
    +
    persist() - Method in class mklab.JGNN.core.empy.EmptyMatrix
    +
     
    +
    persist() - Method in class mklab.JGNN.core.empy.EmptyTensor
    +
     
    +
    persist() - Method in class mklab.JGNN.core.matrix.AccessCol
    +
     
    +
    persist() - Method in class mklab.JGNN.core.matrix.AccessRow
    +
     
    +
    persist() - Method in class mklab.JGNN.core.matrix.ColumnRepetition
    +
     
    +
    persist() - Method in class mklab.JGNN.core.matrix.DenseMatrix
    +
     
    +
    persist() - Method in class mklab.JGNN.core.matrix.Diagonal
    +
     
    +
    persist() - Method in class mklab.JGNN.core.matrix.RepeatMatrix
    +
     
    +
    persist() - Method in class mklab.JGNN.core.matrix.RowRepetition
    +
     
    +
    persist() - Method in class mklab.JGNN.core.matrix.SparseMatrix
    +
     
    +
    persist() - Method in class mklab.JGNN.core.matrix.SparseSymmetric
    +
    +
    Deprecated.
    +
    persist() - Method in class mklab.JGNN.core.matrix.TransposedMatrix
    +
     
    +
    persist() - Method in class mklab.JGNN.core.matrix.VectorizedMatrix
    +
     
    +
    persist() - Method in class mklab.JGNN.core.matrix.WrapCols
    +
     
    +
    persist() - Method in class mklab.JGNN.core.matrix.WrapRows
    +
     
    +
    persist() - Method in class mklab.JGNN.core.tensor.AccessSubtensor
    +
     
    +
    persist() - Method in class mklab.JGNN.core.tensor.DenseTensor
    +
     
    +
    persist() - Method in class mklab.JGNN.core.Tensor
    +
    +
    Deprecated. +
    This method may not be present in future versions + of the library, depending on whether memory reuse proves useful or not.
    +
    +
    +
    persist() - Method in class mklab.JGNN.core.tensor.RepeatTensor
    +
     
    +
    persist() - Method in class mklab.JGNN.core.tensor.SparseTensor
    +
     
    +
    persist() - Method in class mklab.JGNN.core.tensor.VectorizedTensor
    +
     
    +
    predict(List<Tensor>) - Method in class mklab.JGNN.nn.Model
    +
    +
    Forward run of the model given a list of input tensors.
    +
    +
    predict(Tensor...) - Method in class mklab.JGNN.nn.Model
    +
    +
    Forward run of the model given an array of input tensors.
    +
    +
    predict(Tensor[]) - Method in class mklab.JGNN.nn.operations.LSTM
    +
     
    +
    PRelu - Class in mklab.JGNN.nn.activations
    +
     
    +
    PRelu() - Constructor for class mklab.JGNN.nn.activations.PRelu
    +
     
    +
    print() - Method in class mklab.JGNN.adhoc.ModelBuilder
    +
     
    +
    printState() - Method in class mklab.JGNN.adhoc.ModelBuilder
    +
     
    +
    Pubmed - Class in mklab.JGNN.adhoc.datasets
    +
    +
    Downloads and constructs the Pubmed node classification Dataset.
    +
    +
    Pubmed() - Constructor for class mklab.JGNN.adhoc.datasets.Pubmed
    +
     
    +
    put(int, double) - Method in class mklab.JGNN.core.tensor.DenseTensor
    +
     
    +
    put(int, double) - Method in class mklab.JGNN.core.tensor.VectorizedTensor
    +
     
    +
    put(long, double) - Method in class mklab.JGNN.core.empy.EmptyMatrix
    +
     
    +
    put(long, double) - Method in class mklab.JGNN.core.empy.EmptyTensor
    +
     
    +
    put(long, double) - Method in class mklab.JGNN.core.matrix.AccessCol
    +
     
    +
    put(long, double) - Method in class mklab.JGNN.core.matrix.AccessRow
    +
     
    +
    put(long, double) - Method in class mklab.JGNN.core.matrix.ColumnRepetition
    +
     
    +
    put(long, double) - Method in class mklab.JGNN.core.matrix.DenseMatrix
    +
     
    +
    put(long, double) - Method in class mklab.JGNN.core.matrix.Diagonal
    +
     
    +
    put(long, double) - Method in class mklab.JGNN.core.matrix.RepeatMatrix
    +
     
    +
    put(long, double) - Method in class mklab.JGNN.core.matrix.RowRepetition
    +
     
    +
    put(long, double) - Method in class mklab.JGNN.core.matrix.SparseMatrix
    +
     
    +
    put(long, double) - Method in class mklab.JGNN.core.matrix.SparseSymmetric
    +
    +
    Deprecated.
    +
    put(long, double) - Method in class mklab.JGNN.core.matrix.TransposedMatrix
    +
     
    +
    put(long, double) - Method in class mklab.JGNN.core.matrix.VectorizedMatrix
    +
     
    +
    put(long, double) - Method in class mklab.JGNN.core.matrix.WrapCols
    +
     
    +
    put(long, double) - Method in class mklab.JGNN.core.matrix.WrapRows
    +
     
    +
    put(long, double) - Method in class mklab.JGNN.core.tensor.AccessSubtensor
    +
     
    +
    put(long, double) - Method in class mklab.JGNN.core.tensor.DenseTensor
    +
     
    +
    put(long, double) - Method in class mklab.JGNN.core.Tensor
    +
    +
    Assign a value to a tensor element.
    +
    +
    put(long, double) - Method in class mklab.JGNN.core.tensor.RepeatTensor
    +
     
    +
    put(long, double) - Method in class mklab.JGNN.core.tensor.SparseTensor
    +
     
    +
    put(long, double) - Method in class mklab.JGNN.core.tensor.VectorizedTensor
    +
     
    +
    put(long, long, double) - Method in class mklab.JGNN.core.Matrix
    +
    +
    Stores values at matrix elements.
    +
    +
    putAdd(int, double) - Method in class mklab.JGNN.core.tensor.DenseTensor
    +
     
    +
    putAdd(int, double) - Method in class mklab.JGNN.core.tensor.VectorizedTensor
    +
     
    +
    putAdd(long, double) - Method in class mklab.JGNN.core.Tensor
    +
    +
    Add a value to a tensor element.
    +
    +
    +A B C D E F G H I K L M N O P R S T U V W X Z 
    All Classes and Interfaces|All Packages
    +
    +
    + + diff --git a/docs/javadoc/index-files/index-16.html b/docs/javadoc/index-files/index-16.html new file mode 100644 index 00000000..272cf52b --- /dev/null +++ b/docs/javadoc/index-files/index-16.html @@ -0,0 +1,262 @@ + + + + +R-Index + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Index

    +
    +A B C D E F G H I K L M N O P R S T U V W X Z 
    All Classes and Interfaces|All Packages +

    R

    +
    +
    range(double, double) - Method in class mklab.JGNN.core.Slice
    +
    +
    Performs the Slice.range(int, int) operation + while replacing values of from and end + with (int)(from*size()) and (int)(end*size()) + so that fractional ranges can be obtained.
    +
    +
    range(int, int) - Method in class mklab.JGNN.core.Slice
    +
    +
    Obtains the identifiers in a given range of the (shuffled) slice.
    +
    +
    Range - Class in mklab.JGNN.core.util
    +
    +
    Implements an iterator that traverses a range (similar to Python's range(min, max) method).
    +
    +
    Range(long, long) - Constructor for class mklab.JGNN.core.util.Range
    +
     
    +
    Range2D - Class in mklab.JGNN.core.util
    +
    +
    Implements an iterator that traverses a two-dimensional range (min, max) x (min2, max2).
    +
    +
    Range2D(long, long, long, long) - Constructor for class mklab.JGNN.core.util.Range2D
    +
     
    +
    Reduce - Class in mklab.JGNN.nn.operations
    +
     
    +
    Reduce() - Constructor for class mklab.JGNN.nn.operations.Reduce
    +
     
    +
    register(double[]) - Method in class mklab.JGNN.core.Memory.Scope
    +
     
    +
    Regularization - Class in mklab.JGNN.nn.optimizers
    +
    +
    Wraps an Optimizer by applying the derivative of L2 loss + on every tensor during Optimizer.update(Tensor, Tensor).
    +
    +
    Regularization(Optimizer, double) - Constructor for class mklab.JGNN.nn.optimizers.Regularization
    +
    +
    Initializes a Regularization.
    +
    +
    release() - Method in class mklab.JGNN.core.empy.EmptyMatrix
    +
     
    +
    release() - Method in class mklab.JGNN.core.empy.EmptyTensor
    +
     
    +
    release() - Method in class mklab.JGNN.core.matrix.AccessCol
    +
     
    +
    release() - Method in class mklab.JGNN.core.matrix.AccessRow
    +
     
    +
    release() - Method in class mklab.JGNN.core.matrix.ColumnRepetition
    +
     
    +
    release() - Method in class mklab.JGNN.core.matrix.DenseMatrix
    +
     
    +
    release() - Method in class mklab.JGNN.core.matrix.Diagonal
    +
     
    +
    release() - Method in class mklab.JGNN.core.matrix.RepeatMatrix
    +
     
    +
    release() - Method in class mklab.JGNN.core.matrix.RowRepetition
    +
     
    +
    release() - Method in class mklab.JGNN.core.matrix.SparseMatrix
    +
     
    +
    release() - Method in class mklab.JGNN.core.matrix.SparseSymmetric
    +
    +
    Deprecated.
    +
    release() - Method in class mklab.JGNN.core.matrix.TransposedMatrix
    +
     
    +
    release() - Method in class mklab.JGNN.core.matrix.VectorizedMatrix
    +
     
    +
    release() - Method in class mklab.JGNN.core.matrix.WrapCols
    +
     
    +
    release() - Method in class mklab.JGNN.core.matrix.WrapRows
    +
     
    +
    release() - Method in class mklab.JGNN.core.tensor.AccessSubtensor
    +
     
    +
    release() - Method in class mklab.JGNN.core.tensor.DenseTensor
    +
     
    +
    release() - Method in class mklab.JGNN.core.Tensor
    +
    +
    Deprecated. +
    This method may not be present in future versions + of the library, depending on whether memory reuse proves useful or not.
    +
    +
    +
    release() - Method in class mklab.JGNN.core.tensor.RepeatTensor
    +
     
    +
    release() - Method in class mklab.JGNN.core.tensor.SparseTensor
    +
     
    +
    release() - Method in class mklab.JGNN.core.tensor.VectorizedTensor
    +
     
    +
    release(double[]) - Static method in class mklab.JGNN.core.Memory
    +
     
    +
    relu(double) - Static method in interface mklab.JGNN.core.util.Loss
    +
    +
    The relu activation: x if x > 0, 0 otherwise.
    +
    +
    relu(Tensor) - Static method in interface mklab.JGNN.core.util.Loss
    +
    +
    Applies Loss.relu(double) element-by-element.
    +
    +
    Relu - Class in mklab.JGNN.nn.activations
    +
    +
    Implements a NNOperation that performs a relu transformation of its single input first introduced by + Hahnloser, Richard HR, Rahul Sarpeshkar, Misha A.
    +
    +
    Relu() - Constructor for class mklab.JGNN.nn.activations.Relu
    +
     
    +
    reluDerivative(double) - Static method in interface mklab.JGNN.core.util.Loss
    +
    +
    The derivative of the Loss.relu(double) function.
    +
    +
    reluDerivative(Tensor) - Static method in interface mklab.JGNN.core.util.Loss
    +
    +
    Applies Loss.reluDerivative(double) function.
    +
    +
    rememberAs(String) - Method in class mklab.JGNN.adhoc.parsers.FastBuilder
    +
    +
    Remembers the last layer's output under a given identifier so that {layerId} + within future FastBuilder.layer(String) definitions is made to refer to the + current layer.
    +
    +
    rememberAs(String) - Method in class mklab.JGNN.adhoc.parsers.LayeredBuilder
    +
    +
    Sets the current layer identifier to a specific symbol layerId + so that future usage of {layerId} is automatically replaced with + the identifier.
    +
    +
    Repeat - Class in mklab.JGNN.nn.operations
    +
    +
    Implements a NNOperation that converts its first argument to a ColumnRepetition matrix + with a number of columns equal to the second argument.
    +
    +
    Repeat() - Constructor for class mklab.JGNN.nn.operations.Repeat
    +
     
    +
    RepeatMatrix - Class in mklab.JGNN.core.matrix
    +
    +
    Implements a Matrix whose elements are all equal.
    +
    +
    RepeatMatrix(double, long, long) - Constructor for class mklab.JGNN.core.matrix.RepeatMatrix
    +
    +
    Generates a dense matrix with the designated number of rows and columns.
    +
    +
    RepeatTensor - Class in mklab.JGNN.core.tensor
    +
    +
    This class provides a Tensor whose elements are all equal.
    +
    +
    RepeatTensor(double, long) - Constructor for class mklab.JGNN.core.tensor.RepeatTensor
    +
     
    +
    reset() - Method in class mklab.JGNN.nn.loss.report.VerboseLoss
    +
     
    +
    reset() - Method in interface mklab.JGNN.nn.Optimizer
    +
    +
    Resets (and lets the garbage collector free) optimizer memory.
    +
    +
    reset() - Method in class mklab.JGNN.nn.optimizers.Adam
    +
     
    +
    reset() - Method in class mklab.JGNN.nn.optimizers.BatchOptimizer
    +
     
    +
    reset() - Method in class mklab.JGNN.nn.optimizers.GradientDescent
    +
     
    +
    reset() - Method in class mklab.JGNN.nn.optimizers.Regularization
    +
     
    +
    Reshape - Class in mklab.JGNN.nn.operations
    +
    +
    Implements a NNOperation that reshapes a matrix.
    +
    +
    Reshape(long, long) - Constructor for class mklab.JGNN.nn.operations.Reshape
    +
     
    +
    RowRepetition - Class in mklab.JGNN.core.matrix
    +
    +
    Defines a matrix whose rows are all a copy of a Tensor.
    +
    +
    RowRepetition(Tensor, long) - Constructor for class mklab.JGNN.core.matrix.RowRepetition
    +
    +
    Instantiates a matrix repeating a tensor to be treated as a row.
    +
    +
    run(List<Tensor>) - Method in class mklab.JGNN.nn.NNOperation
    +
    +
    Performs a forward pass in the operation without inducing any kind of + learning or storing the outcome.
    +
    +
    run(Tensor...) - Method in class mklab.JGNN.nn.NNOperation
    +
    +
    Performs a forward pass in the operation without inducing any kind of + learning or storing the outcome.
    +
    +
    runModel(ArrayList<Tensor>) - Method in class mklab.JGNN.adhoc.ModelBuilder
    +
    +
    This is a wrapper for getModel().predict(inputs) + without returning output values (use ModelBuilder.get(String) + afterwards to view outputs).
    +
    +
    runModel(Tensor...) - Method in class mklab.JGNN.adhoc.ModelBuilder
    +
    +
    This is a wrapper for getModel().predict(inputs) + without returning output values (use ModelBuilder.get(String) + afterwards to view outputs).
    +
    +
    runPrediction() - Method in class mklab.JGNN.nn.NNOperation
    +
     
    +
    runPredictionAndAutosize() - Method in class mklab.JGNN.nn.NNOperation
    +
     
    +
    +A B C D E F G H I K L M N O P R S T U V W X Z 
    All Classes and Interfaces|All Packages
    +
    +
    + + diff --git a/docs/javadoc/index-files/index-17.html b/docs/javadoc/index-files/index-17.html new file mode 100644 index 00000000..3617c7e6 --- /dev/null +++ b/docs/javadoc/index-files/index-17.html @@ -0,0 +1,473 @@ + + + + +S-Index + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Index

    +
    +A B C D E F G H I K L M N O P R S T U V W X Z 
    All Classes and Interfaces|All Packages +

    S

    +
    +
    sample() - Method in class mklab.JGNN.core.distribution.Normal
    +
     
    +
    sample() - Method in interface mklab.JGNN.core.Distribution
    +
    +
    Retrieves a new sample from the distribution.
    +
    +
    sample() - Method in class mklab.JGNN.core.distribution.Uniform
    +
     
    +
    samples() - Method in class mklab.JGNN.adhoc.Dataset
    +
    +
    Retrieves a converter that maps samples to long identifiers that match them to + rows of Dataset.features(), Dataset.labels(), and Dataset.graph() matrices.
    +
    +
    samplesAsFeatures() - Method in class mklab.JGNN.core.Slice
    +
    +
    Constructs a column matrix holding identifiers in + the range 0,1,..Slice.size()-1 so that the pattern + slice.samplesAsFeatures().accessRows(slice.range(from, end)) + retrieves one-element tensors holding + slice[from], slice[from+1], ...
    +
    +
    save(Path) - Method in class mklab.JGNN.adhoc.ModelBuilder
    +
     
    +
    scope() - Static method in class mklab.JGNN.core.Memory
    +
     
    +
    selfAbs() - Method in class mklab.JGNN.core.tensor.DenseTensor
    +
     
    +
    selfAbs() - Method in class mklab.JGNN.core.Tensor
    +
    +
    Performs in-memory set of each element to its absolute value.
    +
    +
    selfAdd(double) - Method in class mklab.JGNN.core.tensor.DenseTensor
    +
     
    +
    selfAdd(double) - Method in class mklab.JGNN.core.Tensor
    +
    +
    Performs in-memory addition to the Tensor, storing the result in itself.
    +
    +
    selfAdd(Tensor) - Method in class mklab.JGNN.core.tensor.DenseTensor
    +
     
    +
    selfAdd(Tensor) - Method in class mklab.JGNN.core.Tensor
    +
    +
    Performs in-memory addition to the Tensor, storing the result in itself.
    +
    +
    selfAdd(Tensor) - Method in class mklab.JGNN.core.tensor.VectorizedTensor
    +
     
    +
    selfAdd(Tensor, double) - Method in class mklab.JGNN.core.Tensor
    +
    +
    Performs in-memory weighted addition to the Tensor, storing the result in itself.
    +
    +
    selfExpMinusOne() - Method in class mklab.JGNN.core.tensor.DenseTensor
    +
     
    +
    selfExpMinusOne() - Method in class mklab.JGNN.core.Tensor
    +
    +
    Sets the exponential minus 1 of tensor elements.
    +
    +
    selfInverse() - Method in class mklab.JGNN.core.tensor.DenseTensor
    +
     
    +
    selfInverse() - Method in class mklab.JGNN.core.Tensor
    +
    +
    Performs in-memory the inverse of each non-zero element.
    +
    +
    selfLog() - Method in class mklab.JGNN.core.tensor.DenseTensor
    +
     
    +
    selfLog() - Method in class mklab.JGNN.core.Tensor
    +
    +
    Performs in-memory set of each element to the logarithm of its absolute value.
    +
    +
    selfMultiply(double) - Method in class mklab.JGNN.core.tensor.DenseTensor
    +
     
    +
    selfMultiply(double) - Method in class mklab.JGNN.core.Tensor
    +
    +
    Performs in-memory multiplication on the Tensor, storing the result to itself.
    +
    +
    selfMultiply(double) - Method in class mklab.JGNN.core.tensor.VectorizedTensor
    +
     
    +
    selfMultiply(Tensor) - Method in class mklab.JGNN.core.tensor.DenseTensor
    +
     
    +
    selfMultiply(Tensor) - Method in class mklab.JGNN.core.Tensor
    +
    +
    Performs in-memory multiplication on the Tensor, storing the result in itself.
    +
    +
    selfMultiply(Tensor) - Method in class mklab.JGNN.core.tensor.VectorizedTensor
    +
     
    +
    selfNegative() - Method in class mklab.JGNN.core.tensor.DenseTensor
    +
     
    +
    selfNegative() - Method in class mklab.JGNN.core.Tensor
    +
    +
    Performs in-memory set of each element to the negative of itself.
    +
    +
    selfSqrt() - Method in class mklab.JGNN.core.tensor.DenseTensor
    +
     
    +
    selfSqrt() - Method in class mklab.JGNN.core.Tensor
    +
    +
    Performs in-memory set of each element to the square root of its absolute value.
    +
    +
    selfSubtract(Tensor) - Method in class mklab.JGNN.core.tensor.DenseTensor
    +
     
    +
    selfSubtract(Tensor) - Method in class mklab.JGNN.core.Tensor
    +
    +
    Performs in-memory subtraction from the Tensor, storing the result in itself.
    +
    +
    selfSubtract(Tensor) - Method in class mklab.JGNN.core.tensor.VectorizedTensor
    +
     
    +
    set(Tensor) - Method in class mklab.JGNN.nn.inputs.Parameter
    +
    +
    Forcefully sets the parameter's value tensor to the desired value.
    +
    +
    setColName(String) - Method in class mklab.JGNN.core.Matrix
    +
    +
    Sets a name for the matrix's column dimension.
    +
    +
    setDescription(String) - Method in class mklab.JGNN.nn.NNOperation
    +
     
    +
    setDeviation(double) - Method in class mklab.JGNN.core.distribution.Normal
    +
     
    +
    setDeviation(double) - Method in interface mklab.JGNN.core.Distribution
    +
    +
    Sets the standard deviation of the distribution.
    +
    +
    setDeviation(double) - Method in class mklab.JGNN.core.distribution.Uniform
    +
     
    +
    setDiagonal(long, double) - Method in class mklab.JGNN.core.Matrix
    +
    +
    Sets the matrix's specified diagonal elements to a given value.
    +
    +
    setDimensionName(String) - Method in class mklab.JGNN.core.Tensor
    +
    +
    Sets a name for the tensor's one dimension.
    +
    +
    setDimensionName(String) - Method in class mklab.JGNN.nn.pooling.Sort
    +
     
    +
    setDimensionName(String, String) - Method in class mklab.JGNN.adhoc.IdConverter
    +
    +
    Sets dimension names for one-hot encodings.
    +
    +
    setDimensionName(String, String) - Method in class mklab.JGNN.core.Matrix
    +
    +
    Sets a name for the matrix's row and column dimensions.
    +
    +
    setDimensionName(String, String) - Method in class mklab.JGNN.nn.operations.Reshape
    +
     
    +
    setDimensionName(Tensor) - Method in class mklab.JGNN.core.Matrix
    +
     
    +
    setDimensionName(Tensor) - Method in class mklab.JGNN.core.Tensor
    +
    +
    Fills in dimension names per an example Tensor.isMatching(mklab.JGNN.core.Tensor) tensor.
    +
    +
    setEnabled(boolean) - Method in class mklab.JGNN.nn.operations.Dropout
    +
     
    +
    setEpochs(int) - Method in class mklab.JGNN.nn.ModelTraining
    +
    +
    Sets the maximum number of epochs for which training runs.
    +
    +
    setInterval(int) - Method in class mklab.JGNN.nn.loss.report.VerboseLoss
    +
    +
    Changes the epochs on which the loss should be reported.
    +
    +
    setKey(K) - Method in class mklab.JGNN.core.util.FastEntry
    +
     
    +
    setLoss(Loss) - Method in class mklab.JGNN.nn.ModelTraining
    +
    +
    Sets the Loss used during training.
    +
    +
    setMainDiagonal(double) - Method in class mklab.JGNN.core.Matrix
    +
    +
    Sets the matrix's specified main diagonal elements to a given value.
    +
    +
    setMean(double) - Method in class mklab.JGNN.core.distribution.Normal
    +
     
    +
    setMean(double) - Method in interface mklab.JGNN.core.Distribution
    +
    +
    Sets the mean of the distribution.
    +
    +
    setMean(double) - Method in class mklab.JGNN.core.distribution.Uniform
    +
     
    +
    setMeanReduction(boolean) - Method in class mklab.JGNN.nn.loss.CategoricalCrossEntropy
    +
    +
    Sets the reduction mechanism of categorical cross entropy.
    +
    +
    setNumBatches(int) - Method in class mklab.JGNN.nn.ModelTraining
    +
    +
    Sets the number of batches training data slices should be split into.
    +
    +
    setOptimizer(Optimizer) - Method in class mklab.JGNN.nn.ModelTraining
    +
    +
    Sets an Optimizer instance that controls parameter updates during training.
    +
    +
    setParallelizedStochasticGradientDescent(boolean) - Method in class mklab.JGNN.nn.ModelTraining
    +
    +
    Sets whether the training strategy should reflect stochastic + gradient descent by randomly sampling from the training dataset to obtain data samples.
    +
    +
    setPatience(int) - Method in class mklab.JGNN.nn.ModelTraining
    +
    +
    Sets the patience of the training strategy that performs early stopping.
    +
    +
    setRange(double, double) - Method in class mklab.JGNN.core.distribution.Uniform
    +
    +
    Sets the range of the uniform distribution.
    +
    +
    setRowName(String) - Method in class mklab.JGNN.core.Matrix
    +
    +
    Sets a name for the matrix's row dimension.
    +
    +
    setSeed(long) - Method in class mklab.JGNN.core.distribution.Normal
    +
     
    +
    setSeed(long) - Method in interface mklab.JGNN.core.Distribution
    +
    +
    Sets the distribution's seed.
    +
    +
    setSeed(long) - Method in class mklab.JGNN.core.distribution.Uniform
    +
     
    +
    setStream(PrintStream) - Method in class mklab.JGNN.nn.loss.report.VerboseLoss
    +
    +
    Changes where the output is printed.
    +
    +
    setTo(Tensor) - Method in class mklab.JGNN.nn.inputs.Variable
    +
     
    +
    setToASymmetricNormalization() - Method in class mklab.JGNN.core.Matrix
    +
    +
    Sets the Matrix to its asymmetrically normalized transformation + by appropriately adjusting its element values.
    +
    +
    setToNormalized() - Method in class mklab.JGNN.core.Tensor
    +
    +
    L2-normalizes the tensor's elements.
    +
    +
    setToOnes() - Method in class mklab.JGNN.core.Tensor
    +
    +
    Set all tensor element values to 1.
    +
    +
    setToProbability() - Method in class mklab.JGNN.core.Tensor
    +
    +
    Divides the tensor's elements with their sum.
    +
    +
    setToRandom() - Method in class mklab.JGNN.core.Tensor
    +
    +
    Set tensor elements to random values from the uniform range [0,1]
    +
    +
    setToRandom(Distribution) - Method in class mklab.JGNN.core.Tensor
    +
    +
    Set tensor elements to random values by sampling them from a given Distribution + instance.
    +
    +
    setToSymmetricNormalization() - Method in class mklab.JGNN.core.Matrix
    +
    +
    Sets the Matrix to its symmetrically normalized transformation + by appropriately adjusting its element values.
    +
    +
    setToUniform() - Method in class mklab.JGNN.core.Tensor
    +
    +
    Set all tensor element values to 1/Tensor.size()
    +
    +
    setToZero() - Method in class mklab.JGNN.core.Tensor
    +
    +
    Set all tensor element values to 0.
    +
    +
    setValidationLoss(Loss) - Method in class mklab.JGNN.nn.ModelTraining
    +
     
    +
    setValue(V) - Method in class mklab.JGNN.core.util.FastEntry
    +
     
    +
    setVerbose(boolean) - Method in class mklab.JGNN.nn.ModelTraining
    +
    +
    Deprecated. +
    This method was available in earlier JGNN versions but will be gradually phased out. + Instead, wrap the validation loss within VerboseLoss to replicate + the same behavior.
    +
    +
    +
    setZeroCopyType(Matrix) - Method in class mklab.JGNN.core.matrix.WrapCols
    +
    +
    Sets a prototype matrix from which to borrow copying operations.
    +
    +
    setZeroCopyType(Matrix) - Method in class mklab.JGNN.core.matrix.WrapRows
    +
    +
    Sets a prototype matrix from which to borrow copying operations.
    +
    +
    shuffle() - Method in class mklab.JGNN.core.Slice
    +
    +
    Shuffles the slice.
    +
    +
    shuffle(int) - Method in class mklab.JGNN.core.Slice
    +
    +
    Shuffles the slice with a provided randomization seed.
    +
    +
    sigmoid(double) - Static method in interface mklab.JGNN.core.util.Loss
    +
    +
    The sigmoid function 1/(1+exp(-x)).
    +
    +
    sigmoid(Tensor) - Static method in interface mklab.JGNN.core.util.Loss
    +
    +
    Applies Loss.sigmoid(double) element-by-element.
    +
    +
    Sigmoid - Class in mklab.JGNN.nn.activations
    +
    +
    Implements a NNOperation that performs a sigmoid transformation of its single input.
    +
    +
    Sigmoid() - Constructor for class mklab.JGNN.nn.activations.Sigmoid
    +
     
    +
    sigmoidDerivative(double) - Static method in interface mklab.JGNN.core.util.Loss
    +
    +
    The derivative of the Loss.sigmoid(double) function.
    +
    +
    sigmoidDerivative(Tensor) - Static method in interface mklab.JGNN.core.util.Loss
    +
    + +
    +
    size() - Method in class mklab.JGNN.adhoc.IdConverter
    +
    +
    The number of registered identifiers.
    +
    +
    size() - Method in class mklab.JGNN.core.Slice
    +
    +
    Retrieves the size of the slice.
    +
    +
    size() - Method in class mklab.JGNN.core.Tensor
    +
     
    +
    Slice - Class in mklab.JGNN.core
    +
    +
    This class provides an interface with which to define data slices, + for instance to sample labels.
    +
    +
    Slice(Iterable<Long>) - Constructor for class mklab.JGNN.core.Slice
    +
    +
    Instantiates the data slice from a collection of element identifiers.
    +
    +
    SoftMax - Class in mklab.JGNN.nn.pooling
    +
    +
    Implements a NNOperation that performs row-wise or column-wise + softmax on vector tensors or matrices.
    +
    +
    SoftMax() - Constructor for class mklab.JGNN.nn.pooling.SoftMax
    +
     
    +
    SoftMax(boolean) - Constructor for class mklab.JGNN.nn.pooling.SoftMax
    +
     
    +
    Sort - Class in mklab.JGNN.core.util
    +
     
    +
    Sort - Class in mklab.JGNN.nn.pooling
    +
     
    +
    Sort() - Constructor for class mklab.JGNN.core.util.Sort
    +
     
    +
    Sort(int) - Constructor for class mklab.JGNN.nn.pooling.Sort
    +
     
    +
    sortedIndexes(double[]) - Static method in class mklab.JGNN.core.util.Sort
    +
     
    +
    sortedIndexes(ArrayList<Double>) - Static method in class mklab.JGNN.core.util.Sort
    +
     
    +
    SparseMatrix - Class in mklab.JGNN.core.matrix
    +
    +
    A sparse Matrix that allocates memory only for non-zero elements.
    +
    +
    SparseMatrix(long, long) - Constructor for class mklab.JGNN.core.matrix.SparseMatrix
    +
    +
    Generates a sparse matrix with the designated number of rows and columns.
    +
    +
    SparseSymmetric - Class in mklab.JGNN.core.matrix
    +
    +
    Deprecated. +
    Under development.
    +
    +
    +
    SparseSymmetric(long, long) - Constructor for class mklab.JGNN.core.matrix.SparseSymmetric
    +
    +
    Deprecated.
    +
    Generates a symmetric matrix with the designated number of rows and columns.
    +
    +
    SparseTensor - Class in mklab.JGNN.core.tensor
    +
    +
    This class provides a sparse Tensor with many zero elements.
    +
    +
    SparseTensor() - Constructor for class mklab.JGNN.core.tensor.SparseTensor
    +
     
    +
    SparseTensor(long) - Constructor for class mklab.JGNN.core.tensor.SparseTensor
    +
     
    +
    SPECIES - Static variable in class mklab.JGNN.core.tensor.VectorizedTensor
    +
     
    +
    sqrt() - Method in class mklab.JGNN.core.tensor.DenseTensor
    +
     
    +
    sqrt() - Method in class mklab.JGNN.core.Tensor
    +
    +
    Computes the square root of tensor elements.
    +
    +
    startTape() - Method in class mklab.JGNN.nn.operations.LSTM
    +
     
    +
    submit(Runnable) - Method in class mklab.JGNN.core.ThreadPool
    +
    +
    Submits a runnable to be executed at some future point by a thread, + for example via ThreadPool.getInstance().submit(new Runnable(){public void run(){...}});.
    +
    +
    subtract(Tensor) - Method in class mklab.JGNN.core.tensor.DenseTensor
    +
     
    +
    subtract(Tensor) - Method in class mklab.JGNN.core.Tensor
    +
     
    +
    subtract(Tensor) - Method in class mklab.JGNN.core.tensor.VectorizedTensor
    +
     
    +
    sum() - Method in class mklab.JGNN.core.Tensor
    +
     
    +
    Sum - Class in mklab.JGNN.nn.pooling
    +
    +
    Implements a NNOperation that performs row-wise or column-wise + sum reduction on vector tensors or matrices.
    +
    +
    Sum() - Constructor for class mklab.JGNN.nn.pooling.Sum
    +
     
    +
    Sum(boolean) - Constructor for class mklab.JGNN.nn.pooling.Sum
    +
     
    +
    symmetricNormalization() - Method in class mklab.JGNN.core.Matrix
    +
    +
    Creates a copy of the Matrix that holds its symmetrically normalized version.
    +
    +
    +A B C D E F G H I K L M N O P R S T U V W X Z 
    All Classes and Interfaces|All Packages
    +
    +
    + + diff --git a/docs/javadoc/index-files/index-18.html b/docs/javadoc/index-files/index-18.html new file mode 100644 index 00000000..00ded7d4 --- /dev/null +++ b/docs/javadoc/index-files/index-18.html @@ -0,0 +1,227 @@ + + + + +T-Index + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Index

    +
    +A B C D E F G H I K L M N O P R S T U V W X Z 
    All Classes and Interfaces|All Packages +

    T

    +
    +
    tanh(double) - Static method in interface mklab.JGNN.core.util.Loss
    +
    +
    The tanh activation (exp(x)-exp(-x))/(exp(x)+exp(-x)).
    +
    +
    tanh(Tensor) - Static method in interface mklab.JGNN.core.util.Loss
    +
    +
    Applies Loss.tanh(double) element-by-element.
    +
    +
    Tanh - Class in mklab.JGNN.nn.activations
    +
    +
    Implements a NNOperation that performs a tanh transformation of its single input.
    +
    +
    Tanh() - Constructor for class mklab.JGNN.nn.activations.Tanh
    +
     
    +
    tanhDerivative(double) - Static method in interface mklab.JGNN.core.util.Loss
    +
    +
    The derivative of the Loss.tanh(double) function.
    +
    +
    tanhDerivative(Tensor) - Static method in interface mklab.JGNN.core.util.Loss
    +
    +
    Applies the Loss.tanhDerivative(double) function.
    +
    +
    tensor - Variable in class mklab.JGNN.core.matrix.VectorizedMatrix
    +
     
    +
    Tensor - Class in mklab.JGNN.core
    +
    +
    This class provides a native Java implementation of Tensor functionalities.
    +
    +
    Tensor(long) - Constructor for class mklab.JGNN.core.Tensor
    +
    +
    Constructor that creates a tensor of zeros given its number of elements.
    +
    +
    ThreadPool - Class in mklab.JGNN.core
    +
    +
    This class provides thread execution pool utilities while keeping track of thread + identifiers for use by thread-specific NNOperation.
    +
    +
    To - Class in mklab.JGNN.nn.operations
    +
    +
    Implements a NNOperation that lists the second element of the 2D matrix element iterator.
    +
    +
    To() - Constructor for class mklab.JGNN.nn.operations.To
    +
     
    +
    toArray() - Method in class mklab.JGNN.core.Tensor
    +
    +
    Retrieves a representation of the Tensor as an array of doubles.
    +
    +
    toDense() - Method in class mklab.JGNN.core.Matrix
    +
    +
    Creates a copy of the matrix organized as a dense matrix.
    +
    +
    toDouble() - Method in class mklab.JGNN.core.Tensor
    +
    +
    Converts a tensor of Tensor.size()==1 to double.
    +
    +
    toNonZeroString() - Method in class mklab.JGNN.core.Matrix
    +
     
    +
    toProbability() - Method in class mklab.JGNN.core.Tensor
    +
     
    +
    toSparse() - Method in class mklab.JGNN.core.Matrix
    +
    +
    Creates a copy of the matrix organized as a sparse matrix.
    +
    +
    toString() - Method in class mklab.JGNN.core.Matrix
    +
     
    +
    toString() - Method in class mklab.JGNN.core.Tensor
    +
    +
    A string serialization of the tensor that can be used by the constructor DenseTensor(String) to create an identical copy.
    +
    +
    toString() - Method in class mklab.JGNN.core.util.FastEntry
    +
     
    +
    train(Tensor[], Tensor) - Method in class mklab.JGNN.nn.operations.LSTM
    +
     
    +
    train(Loss, Optimizer, List<Tensor>, List<Tensor>) - Method in class mklab.JGNN.nn.Model
    +
    +
    Performs the training of #train(Optimizer, List, List, List) for unit weights.
    +
    +
    train(Loss, Optimizer, List<Tensor>, List<Tensor>, List<Tensor>) - Method in class mklab.JGNN.nn.Model
    +
    +
    Performs one parameter adjustment step (e.g.
    +
    +
    train(Model, Matrix, Matrix, Slice, Slice) - Method in class mklab.JGNN.nn.ModelTraining
    +
    +
    Trains a Model instance based on current settings.
    +
    +
    train(ModelTraining, Matrix, Matrix, Slice, Slice) - Method in class mklab.JGNN.nn.Model
    +
    +
    Trains the model by appropriately calling + ModelTraining.train(Model, Matrix, Matrix, Slice, Slice) + with the provided parameters.
    +
    +
    trainOnOutputError(Tensor[], Tensor) - Method in class mklab.JGNN.nn.operations.LSTM
    +
     
    +
    trainTowardsZero(Optimizer, List<Tensor>) - Method in class mklab.JGNN.nn.Model
    +
    +
    Is equivalent to calling Model.train(Loss, Optimizer, List, List) + for new Zero() loss.
    +
    +
    transform(Tensor) - Method in class mklab.JGNN.core.Matrix
    +
    +
    Performs the linear algebra transformation A*x, where A is this matrix and x a vector.
    +
    +
    Transpose - Class in mklab.JGNN.nn.operations
    +
    +
    Implements a NNOperation that performs matrix transposition.
    +
    +
    Transpose() - Constructor for class mklab.JGNN.nn.operations.Transpose
    +
     
    +
    transposed() - Method in class mklab.JGNN.core.Matrix
    +
    +
    Creates a transposed copy of the matrix.
    +
    +
    TransposedMatrix - Class in mklab.JGNN.core.matrix
    +
    +
    Generates a transposed version of a base matrix, with which it shares elements.
    +
    +
    TransposedMatrix(Matrix) - Constructor for class mklab.JGNN.core.matrix.TransposedMatrix
    +
     
    +
    traverseNonZeroElements() - Method in class mklab.JGNN.core.empy.EmptyMatrix
    +
     
    +
    traverseNonZeroElements() - Method in class mklab.JGNN.core.empy.EmptyTensor
    +
     
    +
    traverseNonZeroElements() - Method in class mklab.JGNN.core.matrix.AccessCol
    +
     
    +
    traverseNonZeroElements() - Method in class mklab.JGNN.core.matrix.AccessRow
    +
     
    +
    traverseNonZeroElements() - Method in class mklab.JGNN.core.matrix.ColumnRepetition
    +
     
    +
    traverseNonZeroElements() - Method in class mklab.JGNN.core.matrix.DenseMatrix
    +
     
    +
    traverseNonZeroElements() - Method in class mklab.JGNN.core.matrix.Diagonal
    +
     
    +
    traverseNonZeroElements() - Method in class mklab.JGNN.core.matrix.RepeatMatrix
    +
     
    +
    traverseNonZeroElements() - Method in class mklab.JGNN.core.matrix.RowRepetition
    +
     
    +
    traverseNonZeroElements() - Method in class mklab.JGNN.core.matrix.SparseMatrix
    +
     
    +
    traverseNonZeroElements() - Method in class mklab.JGNN.core.matrix.SparseSymmetric
    +
    +
    Deprecated.
    +
    traverseNonZeroElements() - Method in class mklab.JGNN.core.matrix.TransposedMatrix
    +
     
    +
    traverseNonZeroElements() - Method in class mklab.JGNN.core.matrix.VectorizedMatrix
    +
     
    +
    traverseNonZeroElements() - Method in class mklab.JGNN.core.matrix.WrapCols
    +
     
    +
    traverseNonZeroElements() - Method in class mklab.JGNN.core.matrix.WrapRows
    +
     
    +
    traverseNonZeroElements() - Method in class mklab.JGNN.core.tensor.AccessSubtensor
    +
     
    +
    traverseNonZeroElements() - Method in class mklab.JGNN.core.tensor.DenseTensor
    +
     
    +
    traverseNonZeroElements() - Method in class mklab.JGNN.core.tensor.RepeatTensor
    +
     
    +
    traverseNonZeroElements() - Method in class mklab.JGNN.core.tensor.SparseTensor
    +
     
    +
    traverseNonZeroElements() - Method in class mklab.JGNN.core.Tensor
    +
    +
    Retrieves positions within the tensor that may hold non-zero elements.
    +
    +
    traverseNonZeroElements() - Method in class mklab.JGNN.core.tensor.VectorizedTensor
    +
     
    +
    +A B C D E F G H I K L M N O P R S T U V W X Z 
    All Classes and Interfaces|All Packages
    +
    +
    + + diff --git a/docs/javadoc/index-files/index-19.html b/docs/javadoc/index-files/index-19.html new file mode 100644 index 00000000..0a491fef --- /dev/null +++ b/docs/javadoc/index-files/index-19.html @@ -0,0 +1,96 @@ + + + + +U-Index + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Index

    +
    +A B C D E F G H I K L M N O P R S T U V W X Z 
    All Classes and Interfaces|All Packages +

    U

    +
    +
    Uniform - Class in mklab.JGNN.core.distribution
    +
    +
    Implements a Uniform Distribution of given bounds.
    +
    +
    Uniform() - Constructor for class mklab.JGNN.core.distribution.Uniform
    +
    +
    Instantiates a uniform distribution that samples values from the range [0,1].
    +
    +
    Uniform(double, double) - Constructor for class mklab.JGNN.core.distribution.Uniform
    +
    +
    Instantiates a uniform distribution that samples values from the given range [from, to].
    +
    +
    unregister(double[]) - Method in class mklab.JGNN.core.Memory.Scope
    +
     
    +
    update(Tensor, Tensor) - Method in interface mklab.JGNN.nn.Optimizer
    +
    +
    In-place updates the value of a tensor given its gradient.
    +
    +
    update(Tensor, Tensor) - Method in class mklab.JGNN.nn.optimizers.Adam
    +
     
    +
    update(Tensor, Tensor) - Method in class mklab.JGNN.nn.optimizers.BatchOptimizer
    +
     
    +
    update(Tensor, Tensor) - Method in class mklab.JGNN.nn.optimizers.GradientDescent
    +
     
    +
    update(Tensor, Tensor) - Method in class mklab.JGNN.nn.optimizers.Regularization
    +
     
    +
    updateAll() - Method in class mklab.JGNN.nn.optimizers.BatchOptimizer
    +
    +
    Updates all tracked variables with pending batch calculations using the + wrapped optimizer.
    +
    +
    updateTape(Tensor, LSTM.LSTMState, Tensor) - Method in class mklab.JGNN.nn.operations.LSTM
    +
     
    +
    +A B C D E F G H I K L M N O P R S T U V W X Z 
    All Classes and Interfaces|All Packages
    +
    +
    + + diff --git a/docs/javadoc/index-files/index-2.html b/docs/javadoc/index-files/index-2.html new file mode 100644 index 00000000..371c047b --- /dev/null +++ b/docs/javadoc/index-files/index-2.html @@ -0,0 +1,93 @@ + + + + +B-Index + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Index

    +
    +A B C D E F G H I K L M N O P R S T U V W X Z 
    All Classes and Interfaces|All Packages +

    B

    +
    +
    BatchOptimizer - Class in mklab.JGNN.nn.optimizers
    +
    +
    Wraps an Optimizer by accumulating derivatives and calling + Optimizer.update(Tensor, Tensor) with the average derivative + after a fixed number of accumulations.
    +
    +
    BatchOptimizer(Optimizer) - Constructor for class mklab.JGNN.nn.optimizers.BatchOptimizer
    +
    +
    Initializes a BatchOptimizer that accumulates derivatives and updates them + only when BatchOptimizer.updateAll() is called.
    +
    +
    BatchOptimizer(Optimizer, long) - Constructor for class mklab.JGNN.nn.optimizers.BatchOptimizer
    +
    +
    Initializes a BatchOptimizer that accumulates derivatives and updates them + with BatchOptimizer.updateAll() after every fixed number of updates.
    +
    +
    BinaryCrossEntropy - Class in mklab.JGNN.nn.loss
    +
    +
    Implements a binary cross-entropy Loss.
    + For more than one output dimensions use CategoricalCrossEntropy
    +
    +
    BinaryCrossEntropy() - Constructor for class mklab.JGNN.nn.loss.BinaryCrossEntropy
    +
    +
    Initializes binary cross entropy with 1.E-12 epsilon value.
    +
    +
    BinaryCrossEntropy(double) - Constructor for class mklab.JGNN.nn.loss.BinaryCrossEntropy
    +
    +
    Initializes binary cross entropy with an epsilon value + to bound its outputs in the range [log(epsilon), -log(epsilon)] instead of (-inf, inf).
    +
    +
    +A B C D E F G H I K L M N O P R S T U V W X Z 
    All Classes and Interfaces|All Packages
    +
    +
    + + diff --git a/docs/javadoc/index-files/index-20.html b/docs/javadoc/index-files/index-20.html new file mode 100644 index 00000000..70274384 --- /dev/null +++ b/docs/javadoc/index-files/index-20.html @@ -0,0 +1,135 @@ + + + + +V-Index + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Index

    +
    +A B C D E F G H I K L M N O P R S T U V W X Z 
    All Classes and Interfaces|All Packages +

    V

    +
    +
    values - Variable in class mklab.JGNN.core.tensor.DenseTensor
    +
     
    +
    values - Variable in class mklab.JGNN.core.tensor.VectorizedTensor
    +
     
    +
    var(String) - Method in class mklab.JGNN.adhoc.ModelBuilder
    +
    +
    Declares a component with the given name to be used as an input + of the managed model.
    +
    +
    var(String) - Method in class mklab.JGNN.adhoc.parsers.LayeredBuilder
    +
     
    +
    var(String) - Method in class mklab.JGNN.adhoc.parsers.Neuralang
    +
     
    +
    Variable - Class in mklab.JGNN.nn.inputs
    +
    +
    Implements a NNOperation that represents Model inputs.
    +
    +
    Variable() - Constructor for class mklab.JGNN.nn.inputs.Variable
    +
     
    +
    VariancePreservingInitializer - Class in mklab.JGNN.nn.initializers
    +
    +
    This class describes a broad class of Initializer strategies, in which + dense neural layer initialization is controlled so that variance is mostly preserved from + inputs to outputs to avoid vanishing or exploding gradients in the first training + runs.
    +
    +
    VariancePreservingInitializer() - Constructor for class mklab.JGNN.nn.initializers.VariancePreservingInitializer
    +
     
    +
    vectorization - Static variable in class mklab.JGNN.core.Tensor
    +
     
    +
    VectorizedMatrix - Class in mklab.JGNN.core.matrix
    +
    +
    Implements a dense Matrix where all elements are stored in memory.
    +
    +
    VectorizedMatrix(long, long) - Constructor for class mklab.JGNN.core.matrix.VectorizedMatrix
    +
    +
    Generates a dense matrix with the designated number of rows and columns.
    +
    +
    VectorizedTensor - Class in mklab.JGNN.core.tensor
    +
    +
    This class provides a dense Tensor that wraps an array of doubles.
    +
    +
    VectorizedTensor() - Constructor for class mklab.JGNN.core.tensor.VectorizedTensor
    +
     
    +
    VectorizedTensor(double...) - Constructor for class mklab.JGNN.core.tensor.VectorizedTensor
    +
     
    +
    VectorizedTensor(long) - Constructor for class mklab.JGNN.core.tensor.VectorizedTensor
    +
    +
    Constructs a dense tensor holding zero values.
    +
    +
    VectorizedTensor(String) - Constructor for class mklab.JGNN.core.tensor.VectorizedTensor
    +
    +
    Reconstructs a serialized Tensor (i.e.
    +
    +
    VectorizedTensor(Iterator<? extends Number>) - Constructor for class mklab.JGNN.core.tensor.VectorizedTensor
    +
    +
Constructs a dense tensor from an iterator that outputs its values.
    +
    +
    VerboseLoss - Class in mklab.JGNN.nn.loss.report
    +
    +
    Implements a Loss that wraps other losses and outputs their value during training to an output stream + (to System.out by default).
    +
    +
    VerboseLoss(Loss) - Constructor for class mklab.JGNN.nn.loss.report.VerboseLoss
    +
    +
    Instantiates a VerboseLoss given a base loss to be wrapped.
    +
    +
    view() - Method in class mklab.JGNN.nn.NNOperation
    +
    +
    Retrieves a string that views internal data being processed by the current + thread, including gradients.
    +
    +
    +A B C D E F G H I K L M N O P R S T U V W X Z 
    All Classes and Interfaces|All Packages
    +
    +
    + + diff --git a/docs/javadoc/index-files/index-21.html b/docs/javadoc/index-files/index-21.html new file mode 100644 index 00000000..157ef613 --- /dev/null +++ b/docs/javadoc/index-files/index-21.html @@ -0,0 +1,83 @@ + + + + +W-Index + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Index

    +
    +A B C D E F G H I K L M N O P R S T U V W X Z 
    All Classes and Interfaces|All Packages +

    W

    +
    +
    waitForConclusion() - Method in class mklab.JGNN.core.ThreadPool
    +
    +
    Waits until all threads in the pool have finished.
    +
    +
    WrapCols - Class in mklab.JGNN.core.matrix
    +
    +
    Wraps a list of tensors into a matrix with the tensors as columns.
    +
    +
    WrapCols(List<Tensor>) - Constructor for class mklab.JGNN.core.matrix.WrapCols
    +
     
    +
    WrapCols(Tensor...) - Constructor for class mklab.JGNN.core.matrix.WrapCols
    +
     
    +
    WrapRows - Class in mklab.JGNN.core.matrix
    +
    +
    Wraps a list of tensors into a matrix with the tensors as rows.
    +
    +
    WrapRows(List<Tensor>) - Constructor for class mklab.JGNN.core.matrix.WrapRows
    +
     
    +
    WrapRows(Tensor...) - Constructor for class mklab.JGNN.core.matrix.WrapRows
    +
     
    +
    +A B C D E F G H I K L M N O P R S T U V W X Z 
    All Classes and Interfaces|All Packages
    +
    +
    + + diff --git a/docs/javadoc/index-files/index-22.html b/docs/javadoc/index-files/index-22.html new file mode 100644 index 00000000..2dbc7361 --- /dev/null +++ b/docs/javadoc/index-files/index-22.html @@ -0,0 +1,75 @@ + + + + +X-Index + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Index

    +
    +A B C D E F G H I K L M N O P R S T U V W X Z 
    All Classes and Interfaces|All Packages +

    X

    +
    +
    XavierNormal - Class in mklab.JGNN.nn.initializers
    +
    + +
    +
    XavierNormal() - Constructor for class mklab.JGNN.nn.initializers.XavierNormal
    +
     
    +
    XavierUniform - Class in mklab.JGNN.nn.initializers
    +
    + +
    +
    XavierUniform() - Constructor for class mklab.JGNN.nn.initializers.XavierUniform
    +
     
    +
    +A B C D E F G H I K L M N O P R S T U V W X Z 
    All Classes and Interfaces|All Packages
    +
    +
    + + diff --git a/docs/javadoc/index-files/index-23.html b/docs/javadoc/index-files/index-23.html new file mode 100644 index 00000000..4daefc91 --- /dev/null +++ b/docs/javadoc/index-files/index-23.html @@ -0,0 +1,136 @@ + + + + +Z-Index + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Index

    +
    +A B C D E F G H I K L M N O P R S T U V W X Z 
    All Classes and Interfaces|All Packages +

    Z

    +
    +
    zeroCopy() - Method in class mklab.JGNN.core.Matrix
    +
    +
Creates a Matrix with the same class and dimensions and all elements set to zero.
    +
    +
    zeroCopy() - Method in class mklab.JGNN.core.Tensor
    +
    +
Creates a tensor of the same class with the same size and all elements set to zero.
    +
    +
    zeroCopy(long) - Method in class mklab.JGNN.core.empy.EmptyTensor
    +
     
    +
    zeroCopy(long) - Method in class mklab.JGNN.core.matrix.AccessCol
    +
     
    +
    zeroCopy(long) - Method in class mklab.JGNN.core.matrix.AccessRow
    +
     
    +
    zeroCopy(long) - Method in class mklab.JGNN.core.Matrix
    +
    +
Creates a Matrix with the same class and dimensions and all elements set to zero.
    +
    +
    zeroCopy(long) - Method in class mklab.JGNN.core.tensor.AccessSubtensor
    +
     
    +
    zeroCopy(long) - Method in class mklab.JGNN.core.tensor.DenseTensor
    +
     
    +
    zeroCopy(long) - Method in class mklab.JGNN.core.tensor.RepeatTensor
    +
     
    +
    zeroCopy(long) - Method in class mklab.JGNN.core.tensor.SparseTensor
    +
     
    +
    zeroCopy(long) - Method in class mklab.JGNN.core.tensor.VectorizedTensor
    +
     
    +
    zeroCopy(long) - Method in class mklab.JGNN.core.Tensor
    +
    +
Creates a tensor of the same class with a given size and all elements set to zero.
    +
    +
    zeroCopy(long, long) - Method in class mklab.JGNN.core.empy.EmptyMatrix
    +
     
    +
    zeroCopy(long, long) - Method in class mklab.JGNN.core.matrix.ColumnRepetition
    +
     
    +
    zeroCopy(long, long) - Method in class mklab.JGNN.core.matrix.DenseMatrix
    +
     
    +
    zeroCopy(long, long) - Method in class mklab.JGNN.core.matrix.Diagonal
    +
     
    +
    zeroCopy(long, long) - Method in class mklab.JGNN.core.matrix.RepeatMatrix
    +
     
    +
    zeroCopy(long, long) - Method in class mklab.JGNN.core.matrix.RowRepetition
    +
     
    +
    zeroCopy(long, long) - Method in class mklab.JGNN.core.matrix.SparseMatrix
    +
     
    +
    zeroCopy(long, long) - Method in class mklab.JGNN.core.matrix.SparseSymmetric
    +
    +
    Deprecated.
    +
    zeroCopy(long, long) - Method in class mklab.JGNN.core.matrix.TransposedMatrix
    +
     
    +
    zeroCopy(long, long) - Method in class mklab.JGNN.core.matrix.VectorizedMatrix
    +
     
    +
    zeroCopy(long, long) - Method in class mklab.JGNN.core.matrix.WrapCols
    +
     
    +
    zeroCopy(long, long) - Method in class mklab.JGNN.core.matrix.WrapRows
    +
     
    +
    zeroCopy(long, long) - Method in class mklab.JGNN.core.Matrix
    +
    +
Creates a matrix of the same class and all elements set to zero, but with + a given number of rows and columns.
    +
    +
    zeroCopy(Tensor) - Method in class mklab.JGNN.core.Matrix
    +
    +
    Creates a tensor of the same class and all elements set to zero, + but size and dimension names are obtained from a prototype tensor.
    +
    +
    zeroCopy(Tensor) - Method in class mklab.JGNN.core.Tensor
    +
    +
    Creates a tensor of the same class and all elements set to zero, + but size and dimension names are obtained from a prototype tensor.
    +
    +
    +A B C D E F G H I K L M N O P R S T U V W X Z 
    All Classes and Interfaces|All Packages
    +
    +
    + + diff --git a/docs/javadoc/index-files/index-3.html b/docs/javadoc/index-files/index-3.html new file mode 100644 index 00000000..84a5bc9f --- /dev/null +++ b/docs/javadoc/index-files/index-3.html @@ -0,0 +1,216 @@ + + + + +C-Index + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Index

    +
    +A B C D E F G H I K L M N O P R S T U V W X Z 
    All Classes and Interfaces|All Packages +

    C

    +
    +
    cast(Class<Type>) - Method in class mklab.JGNN.core.Tensor
    +
    +
    Performs the equivalent of Java's typecasting that fits + in functional interfaces.
    +
    +
    CategoricalCrossEntropy - Class in mklab.JGNN.nn.loss
    +
    +
    Implements a categorical cross-entropy Loss.
    + For binary classification of one output use BinaryCrossEntropy.
    +
    +
    CategoricalCrossEntropy() - Constructor for class mklab.JGNN.nn.loss.CategoricalCrossEntropy
    +
    +
    Initializes categorical cross entropy with 1.E-12 epsilon value.
    +
    +
    CategoricalCrossEntropy(double) - Constructor for class mklab.JGNN.nn.loss.CategoricalCrossEntropy
    +
    +
Initializes categorical cross entropy with an epsilon value + to bound its outputs in the range [log(epsilon), -log(epsilon)] instead of (-inf, inf).
    +
    +
    Citeseer - Class in mklab.JGNN.adhoc.datasets
    +
    +
    Downloads and constructs the Citeseer node classification Dataset.
    +
    +
    Citeseer() - Constructor for class mklab.JGNN.adhoc.datasets.Citeseer
    +
     
    +
    classes() - Method in class mklab.JGNN.adhoc.Dataset
    +
    +
Retrieves a converter that maps class names to label dimensions.
    +
    +
    classify() - Method in class mklab.JGNN.adhoc.parsers.FastBuilder
    +
    +
Adds a classification layer that gathers the number of input nodes + and applies softmax on all of them.
    +
    +
    clearPrediction() - Method in class mklab.JGNN.nn.NNOperation
    +
     
    +
    ColumnRepetition - Class in mklab.JGNN.core.matrix
    +
    +
    Defines a matrix whose columns are all a copy of a Tensor.
    +
    +
    ColumnRepetition(long, Tensor) - Constructor for class mklab.JGNN.core.matrix.ColumnRepetition
    +
    +
    Instantiates a matrix repeating a tensor to be treated as a column.
    +
    +
    Complement - Class in mklab.JGNN.nn.operations
    +
    +
    Implements a NNOperation that performs the operation 1-x for its simple input x.
    +
    +
    Complement() - Constructor for class mklab.JGNN.nn.operations.Complement
    +
     
    +
    concat(int) - Method in class mklab.JGNN.adhoc.parsers.FastBuilder
    +
    +
    Concatenates horizontally the output of a number of given layers, + starting from the last one and going backwards.
    +
    +
    concat(int) - Method in class mklab.JGNN.adhoc.parsers.LayeredBuilder
    +
    +
    Concatenates horizontally the output of a number of given layers, + starting from the last one and going backwards.
    +
    +
    Concat - Class in mklab.JGNN.nn.operations
    +
    +
    Implements a NNOperation that concatenates its two matrix inputs.
    +
    +
    Concat() - Constructor for class mklab.JGNN.nn.operations.Concat
    +
     
    +
    config(String, double) - Method in class mklab.JGNN.adhoc.ModelBuilder
    +
    +
    Declares a configuration hyperparameter, which can be used to declare + matrix and vector parameters during ModelBuilder.operation(String) expressions.
    +
    +
    config(String, double) - Method in class mklab.JGNN.adhoc.parsers.FastBuilder
    +
     
    +
    config(String, double) - Method in class mklab.JGNN.adhoc.parsers.LayeredBuilder
    +
     
    +
    config(String, double) - Method in class mklab.JGNN.adhoc.parsers.Neuralang
    +
     
    +
    config(String, String) - Method in class mklab.JGNN.adhoc.ModelBuilder
    +
     
    +
    config(String, String) - Method in class mklab.JGNN.adhoc.parsers.FastBuilder
    +
     
    +
    configFrom(ModelBuilder) - Method in class mklab.JGNN.nn.ModelTraining
    +
     
    +
    constant(String, double) - Method in class mklab.JGNN.adhoc.ModelBuilder
    +
    +
    Declares a non-learnable constant component with the given name.
    +
    +
    constant(String, double) - Method in class mklab.JGNN.adhoc.parsers.FastBuilder
    +
     
    +
    constant(String, double) - Method in class mklab.JGNN.adhoc.parsers.LayeredBuilder
    +
     
    +
    constant(String, double) - Method in class mklab.JGNN.adhoc.parsers.Neuralang
    +
     
    +
    constant(String, Tensor) - Method in class mklab.JGNN.adhoc.ModelBuilder
    +
    +
    Declares a non-learnable constant component with the given name.
    +
    +
    constant(String, Tensor) - Method in class mklab.JGNN.adhoc.parsers.FastBuilder
    +
     
    +
    constant(String, Tensor) - Method in class mklab.JGNN.adhoc.parsers.LayeredBuilder
    +
     
    +
    constant(String, Tensor) - Method in class mklab.JGNN.adhoc.parsers.Neuralang
    +
     
    +
    Constant - Class in mklab.JGNN.nn.inputs
    +
    +
    Implements a NNOperation that holds a constant tensor.
    +
    +
    Constant(Tensor) - Constructor for class mklab.JGNN.nn.inputs.Constant
    +
    +
    Creates a constant holding a tensor.
    +
    +
    contains(Object) - Method in class mklab.JGNN.adhoc.IdConverter
    +
    +
    Checks whether the object has been registered with IdConverter.getOrCreateId(Object).
    +
    +
    copy() - Method in class mklab.JGNN.core.Tensor
    +
    +
    Creates a Tensor.zeroCopy() and transfers to it all potentially non-zero element values.
    +
    +
    Cora - Class in mklab.JGNN.adhoc.datasets
    +
    +
    Downloads and constructs the Cora node classification Dataset.
    +
    +
    Cora() - Constructor for class mklab.JGNN.adhoc.datasets.Cora
    +
     
    +
    createFirstState() - Method in class mklab.JGNN.nn.operations.LSTM
    +
     
    +
    createForwardValidity(List<Tensor>) - Method in class mklab.JGNN.adhoc.ModelBuilder
    +
    +
    Asserts that all components parsed into a call graph with + ModelBuilder.operation(String) are eventually used by at least one ModelBuilder.out(String) + component.
    +
    +
    crossEntropy(double, double) - Static method in interface mklab.JGNN.core.util.Loss
    +
    +
    A cross entropy loss for one sample computes as -label*log(output) -(1-label)*log(1-output).
    +
    +
    crossEntropyDerivative(double, double) - Static method in interface mklab.JGNN.core.util.Loss
    +
    +
    The derivative of the Loss.crossEntropy(double, double) loss.
    +
    +
    crossEntropyDerivativeCategorical(double, double) - Static method in interface mklab.JGNN.core.util.Loss
    +
    +
The derivative of the crossEntropyCategorical(double, double) loss.
    +
    +
    crossEntropySigmoidDerivative(double, double) - Static method in interface mklab.JGNN.core.util.Loss
    +
    +
    The derivative of crossEntropy(sigmoid(x), label) with respect to x.
    +
    +
    crossEntropyTanhDerivative(double, double) - Static method in interface mklab.JGNN.core.util.Loss
    +
    +
    The derivative of crossEntropy(tanh(x), label) with respect to x.
    +
    +
    +A B C D E F G H I K L M N O P R S T U V W X Z 
    All Classes and Interfaces|All Packages
    +
    +
    + + diff --git a/docs/javadoc/index-files/index-4.html b/docs/javadoc/index-files/index-4.html new file mode 100644 index 00000000..ab6d3f5d --- /dev/null +++ b/docs/javadoc/index-files/index-4.html @@ -0,0 +1,166 @@ + + + + +D-Index + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Index

    +
    +A B C D E F G H I K L M N O P R S T U V W X Z 
    All Classes and Interfaces|All Packages +

    D

    +
    +
    Dataset - Class in mklab.JGNN.adhoc
    +
    +
    This class provides the backbone with which to define datasets.
    +
    +
    Dataset() - Constructor for class mklab.JGNN.adhoc.Dataset
    +
     
    +
    debugging - Variable in class mklab.JGNN.nn.NNOperation
    +
     
    +
    DenseMatrix - Class in mklab.JGNN.core.matrix
    +
    +
    Implements a dense Matrix where all elements are stored in memory.
    +
    +
    DenseMatrix(long, long) - Constructor for class mklab.JGNN.core.matrix.DenseMatrix
    +
    +
    Generates a dense matrix with the designated number of rows and columns.
    +
    +
    DenseTensor - Class in mklab.JGNN.core.tensor
    +
    +
    This class provides a dense Tensor that wraps an array of doubles.
    +
    +
    DenseTensor() - Constructor for class mklab.JGNN.core.tensor.DenseTensor
    +
     
    +
    DenseTensor(double...) - Constructor for class mklab.JGNN.core.tensor.DenseTensor
    +
     
    +
    DenseTensor(long) - Constructor for class mklab.JGNN.core.tensor.DenseTensor
    +
    +
    Constructs a dense tensor holding zero values.
    +
    +
    DenseTensor(String) - Constructor for class mklab.JGNN.core.tensor.DenseTensor
    +
    +
    Reconstructs a serialized Tensor (i.e.
    +
    +
    DenseTensor(Iterator<? extends Number>) - Constructor for class mklab.JGNN.core.tensor.DenseTensor
    +
    +
Constructs a dense tensor from an iterator + that outputs its values.
    +
    +
    density() - Method in class mklab.JGNN.core.Tensor
    +
    +
Provides the memory allocation density of Tensor.getNonZeroElements() + compared to the size of the tensor.
    +
    +
    derivative(Tensor, Tensor) - Method in class mklab.JGNN.nn.loss.Accuracy
    +
     
    +
    derivative(Tensor, Tensor) - Method in class mklab.JGNN.nn.loss.BinaryCrossEntropy
    +
     
    +
    derivative(Tensor, Tensor) - Method in class mklab.JGNN.nn.loss.CategoricalCrossEntropy
    +
     
    +
    derivative(Tensor, Tensor) - Method in class mklab.JGNN.nn.Loss
    +
    +
    Provides the derivative of a loss function at its evaluation point.
    +
    +
    derivative(Tensor, Tensor) - Method in class mklab.JGNN.nn.loss.report.VerboseLoss
    +
     
    +
    describe() - Method in class mklab.JGNN.adhoc.ModelBuilder
    +
    +
Creates a description of the built model's internal execution graph.
    +
    +
    describe() - Method in class mklab.JGNN.core.Matrix
    +
     
    +
    describe() - Method in class mklab.JGNN.core.matrix.SparseMatrix
    +
     
    +
    describe() - Method in class mklab.JGNN.core.matrix.SparseSymmetric
    +
    +
    Deprecated.
    +
    describe() - Method in class mklab.JGNN.core.matrix.TransposedMatrix
    +
     
    +
    describe() - Method in class mklab.JGNN.core.Tensor
    +
    +
    Describes the type, size and other characteristics of the tensor.
    +
    +
    describe() - Method in class mklab.JGNN.nn.NNOperation
    +
    +
Retrieves a concise description of the operation that shows metadata and + potential data descriptions processed by the current thread.
    +
    +
    Diagonal - Class in mklab.JGNN.core.matrix
    +
    +
Implements a square matrix whose diagonal elements are determined by the corresponding values of + an underlying tensor and off-diagonal elements are zero.
    +
    +
    Distribution - Interface in mklab.JGNN.core
    +
    +
    This interface abstracts a probability distribution + that can be passed to Tensor.setToRandom(Distribution) + for random tensor initialization.
    +
    +
    dot(Tensor) - Method in class mklab.JGNN.core.Tensor
    +
    +
    Performs the dot product between this and another tensor.
    +
    +
    dot(Tensor, Tensor) - Method in class mklab.JGNN.core.Tensor
    +
    +
    Performs the triple dot product between this and two other tensors.
    +
    +
    Dropout - Class in mklab.JGNN.nn.operations
    +
    +
Implements a NNOperation that performs the dropout operation, randomly zeroing elements + of its input during training.
    +
    +
    Dropout() - Constructor for class mklab.JGNN.nn.operations.Dropout
    +
     
    +
    +A B C D E F G H I K L M N O P R S T U V W X Z 
    All Classes and Interfaces|All Packages
    +
    +
    + + diff --git a/docs/javadoc/index-files/index-5.html b/docs/javadoc/index-files/index-5.html new file mode 100644 index 00000000..7464c91f --- /dev/null +++ b/docs/javadoc/index-files/index-5.html @@ -0,0 +1,134 @@ + + + + +E-Index + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Index

    +
    +A B C D E F G H I K L M N O P R S T U V W X Z 
    All Classes and Interfaces|All Packages +

    E

    +
    +
    EmptyMatrix - Class in mklab.JGNN.core.empy
    +
     
    +
    EmptyMatrix(long, long) - Constructor for class mklab.JGNN.core.empy.EmptyMatrix
    +
     
    +
    EmptyTensor - Class in mklab.JGNN.core.empy
    +
     
    +
    EmptyTensor() - Constructor for class mklab.JGNN.core.empy.EmptyTensor
    +
     
    +
    EmptyTensor(long) - Constructor for class mklab.JGNN.core.empy.EmptyTensor
    +
     
    +
    endTape() - Method in class mklab.JGNN.nn.operations.LSTM
    +
     
    +
    enter() - Method in class mklab.JGNN.core.Memory.Scope
    +
     
    +
    estimateNumNonZeroElements() - Method in class mklab.JGNN.core.matrix.AccessCol
    +
     
    +
    estimateNumNonZeroElements() - Method in class mklab.JGNN.core.matrix.AccessRow
    +
     
    +
    estimateNumNonZeroElements() - Method in class mklab.JGNN.core.matrix.Diagonal
    +
     
    +
    estimateNumNonZeroElements() - Method in class mklab.JGNN.core.matrix.SparseMatrix
    +
     
    +
    estimateNumNonZeroElements() - Method in class mklab.JGNN.core.matrix.TransposedMatrix
    +
     
    +
    estimateNumNonZeroElements() - Method in class mklab.JGNN.core.matrix.WrapCols
    +
     
    +
    estimateNumNonZeroElements() - Method in class mklab.JGNN.core.matrix.WrapRows
    +
     
    +
    estimateNumNonZeroElements() - Method in class mklab.JGNN.core.Tensor
    +
    +
Provides an estimate of the number of non-zero elements stored in the tensor, + where this number is equal to the size for dense tensors, but equal to the actual + number of non-zero elements for sparse tensors.
    +
    +
    estimateNumNonZeroElements() - Method in class mklab.JGNN.core.tensor.SparseTensor
    +
     
    +
    evaluate(Tensor, Tensor) - Method in class mklab.JGNN.nn.loss.Accuracy
    +
     
    +
    evaluate(Tensor, Tensor) - Method in class mklab.JGNN.nn.loss.BinaryCrossEntropy
    +
     
    +
    evaluate(Tensor, Tensor) - Method in class mklab.JGNN.nn.loss.CategoricalCrossEntropy
    +
     
    +
    evaluate(Tensor, Tensor) - Method in class mklab.JGNN.nn.Loss
    +
    +
    Provides a numerical evaluation of a loss function, so that + lower values correspond to better predictions.
    +
    +
    evaluate(Tensor, Tensor) - Method in class mklab.JGNN.nn.loss.report.VerboseLoss
    +
     
    +
    exit() - Method in class mklab.JGNN.core.Memory.Scope
    +
     
    +
    Exp - Class in mklab.JGNN.nn.activations
    +
    +
    Implements a NNOperation that performs an exponential transformation of its single input.
    +
    +
    Exp() - Constructor for class mklab.JGNN.nn.activations.Exp
    +
     
    +
    expMinusOne() - Method in class mklab.JGNN.core.tensor.DenseTensor
    +
     
    +
    expMinusOne() - Method in class mklab.JGNN.core.Tensor
    +
    +
    Computes the exponential minus 1 of tensor elements.
    +
    +
    external(Tensor, Tensor) - Static method in class mklab.JGNN.core.Matrix
    +
    +
    Produces the external product of two tensors.
    +
    +
    eye(long) - Static method in class mklab.JGNN.core.Matrix
    +
    +
    Creates a sparse unit matrix.
    +
    +
    +A B C D E F G H I K L M N O P R S T U V W X Z 
    All Classes and Interfaces|All Packages
    +
    +
    + + diff --git a/docs/javadoc/index-files/index-6.html b/docs/javadoc/index-files/index-6.html new file mode 100644 index 00000000..0c33255d --- /dev/null +++ b/docs/javadoc/index-files/index-6.html @@ -0,0 +1,143 @@ + + + + +F-Index + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Index

    +
    +A B C D E F G H I K L M N O P R S T U V W X Z 
    All Classes and Interfaces|All Packages +

    F

    +
    +
    FastBuilder - Class in mklab.JGNN.adhoc.parsers
    +
    +
    Extends the capabilities of LayeredBuilder to use + for node classification.
    +
    +
    FastBuilder() - Constructor for class mklab.JGNN.adhoc.parsers.FastBuilder
    +
    +
    Deprecated. +
    This constructor should only be used by loading.
    +
    +
    +
    FastBuilder(Matrix, Matrix) - Constructor for class mklab.JGNN.adhoc.parsers.FastBuilder
    +
    +
Creates a graph neural network builder from a + normalized adjacency matrix and a node feature matrix.
    +
    +
    FastEntry<K,V> - Class in mklab.JGNN.core.util
    +
     
    +
    FastEntry() - Constructor for class mklab.JGNN.core.util.FastEntry
    +
     
    +
    FastEntry(K, V) - Constructor for class mklab.JGNN.core.util.FastEntry
    +
     
    +
    features() - Method in class mklab.JGNN.adhoc.Dataset
    +
    +
    Retrieves the dataset's feature matrix.
    +
    +
    forward(List<Tensor>) - Method in class mklab.JGNN.nn.activations.L1
    +
     
    +
    forward(List<Tensor>) - Method in class mklab.JGNN.nn.operations.Attention
    +
     
    +
    forward(List<Tensor>) - Method in class mklab.JGNN.nn.pooling.Mean
    +
     
    +
    forward(List<Tensor>) - Method in class mklab.JGNN.nn.pooling.Sum
    +
     
    +
    From - Class in mklab.JGNN.nn.operations
    +
    +
    Implements a NNOperation that lists the first element of the 2D matrix element iterator.
    +
    +
    From() - Constructor for class mklab.JGNN.nn.operations.From
    +
     
    +
    fromDouble(double) - Static method in class mklab.JGNN.core.Matrix
    +
    +
    Converts a given value to a JGNN-compatible 1x1 matrix.
    +
    +
    fromDouble(double) - Static method in class mklab.JGNN.core.Tensor
    +
    +
    Converts a given value to a JGNN-compatible dense tensor.
    +
    +
    fromRange(long) - Static method in class mklab.JGNN.core.Tensor
    +
    +
    Creates a dense tensor holding the desired range [0, 1, ..., end-1].
    +
    +
    fromRange(long, long) - Static method in class mklab.JGNN.core.Tensor
    +
    +
    Creates a dense tensor holding the desired range [start, start+1, ..., end-1].
    +
    +
    function(String, String) - Method in class mklab.JGNN.adhoc.ModelBuilder
    +
     
    +
    function(String, String) - Method in class mklab.JGNN.adhoc.parsers.FastBuilder
    +
     
    +
    futureConfigs(String, Function<Integer, Double>, int) - Method in class mklab.JGNN.adhoc.parsers.FastBuilder
    +
    +
    Defines a number of FastBuilder.config(String, double) symbols involving a {l} + notation, for example so that they can be used during FastBuilder.layerRepeat(String, int).
    +
    +
    futureConfigs(String, Function<Integer, Double>, int) - Method in class mklab.JGNN.adhoc.parsers.LayeredBuilder
    +
    +
    Defines a number of LayeredBuilder.config(String, double) symbols involving a {l} + notation, for example so that they can be used during LayeredBuilder.layerRepeat(String, int).
    +
    +
    futureConstants(String, Function<Integer, Double>, int) - Method in class mklab.JGNN.adhoc.parsers.FastBuilder
    +
    +
    Defines a number of FastBuilder.constant(String, double) symbols involving a {l} + notation, for example so that they can be used during FastBuilder.layerRepeat(String, int).
    +
    +
    futureConstants(String, Function<Integer, Double>, int) - Method in class mklab.JGNN.adhoc.parsers.LayeredBuilder
    +
    +
    Defines a number of LayeredBuilder.constant(String, double) symbols involving a {l} + notation, for example so that they can be used during LayeredBuilder.layerRepeat(String, int).
    +
    +
    +A B C D E F G H I K L M N O P R S T U V W X Z 
    All Classes and Interfaces|All Packages
    +
    +
    + + diff --git a/docs/javadoc/index-files/index-7.html b/docs/javadoc/index-files/index-7.html new file mode 100644 index 00000000..777b96ac --- /dev/null +++ b/docs/javadoc/index-files/index-7.html @@ -0,0 +1,329 @@ + + + + +G-Index + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Index

    +
    +A B C D E F G H I K L M N O P R S T U V W X Z 
    All Classes and Interfaces|All Packages +

    G

    +
    +
    Gather - Class in mklab.JGNN.nn.operations
    +
    +
    Implements a NNOperation that performs the equivalent of TensorFlow's gather operation.
    +
    +
    Gather() - Constructor for class mklab.JGNN.nn.operations.Gather
    +
     
    +
    get() - Method in class mklab.JGNN.nn.inputs.Parameter
    +
    +
Gets the parameter's value tensor.
    +
    +
    get(int) - Method in class mklab.JGNN.core.tensor.DenseTensor
    +
     
    +
    get(int) - Method in class mklab.JGNN.core.tensor.VectorizedTensor
    +
     
    +
    get(long) - Method in class mklab.JGNN.adhoc.IdConverter
    +
    +
    Retrieves the object corresponding to a given identifier.
    +
    +
    get(long) - Method in class mklab.JGNN.core.empy.EmptyMatrix
    +
     
    +
    get(long) - Method in class mklab.JGNN.core.empy.EmptyTensor
    +
     
    +
    get(long) - Method in class mklab.JGNN.core.matrix.AccessCol
    +
     
    +
    get(long) - Method in class mklab.JGNN.core.matrix.AccessRow
    +
     
    +
    get(long) - Method in class mklab.JGNN.core.matrix.ColumnRepetition
    +
     
    +
    get(long) - Method in class mklab.JGNN.core.matrix.DenseMatrix
    +
     
    +
    get(long) - Method in class mklab.JGNN.core.matrix.Diagonal
    +
     
    +
    get(long) - Method in class mklab.JGNN.core.matrix.RepeatMatrix
    +
     
    +
    get(long) - Method in class mklab.JGNN.core.matrix.RowRepetition
    +
     
    +
    get(long) - Method in class mklab.JGNN.core.matrix.SparseMatrix
    +
     
    +
    get(long) - Method in class mklab.JGNN.core.matrix.SparseSymmetric
    +
    +
    Deprecated.
    +
    get(long) - Method in class mklab.JGNN.core.matrix.TransposedMatrix
    +
     
    +
    get(long) - Method in class mklab.JGNN.core.matrix.VectorizedMatrix
    +
     
    +
    get(long) - Method in class mklab.JGNN.core.matrix.WrapCols
    +
     
    +
    get(long) - Method in class mklab.JGNN.core.matrix.WrapRows
    +
     
    +
    get(long) - Method in class mklab.JGNN.core.tensor.AccessSubtensor
    +
     
    +
    get(long) - Method in class mklab.JGNN.core.tensor.DenseTensor
    +
     
    +
    get(long) - Method in class mklab.JGNN.core.Tensor
    +
    +
    Retrieves the value of a tensor element at a given position.
    +
    +
    get(long) - Method in class mklab.JGNN.core.tensor.RepeatTensor
    +
     
    +
    get(long) - Method in class mklab.JGNN.core.tensor.SparseTensor
    +
     
    +
    get(long) - Method in class mklab.JGNN.core.tensor.VectorizedTensor
    +
     
    +
    get(long, long) - Method in class mklab.JGNN.core.Matrix
    +
    +
    Retrieves the value stored at a matrix element.
    +
    +
    get(String) - Method in class mklab.JGNN.adhoc.ModelBuilder
    +
    +
Retrieves the NNOperation registered with the provided + name, for example to investigate its value.
    +
    +
    getColName() - Method in class mklab.JGNN.core.Matrix
    +
     
    +
    getCols() - Method in class mklab.JGNN.core.Matrix
    +
    +
    Retrieves the number of columns of a matrix.
    +
    +
    getColumn() - Method in class mklab.JGNN.core.matrix.ColumnRepetition
    +
    +
    Retrieves the wrapped column tensor.
    +
    +
    getConfigOrDefault(String, double) - Method in class mklab.JGNN.adhoc.ModelBuilder
    +
     
    +
    getConfigOrDefault(String, int) - Method in class mklab.JGNN.adhoc.ModelBuilder
    +
     
    +
    getCurrentThreadId() - Static method in class mklab.JGNN.core.ThreadPool
    +
    +
    Retrieves a unique integer indicating the currently running thread.
    +
    +
    getDepthLastOperations() - Method in class mklab.JGNN.nn.Model
    +
    +
    Retrieves a list of operations by traversing the model's execution + graph with the depth-first algorithm in the inverse edge + order (starting from the outputs).
    +
    +
    getDescription() - Method in class mklab.JGNN.nn.NNOperation
    +
     
    +
    getDeviation() - Method in interface mklab.JGNN.core.Distribution
    +
    +
    Retrieves the distribution's standard deviation.
    +
    +
    getDeviation() - Method in class mklab.JGNN.core.distribution.Normal
    +
     
    +
    getDeviation() - Method in class mklab.JGNN.core.distribution.Uniform
    +
     
    +
    getDimensionName() - Method in class mklab.JGNN.core.Tensor
    +
     
    +
    getDimensionSize(String) - Method in class mklab.JGNN.core.Matrix
    +
    +
    Retrieves the value of the dimension with the given name.
    +
    +
    getExecutionGraphDot() - Method in class mklab.JGNN.adhoc.ModelBuilder
    +
    +
Exports the built model's execution graph into a .dot format + representation.
    +
    +
    getId(Object) - Method in class mklab.JGNN.adhoc.IdConverter
    +
    +
    Retrieves an identifier.
    +
    +
    getInputs() - Method in class mklab.JGNN.nn.Model
    +
    +
    Retrieves a list of model inputs.
    +
    +
    getInputs() - Method in class mklab.JGNN.nn.NNOperation
    +
    +
    Retrieves a list of input operations within a model's execution graph.
    +
    +
    getInstance() - Static method in class mklab.JGNN.core.ThreadPool
    +
    +
    Retrieves the singleton ThreadPool instance used by JGNN.
    +
    +
    getKey() - Method in class mklab.JGNN.core.util.FastEntry
    +
     
    +
    getLastTapeError() - Method in class mklab.JGNN.nn.NNOperation
    +
     
    +
    getMean() - Method in interface mklab.JGNN.core.Distribution
    +
    +
    Retrieves the distribution's mean.
    +
    +
    getMean() - Method in class mklab.JGNN.core.distribution.Normal
    +
     
    +
    getMean() - Method in class mklab.JGNN.core.distribution.Uniform
    +
     
    +
    getMemory() - Method in class mklab.JGNN.nn.operations.LSTM.LSTMState
    +
     
    +
    getModel() - Method in class mklab.JGNN.adhoc.ModelBuilder
    +
    +
    Retrieves the model currently built by the builder.
    +
    +
    getNonLinearity(int, double, double) - Method in class mklab.JGNN.nn.activations.Exp
    +
     
    +
    getNonLinearity(int, double, double) - Method in class mklab.JGNN.nn.activations.LRelu
    +
     
    +
    getNonLinearity(int, double, double) - Method in class mklab.JGNN.nn.activations.NExp
    +
     
    +
    getNonLinearity(int, double, double) - Method in class mklab.JGNN.nn.activations.PRelu
    +
     
    +
    getNonLinearity(int, double, double) - Method in class mklab.JGNN.nn.activations.Relu
    +
     
    +
    getNonLinearity(int, double, double) - Method in class mklab.JGNN.nn.activations.Sigmoid
    +
     
    +
    getNonLinearity(int, double, double) - Method in class mklab.JGNN.nn.activations.Tanh
    +
     
    +
    getNonLinearity(int, double, double) - Method in class mklab.JGNN.nn.NNOperation
    +
    +
    Retrieves the degree of non-linearity of the operation to be used by + VariancePreservingInitializer.
    +
    +
    getNonLinearity(int, double, double) - Method in class mklab.JGNN.nn.operations.MatMul
    +
     
    +
    getNonLinearity(int, double, double) - Method in class mklab.JGNN.nn.operations.Multiply
    +
     
    +
    getNonZeroElements() - Method in class mklab.JGNN.core.Tensor
    +
    +
    Retrieves an iterable that wraps Tensor.traverseNonZeroElements().
    +
    +
    getNonZeroEntries() - Method in class mklab.JGNN.core.empy.EmptyMatrix
    +
     
    +
    getNonZeroEntries() - Method in class mklab.JGNN.core.matrix.ColumnRepetition
    +
     
    +
    getNonZeroEntries() - Method in class mklab.JGNN.core.matrix.DenseMatrix
    +
     
    +
    getNonZeroEntries() - Method in class mklab.JGNN.core.matrix.Diagonal
    +
     
    +
    getNonZeroEntries() - Method in class mklab.JGNN.core.Matrix
    +
    +
Retrieves an iterable that traverses (row, col) entry pairs + of non-zero entries.
    +
    +
    getNonZeroEntries() - Method in class mklab.JGNN.core.matrix.RepeatMatrix
    +
     
    +
    getNonZeroEntries() - Method in class mklab.JGNN.core.matrix.RowRepetition
    +
     
    +
    getNonZeroEntries() - Method in class mklab.JGNN.core.matrix.SparseMatrix
    +
     
    +
    getNonZeroEntries() - Method in class mklab.JGNN.core.matrix.SparseSymmetric
    +
    +
    Deprecated.
    +
    getNonZeroEntries() - Method in class mklab.JGNN.core.matrix.TransposedMatrix
    +
     
    +
    getNonZeroEntries() - Method in class mklab.JGNN.core.matrix.VectorizedMatrix
    +
     
    +
    getNonZeroEntries() - Method in class mklab.JGNN.core.matrix.WrapCols
    +
     
    +
    getNonZeroEntries() - Method in class mklab.JGNN.core.matrix.WrapRows
    +
     
    +
    getOptimizer() - Method in class mklab.JGNN.nn.operations.LSTM
    +
     
    +
    getOrCreateId(Object) - Method in class mklab.JGNN.adhoc.IdConverter
    +
    +
    Retrieves an identifier for a given object, creating one if none exists.
    +
    +
    getOutput() - Method in class mklab.JGNN.nn.operations.LSTM.LSTMState
    +
     
    +
    getOutputs() - Method in class mklab.JGNN.nn.Model
    +
    +
    Retrieves a list of model outputs.
    +
    +
    getOutputs() - Method in class mklab.JGNN.nn.NNOperation
    +
    +
    Retrieves a list of output operations within a model's execution graph.
    +
    +
    getParameters() - Method in class mklab.JGNN.nn.Model
    +
    +
    Retrieves a list of all parameters eventually leading to the model's outputs.
    +
    +
    getPrediction() - Method in class mklab.JGNN.nn.NNOperation
    +
     
    +
    getRowName() - Method in class mklab.JGNN.core.Matrix
    +
     
    +
    getRows() - Method in class mklab.JGNN.core.Matrix
    +
    +
    Retrieves the number of rows of a matrix.
    +
    +
    getSimpleDescription() - Method in class mklab.JGNN.nn.NNOperation
    +
    +
    Provides a simple description to show when drawing .dot format diagrams.
    +
    +
    getSimpleDescription() - Method in class mklab.JGNN.nn.operations.Reshape
    +
     
    +
    getSlice() - Method in class mklab.JGNN.adhoc.IdConverter
    +
    +
    Returns a slice of all registered identifiers.
    +
    +
    getValue() - Method in class mklab.JGNN.core.util.FastEntry
    +
     
    +
    GradientDescent - Class in mklab.JGNN.nn.optimizers
    +
    +
    Implements a gradient descent Optimizer.
    +
    +
    GradientDescent(double) - Constructor for class mklab.JGNN.nn.optimizers.GradientDescent
    +
    +
    Initializes a GradientDescent optimizer with fixed learning rate.
    +
    +
    GradientDescent(double, double) - Constructor for class mklab.JGNN.nn.optimizers.GradientDescent
    +
    +
    Initializes a GradientDescent optimizer with degrading learning rate.
    +
    +
    graph() - Method in class mklab.JGNN.adhoc.Dataset
    +
    +
    Retrieves the dataset's graph.
    +
    +
    +A B C D E F G H I K L M N O P R S T U V W X Z 
    All Classes and Interfaces|All Packages
    +
    +
    + + diff --git a/docs/javadoc/index-files/index-8.html b/docs/javadoc/index-files/index-8.html new file mode 100644 index 00000000..632d7df1 --- /dev/null +++ b/docs/javadoc/index-files/index-8.html @@ -0,0 +1,72 @@ + + + + +H-Index + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Index

    +
    +A B C D E F G H I K L M N O P R S T U V W X Z 
    All Classes and Interfaces|All Packages +

    H

    +
    +
    hasComponent(String) - Method in class mklab.JGNN.adhoc.ModelBuilder
    +
    +
Checks whether the builder has added a component of + the given name to its managed model.
    +
    +
    hasNext() - Method in class mklab.JGNN.core.util.Range
    +
     
    +
    hasNext() - Method in class mklab.JGNN.core.util.Range2D
    +
     
    +
    +A B C D E F G H I K L M N O P R S T U V W X Z 
    All Classes and Interfaces|All Packages
    +
    +
    + + diff --git a/docs/javadoc/index-files/index-9.html b/docs/javadoc/index-files/index-9.html new file mode 100644 index 00000000..97309541 --- /dev/null +++ b/docs/javadoc/index-files/index-9.html @@ -0,0 +1,137 @@ + + + + +I-Index + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Index

    +
    +A B C D E F G H I K L M N O P R S T U V W X Z 
    All Classes and Interfaces|All Packages +

    I

    +
    +
    IdConverter - Class in mklab.JGNN.adhoc
    +
    +
    Converts back-and-forth between objects and unique ids.
    +
    +
    IdConverter() - Constructor for class mklab.JGNN.adhoc.IdConverter
    +
    +
    Instantiates an empty converter to be filled with IdConverter.getOrCreateId(Object).
    +
    +
    IdConverter(List<?>) - Constructor for class mklab.JGNN.adhoc.IdConverter
    +
    +
Instantiates the converter on a list of objects to register + with IdConverter.getOrCreateId(Object).
    +
    +
    Identity - Class in mklab.JGNN.nn.operations
    +
    +
    Implements a NNOperation that just transfers its single input.
    +
    +
    Identity() - Constructor for class mklab.JGNN.nn.operations.Identity
    +
     
    +
    init(Initializer) - Method in class mklab.JGNN.nn.Model
    +
    +
Applies the provided initializer to the model to set initial values for its + parameters.
    +
    +
    Initializer - Class in mklab.JGNN.nn
    +
    +
    This class defines an abstract interface for applying initializers to models.
    +
    +
    Initializer() - Constructor for class mklab.JGNN.nn.Initializer
    +
     
    +
    inverse() - Method in class mklab.JGNN.core.tensor.DenseTensor
    +
     
    +
    inverse() - Method in class mklab.JGNN.core.Tensor
    +
     
    +
    isCachable() - Method in class mklab.JGNN.nn.inputs.Constant
    +
     
    +
    isCachable() - Method in class mklab.JGNN.nn.inputs.Parameter
    +
     
    +
    isCachable() - Method in class mklab.JGNN.nn.inputs.Variable
    +
     
    +
    isCachable() - Method in class mklab.JGNN.nn.NNOperation
    +
    +
    Checks whether the operation's output should be cached given that it is a + constant.
    +
    +
    isCachable() - Method in class mklab.JGNN.nn.operations.Dropout
    +
     
    +
    isCachable() - Method in class mklab.JGNN.nn.operations.From
    +
     
    +
    isCachable() - Method in class mklab.JGNN.nn.operations.Reshape
    +
     
    +
    isCachable() - Method in class mklab.JGNN.nn.operations.To
    +
     
    +
    isConstant() - Method in class mklab.JGNN.nn.inputs.Constant
    +
     
    +
    isConstant() - Method in class mklab.JGNN.nn.inputs.Parameter
    +
     
    +
    isConstant() - Method in class mklab.JGNN.nn.inputs.Variable
    +
     
    +
    isConstant() - Method in class mklab.JGNN.nn.NNOperation
    +
    +
    Checks whether the operation yields a constant output, so that propagation + does not try to compute partial derivatives for it.
    +
    +
    isEnabled() - Method in class mklab.JGNN.nn.operations.Dropout
    +
     
    +
    iterator() - Method in class mklab.JGNN.core.Slice
    +
     
    +
    iterator() - Method in class mklab.JGNN.core.Tensor
    +
     
    +
    iterator() - Method in class mklab.JGNN.core.util.Range
    +
     
    +
    iterator() - Method in class mklab.JGNN.core.util.Range2D
    +
     
    +
    +A B C D E F G H I K L M N O P R S T U V W X Z 
    All Classes and Interfaces|All Packages
    +
    +
    + + diff --git a/docs/javadoc/index.html b/docs/javadoc/index.html new file mode 100644 index 00000000..8183ef86 --- /dev/null +++ b/docs/javadoc/index.html @@ -0,0 +1,99 @@ + + + + +Overview + + + + + + + + + + + + + + + + + + diff --git a/docs/javadoc/jquery-ui.overrides.css b/docs/javadoc/jquery-ui.overrides.css new file mode 100644 index 00000000..1abff952 --- /dev/null +++ b/docs/javadoc/jquery-ui.overrides.css @@ -0,0 +1,34 @@ +/* + * Copyright (c) 2020, Oracle and/or its affiliates. All rights reserved. + * ORACLE PROPRIETARY/CONFIDENTIAL. Use is subject to license terms. + * + * + * + * + * + * + * + * + * + * + * + * + * + * + * + * + * + * + * + * + */ + +.ui-state-active, +.ui-widget-content .ui-state-active, +.ui-widget-header .ui-state-active, +a.ui-button:active, +.ui-button:active, +.ui-button.ui-state-active:hover { + /* Overrides the color of selection used in jQuery UI */ + background: #F8981D; +} diff --git a/docs/javadoc/legal/COPYRIGHT b/docs/javadoc/legal/COPYRIGHT new file mode 100644 index 00000000..ca74fffd --- /dev/null +++ b/docs/javadoc/legal/COPYRIGHT @@ -0,0 +1 @@ +Please see ..\java.base\COPYRIGHT diff --git a/docs/javadoc/legal/LICENSE b/docs/javadoc/legal/LICENSE new file mode 100644 index 00000000..4ad9fe40 --- /dev/null +++ b/docs/javadoc/legal/LICENSE @@ -0,0 +1 @@ +Please see ..\java.base\LICENSE diff --git a/docs/javadoc/legal/jquery.md b/docs/javadoc/legal/jquery.md new file mode 100644 index 00000000..8054a34c --- /dev/null +++ b/docs/javadoc/legal/jquery.md @@ -0,0 +1,72 @@ +## jQuery v3.5.1 + +### jQuery License +``` +jQuery v 3.5.1 +Copyright JS Foundation and other contributors, https://js.foundation/ + +Permission is hereby granted, free of charge, to any person obtaining +a copy of this software and associated documentation files (the +"Software"), to deal in the Software without restriction, including +without limitation the rights to use, copy, modify, merge, publish, +distribute, sublicense, and/or sell copies of 
the Software, and to +permit persons to whom the Software is furnished to do so, subject to +the following conditions: + +The above copyright notice and this permission notice shall be +included in all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE +LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION +OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION +WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. + +****************************************** + +The jQuery JavaScript Library v3.5.1 also includes Sizzle.js + +Sizzle.js includes the following license: + +Copyright JS Foundation and other contributors, https://js.foundation/ + +This software consists of voluntary contributions made by many +individuals. For exact contribution history, see the revision history +available at https://github.com/jquery/sizzle + +The following license applies to all parts of this software except as +documented below: + +==== + +Permission is hereby granted, free of charge, to any person obtaining +a copy of this software and associated documentation files (the +"Software"), to deal in the Software without restriction, including +without limitation the rights to use, copy, modify, merge, publish, +distribute, sublicense, and/or sell copies of the Software, and to +permit persons to whom the Software is furnished to do so, subject to +the following conditions: + +The above copyright notice and this permission notice shall be +included in all copies or substantial portions of the Software. 
+ +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE +LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION +OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION +WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. + +==== + +All files located in the node_modules and external directories are +externally maintained libraries used by this software which have their +own licenses; we recommend you read them, as their terms may differ from +the terms above. + +********************* + +``` diff --git a/docs/javadoc/legal/jqueryUI.md b/docs/javadoc/legal/jqueryUI.md new file mode 100644 index 00000000..8031bdb5 --- /dev/null +++ b/docs/javadoc/legal/jqueryUI.md @@ -0,0 +1,49 @@ +## jQuery UI v1.12.1 + +### jQuery UI License +``` +Copyright jQuery Foundation and other contributors, https://jquery.org/ + +This software consists of voluntary contributions made by many +individuals. For exact contribution history, see the revision history +available at https://github.com/jquery/jquery-ui + +The following license applies to all parts of this software except as +documented below: + +==== + +Permission is hereby granted, free of charge, to any person obtaining +a copy of this software and associated documentation files (the +"Software"), to deal in the Software without restriction, including +without limitation the rights to use, copy, modify, merge, publish, +distribute, sublicense, and/or sell copies of the Software, and to +permit persons to whom the Software is furnished to do so, subject to +the following conditions: + +The above copyright notice and this permission notice shall be +included in all copies or substantial portions of the Software. 
+ +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE +LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION +OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION +WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. + +==== + +Copyright and related rights for sample code are waived via CC0. Sample +code is defined as all source code contained within the demos directory. + +CC0: http://creativecommons.org/publicdomain/zero/1.0/ + +==== + +All files located in the node_modules and external directories are +externally maintained libraries used by this software which have their +own licenses; we recommend you read them, as their terms may differ from +the terms above. + +``` diff --git a/docs/javadoc/member-search-index.js b/docs/javadoc/member-search-index.js new file mode 100644 index 00000000..942230ab --- /dev/null +++ b/docs/javadoc/member-search-index.js @@ -0,0 +1 @@ +memberSearchIndex = [{"p":"mklab.JGNN.core","c":"Tensor","l":"abs()"},{"p":"mklab.JGNN.core.tensor","c":"DenseTensor","l":"abs()"},{"p":"mklab.JGNN.core","c":"Matrix","l":"accessCol(long)"},{"p":"mklab.JGNN.core.matrix","c":"WrapCols","l":"accessCol(long)"},{"p":"mklab.JGNN.core.matrix","c":"AccessCol","l":"AccessCol(Matrix, long)","u":"%3Cinit%3E(mklab.JGNN.core.Matrix,long)"},{"p":"mklab.JGNN.core","c":"Matrix","l":"accessColumns()"},{"p":"mklab.JGNN.core","c":"Matrix","l":"accessColumns(Iterable)","u":"accessColumns(java.lang.Iterable)"},{"p":"mklab.JGNN.core","c":"Matrix","l":"accessColumns(long...)"},{"p":"mklab.JGNN.core","c":"Matrix","l":"accessColumns(Tensor)","u":"accessColumns(mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.core","c":"Matrix","l":"accessDim(long, 
String)","u":"accessDim(long,java.lang.String)"},{"p":"mklab.JGNN.core","c":"Matrix","l":"accessRow(long)"},{"p":"mklab.JGNN.core.matrix","c":"WrapRows","l":"accessRow(long)"},{"p":"mklab.JGNN.core.matrix","c":"AccessRow","l":"AccessRow(Matrix, long)","u":"%3Cinit%3E(mklab.JGNN.core.Matrix,long)"},{"p":"mklab.JGNN.core","c":"Matrix","l":"accessRows()"},{"p":"mklab.JGNN.core","c":"Matrix","l":"accessRows(Iterable)","u":"accessRows(java.lang.Iterable)"},{"p":"mklab.JGNN.core","c":"Matrix","l":"accessRows(long...)"},{"p":"mklab.JGNN.core","c":"Matrix","l":"accessRows(Tensor)","u":"accessRows(mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.core","c":"Tensor","l":"accessSubtensor(long)"},{"p":"mklab.JGNN.core","c":"Tensor","l":"accessSubtensor(long, long)","u":"accessSubtensor(long,long)"},{"p":"mklab.JGNN.core.tensor","c":"AccessSubtensor","l":"AccessSubtensor(Tensor, long)","u":"%3Cinit%3E(mklab.JGNN.core.Tensor,long)"},{"p":"mklab.JGNN.core.tensor","c":"AccessSubtensor","l":"AccessSubtensor(Tensor, long, long)","u":"%3Cinit%3E(mklab.JGNN.core.Tensor,long,long)"},{"p":"mklab.JGNN.nn.loss","c":"Accuracy","l":"Accuracy()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.nn.optimizers","c":"Adam","l":"Adam()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.nn.optimizers","c":"Adam","l":"Adam(boolean, double)","u":"%3Cinit%3E(boolean,double)"},{"p":"mklab.JGNN.nn.optimizers","c":"Adam","l":"Adam(boolean, double, double, double)","u":"%3Cinit%3E(boolean,double,double,double)"},{"p":"mklab.JGNN.nn.optimizers","c":"Adam","l":"Adam(boolean, double, double, double, 
double)","u":"%3Cinit%3E(boolean,double,double,double,double)"},{"p":"mklab.JGNN.nn.optimizers","c":"Adam","l":"Adam(double)","u":"%3Cinit%3E(double)"},{"p":"mklab.JGNN.nn.operations","c":"Add","l":"Add()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.core","c":"Tensor","l":"add(double)"},{"p":"mklab.JGNN.core.tensor","c":"DenseTensor","l":"add(double)"},{"p":"mklab.JGNN.core","c":"Tensor","l":"add(Tensor)","u":"add(mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.core.tensor","c":"DenseTensor","l":"add(Tensor)","u":"add(mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.core.tensor","c":"VectorizedTensor","l":"add(Tensor)","u":"add(mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.nn.inputs","c":"Parameter","l":"addInput(NNOperation)","u":"addInput(mklab.JGNN.nn.NNOperation)"},{"p":"mklab.JGNN.nn","c":"NNOperation","l":"addInput(NNOperation)","u":"addInput(mklab.JGNN.nn.NNOperation)"},{"p":"mklab.JGNN.nn","c":"Model","l":"addInput(Variable)","u":"addInput(mklab.JGNN.nn.inputs.Variable)"},{"p":"mklab.JGNN.nn","c":"Model","l":"addOutput(NNOperation)","u":"addOutput(mklab.JGNN.nn.NNOperation)"},{"p":"mklab.JGNN.nn.operations","c":"LSTM","l":"aggregate(LSTM)","u":"aggregate(mklab.JGNN.nn.operations.LSTM)"},{"p":"mklab.JGNN.core","c":"Memory","l":"allocate(int, 
Object)","u":"allocate(int,java.lang.Object)"},{"p":"mklab.JGNN.nn","c":"Initializer","l":"apply(Model)","u":"apply(mklab.JGNN.nn.Model)"},{"p":"mklab.JGNN.nn.initializers","c":"VariancePreservingInitializer","l":"apply(Model)","u":"apply(mklab.JGNN.nn.Model)"},{"p":"mklab.JGNN.core","c":"Tensor","l":"argmax()"},{"p":"mklab.JGNN.core","c":"Tensor","l":"argmin()"},{"p":"mklab.JGNN.core","c":"Tensor","l":"asColumn()"},{"p":"mklab.JGNN.core","c":"Tensor","l":"asRow()"},{"p":"mklab.JGNN.adhoc","c":"ModelBuilder","l":"assertBackwardValidity()"},{"p":"mklab.JGNN.core","c":"Tensor","l":"assertFinite()"},{"p":"mklab.JGNN.core","c":"Tensor","l":"assertMatching(Tensor)","u":"assertMatching(mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.core","c":"Tensor","l":"assertSize(long)"},{"p":"mklab.JGNN.core","c":"Tensor","l":"assign(Tensor)","u":"assign(mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.core","c":"Slice","l":"asTensor()"},{"p":"mklab.JGNN.core","c":"Matrix","l":"asTransposed()"},{"p":"mklab.JGNN.core.matrix","c":"TransposedMatrix","l":"asTransposed()"},{"p":"mklab.JGNN.nn.operations","c":"Attention","l":"Attention()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.adhoc","c":"ModelBuilder","l":"autosize(List)","u":"autosize(java.util.List)"},{"p":"mklab.JGNN.adhoc","c":"ModelBuilder","l":"autosize(Tensor...)","u":"autosize(mklab.JGNN.core.Tensor...)"},{"p":"mklab.JGNN.nn.optimizers","c":"BatchOptimizer","l":"BatchOptimizer(Optimizer)","u":"%3Cinit%3E(mklab.JGNN.nn.Optimizer)"},{"p":"mklab.JGNN.nn.optimizers","c":"BatchOptimizer","l":"BatchOptimizer(Optimizer, 
long)","u":"%3Cinit%3E(mklab.JGNN.nn.Optimizer,long)"},{"p":"mklab.JGNN.nn.loss","c":"BinaryCrossEntropy","l":"BinaryCrossEntropy()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.nn.loss","c":"BinaryCrossEntropy","l":"BinaryCrossEntropy(double)","u":"%3Cinit%3E(double)"},{"p":"mklab.JGNN.core","c":"Tensor","l":"cast(Class)","u":"cast(java.lang.Class)"},{"p":"mklab.JGNN.nn.loss","c":"CategoricalCrossEntropy","l":"CategoricalCrossEntropy()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.nn.loss","c":"CategoricalCrossEntropy","l":"CategoricalCrossEntropy(double)","u":"%3Cinit%3E(double)"},{"p":"mklab.JGNN.adhoc.datasets","c":"Citeseer","l":"Citeseer()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.adhoc","c":"Dataset","l":"classes()"},{"p":"mklab.JGNN.adhoc.parsers","c":"FastBuilder","l":"classify()"},{"p":"mklab.JGNN.nn","c":"NNOperation","l":"clearPrediction()"},{"p":"mklab.JGNN.core.matrix","c":"ColumnRepetition","l":"ColumnRepetition(long, Tensor)","u":"%3Cinit%3E(long,mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.nn.operations","c":"Complement","l":"Complement()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.nn.operations","c":"Concat","l":"Concat()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.adhoc.parsers","c":"FastBuilder","l":"concat(int)"},{"p":"mklab.JGNN.adhoc.parsers","c":"LayeredBuilder","l":"concat(int)"},{"p":"mklab.JGNN.adhoc","c":"ModelBuilder","l":"config(String, double)","u":"config(java.lang.String,double)"},{"p":"mklab.JGNN.adhoc.parsers","c":"FastBuilder","l":"config(String, double)","u":"config(java.lang.String,double)"},{"p":"mklab.JGNN.adhoc.parsers","c":"LayeredBuilder","l":"config(String, double)","u":"config(java.lang.String,double)"},{"p":"mklab.JGNN.adhoc.parsers","c":"Neuralang","l":"config(String, double)","u":"config(java.lang.String,double)"},{"p":"mklab.JGNN.adhoc","c":"ModelBuilder","l":"config(String, String)","u":"config(java.lang.String,java.lang.String)"},{"p":"mklab.JGNN.adhoc.parsers","c":"FastBuilder","l":"config(String, 
String)","u":"config(java.lang.String,java.lang.String)"},{"p":"mklab.JGNN.nn","c":"ModelTraining","l":"configFrom(ModelBuilder)","u":"configFrom(mklab.JGNN.adhoc.ModelBuilder)"},{"p":"mklab.JGNN.adhoc","c":"ModelBuilder","l":"constant(String, double)","u":"constant(java.lang.String,double)"},{"p":"mklab.JGNN.adhoc.parsers","c":"FastBuilder","l":"constant(String, double)","u":"constant(java.lang.String,double)"},{"p":"mklab.JGNN.adhoc.parsers","c":"LayeredBuilder","l":"constant(String, double)","u":"constant(java.lang.String,double)"},{"p":"mklab.JGNN.adhoc.parsers","c":"Neuralang","l":"constant(String, double)","u":"constant(java.lang.String,double)"},{"p":"mklab.JGNN.adhoc","c":"ModelBuilder","l":"constant(String, Tensor)","u":"constant(java.lang.String,mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.adhoc.parsers","c":"FastBuilder","l":"constant(String, Tensor)","u":"constant(java.lang.String,mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.adhoc.parsers","c":"LayeredBuilder","l":"constant(String, Tensor)","u":"constant(java.lang.String,mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.adhoc.parsers","c":"Neuralang","l":"constant(String, Tensor)","u":"constant(java.lang.String,mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.nn.inputs","c":"Constant","l":"Constant(Tensor)","u":"%3Cinit%3E(mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.adhoc","c":"IdConverter","l":"contains(Object)","u":"contains(java.lang.Object)"},{"p":"mklab.JGNN.core","c":"Tensor","l":"copy()"},{"p":"mklab.JGNN.adhoc.datasets","c":"Cora","l":"Cora()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.nn.operations","c":"LSTM","l":"createFirstState()"},{"p":"mklab.JGNN.adhoc","c":"ModelBuilder","l":"createForwardValidity(List)","u":"createForwardValidity(java.util.List)"},{"p":"mklab.JGNN.core.util","c":"Loss","l":"crossEntropy(double, double)","u":"crossEntropy(double,double)"},{"p":"mklab.JGNN.core.util","c":"Loss","l":"crossEntropyDerivative(double, 
double)","u":"crossEntropyDerivative(double,double)"},{"p":"mklab.JGNN.core.util","c":"Loss","l":"crossEntropyDerivativeCategorical(double, double)","u":"crossEntropyDerivativeCategorical(double,double)"},{"p":"mklab.JGNN.core.util","c":"Loss","l":"crossEntropySigmoidDerivative(double, double)","u":"crossEntropySigmoidDerivative(double,double)"},{"p":"mklab.JGNN.core.util","c":"Loss","l":"crossEntropyTanhDerivative(double, double)","u":"crossEntropyTanhDerivative(double,double)"},{"p":"mklab.JGNN.adhoc","c":"Dataset","l":"Dataset()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.nn","c":"NNOperation","l":"debugging"},{"p":"mklab.JGNN.core.matrix","c":"DenseMatrix","l":"DenseMatrix(long, long)","u":"%3Cinit%3E(long,long)"},{"p":"mklab.JGNN.core.tensor","c":"DenseTensor","l":"DenseTensor()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.core.tensor","c":"DenseTensor","l":"DenseTensor(double...)","u":"%3Cinit%3E(double...)"},{"p":"mklab.JGNN.core.tensor","c":"DenseTensor","l":"DenseTensor(Iterator)","u":"%3Cinit%3E(java.util.Iterator)"},{"p":"mklab.JGNN.core.tensor","c":"DenseTensor","l":"DenseTensor(long)","u":"%3Cinit%3E(long)"},{"p":"mklab.JGNN.core.tensor","c":"DenseTensor","l":"DenseTensor(String)","u":"%3Cinit%3E(java.lang.String)"},{"p":"mklab.JGNN.core","c":"Tensor","l":"density()"},{"p":"mklab.JGNN.nn.loss","c":"Accuracy","l":"derivative(Tensor, Tensor)","u":"derivative(mklab.JGNN.core.Tensor,mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.nn.loss","c":"BinaryCrossEntropy","l":"derivative(Tensor, Tensor)","u":"derivative(mklab.JGNN.core.Tensor,mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.nn.loss","c":"CategoricalCrossEntropy","l":"derivative(Tensor, Tensor)","u":"derivative(mklab.JGNN.core.Tensor,mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.nn","c":"Loss","l":"derivative(Tensor, Tensor)","u":"derivative(mklab.JGNN.core.Tensor,mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.nn.loss.report","c":"VerboseLoss","l":"derivative(Tensor, 
Tensor)","u":"derivative(mklab.JGNN.core.Tensor,mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.adhoc","c":"ModelBuilder","l":"describe()"},{"p":"mklab.JGNN.core","c":"Matrix","l":"describe()"},{"p":"mklab.JGNN.core.matrix","c":"SparseMatrix","l":"describe()"},{"p":"mklab.JGNN.core.matrix","c":"SparseSymmetric","l":"describe()"},{"p":"mklab.JGNN.core.matrix","c":"TransposedMatrix","l":"describe()"},{"p":"mklab.JGNN.core","c":"Tensor","l":"describe()"},{"p":"mklab.JGNN.nn","c":"NNOperation","l":"describe()"},{"p":"mklab.JGNN.core","c":"Tensor","l":"dot(Tensor)","u":"dot(mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.core","c":"Tensor","l":"dot(Tensor, Tensor)","u":"dot(mklab.JGNN.core.Tensor,mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.nn.operations","c":"Dropout","l":"Dropout()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.core.empy","c":"EmptyMatrix","l":"EmptyMatrix(long, long)","u":"%3Cinit%3E(long,long)"},{"p":"mklab.JGNN.core.empy","c":"EmptyTensor","l":"EmptyTensor()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.core.empy","c":"EmptyTensor","l":"EmptyTensor(long)","u":"%3Cinit%3E(long)"},{"p":"mklab.JGNN.nn.operations","c":"LSTM","l":"endTape()"},{"p":"mklab.JGNN.core","c":"Memory.Scope","l":"enter()"},{"p":"mklab.JGNN.core.matrix","c":"AccessCol","l":"estimateNumNonZeroElements()"},{"p":"mklab.JGNN.core.matrix","c":"AccessRow","l":"estimateNumNonZeroElements()"},{"p":"mklab.JGNN.core.matrix","c":"Diagonal","l":"estimateNumNonZeroElements()"},{"p":"mklab.JGNN.core.matrix","c":"SparseMatrix","l":"estimateNumNonZeroElements()"},{"p":"mklab.JGNN.core.matrix","c":"TransposedMatrix","l":"estimateNumNonZeroElements()"},{"p":"mklab.JGNN.core.matrix","c":"WrapCols","l":"estimateNumNonZeroElements()"},{"p":"mklab.JGNN.core.matrix","c":"WrapRows","l":"estimateNumNonZeroElements()"},{"p":"mklab.JGNN.core","c":"Tensor","l":"estimateNumNonZeroElements()"},{"p":"mklab.JGNN.core.tensor","c":"SparseTensor","l":"estimateNumNonZeroElements()"},{"p":"mklab.JGNN.nn.loss","c":"Accuracy","l":"evaluate(Tensor
, Tensor)","u":"evaluate(mklab.JGNN.core.Tensor,mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.nn.loss","c":"BinaryCrossEntropy","l":"evaluate(Tensor, Tensor)","u":"evaluate(mklab.JGNN.core.Tensor,mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.nn.loss","c":"CategoricalCrossEntropy","l":"evaluate(Tensor, Tensor)","u":"evaluate(mklab.JGNN.core.Tensor,mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.nn","c":"Loss","l":"evaluate(Tensor, Tensor)","u":"evaluate(mklab.JGNN.core.Tensor,mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.nn.loss.report","c":"VerboseLoss","l":"evaluate(Tensor, Tensor)","u":"evaluate(mklab.JGNN.core.Tensor,mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.core","c":"Memory.Scope","l":"exit()"},{"p":"mklab.JGNN.nn.activations","c":"Exp","l":"Exp()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.core.tensor","c":"DenseTensor","l":"expMinusOne()"},{"p":"mklab.JGNN.core","c":"Tensor","l":"expMinusOne()"},{"p":"mklab.JGNN.core","c":"Matrix","l":"external(Tensor, Tensor)","u":"external(mklab.JGNN.core.Tensor,mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.core","c":"Matrix","l":"eye(long)"},{"p":"mklab.JGNN.adhoc.parsers","c":"FastBuilder","l":"FastBuilder()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.adhoc.parsers","c":"FastBuilder","l":"FastBuilder(Matrix, Matrix)","u":"%3Cinit%3E(mklab.JGNN.core.Matrix,mklab.JGNN.core.Matrix)"},{"p":"mklab.JGNN.core.util","c":"FastEntry","l":"FastEntry()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.core.util","c":"FastEntry","l":"FastEntry(K, 
V)","u":"%3Cinit%3E(K,V)"},{"p":"mklab.JGNN.adhoc","c":"Dataset","l":"features()"},{"p":"mklab.JGNN.nn.activations","c":"L1","l":"forward(List)","u":"forward(java.util.List)"},{"p":"mklab.JGNN.nn.operations","c":"Attention","l":"forward(List)","u":"forward(java.util.List)"},{"p":"mklab.JGNN.nn.pooling","c":"Mean","l":"forward(List)","u":"forward(java.util.List)"},{"p":"mklab.JGNN.nn.pooling","c":"Sum","l":"forward(List)","u":"forward(java.util.List)"},{"p":"mklab.JGNN.nn.operations","c":"From","l":"From()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.core","c":"Matrix","l":"fromDouble(double)"},{"p":"mklab.JGNN.core","c":"Tensor","l":"fromDouble(double)"},{"p":"mklab.JGNN.core","c":"Tensor","l":"fromRange(long)"},{"p":"mklab.JGNN.core","c":"Tensor","l":"fromRange(long, long)","u":"fromRange(long,long)"},{"p":"mklab.JGNN.adhoc","c":"ModelBuilder","l":"function(String, String)","u":"function(java.lang.String,java.lang.String)"},{"p":"mklab.JGNN.adhoc.parsers","c":"FastBuilder","l":"function(String, String)","u":"function(java.lang.String,java.lang.String)"},{"p":"mklab.JGNN.adhoc.parsers","c":"FastBuilder","l":"futureConfigs(String, Function, int)","u":"futureConfigs(java.lang.String,java.util.function.Function,int)"},{"p":"mklab.JGNN.adhoc.parsers","c":"LayeredBuilder","l":"futureConfigs(String, Function, int)","u":"futureConfigs(java.lang.String,java.util.function.Function,int)"},{"p":"mklab.JGNN.adhoc.parsers","c":"FastBuilder","l":"futureConstants(String, Function, int)","u":"futureConstants(java.lang.String,java.util.function.Function,int)"},{"p":"mklab.JGNN.adhoc.parsers","c":"LayeredBuilder","l":"futureConstants(String, Function, 
int)","u":"futureConstants(java.lang.String,java.util.function.Function,int)"},{"p":"mklab.JGNN.nn.operations","c":"Gather","l":"Gather()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.nn.inputs","c":"Parameter","l":"get()"},{"p":"mklab.JGNN.core.tensor","c":"DenseTensor","l":"get(int)"},{"p":"mklab.JGNN.core.tensor","c":"VectorizedTensor","l":"get(int)"},{"p":"mklab.JGNN.adhoc","c":"IdConverter","l":"get(long)"},{"p":"mklab.JGNN.core.empy","c":"EmptyMatrix","l":"get(long)"},{"p":"mklab.JGNN.core.empy","c":"EmptyTensor","l":"get(long)"},{"p":"mklab.JGNN.core.matrix","c":"AccessCol","l":"get(long)"},{"p":"mklab.JGNN.core.matrix","c":"AccessRow","l":"get(long)"},{"p":"mklab.JGNN.core.matrix","c":"ColumnRepetition","l":"get(long)"},{"p":"mklab.JGNN.core.matrix","c":"DenseMatrix","l":"get(long)"},{"p":"mklab.JGNN.core.matrix","c":"Diagonal","l":"get(long)"},{"p":"mklab.JGNN.core.matrix","c":"RepeatMatrix","l":"get(long)"},{"p":"mklab.JGNN.core.matrix","c":"RowRepetition","l":"get(long)"},{"p":"mklab.JGNN.core.matrix","c":"SparseMatrix","l":"get(long)"},{"p":"mklab.JGNN.core.matrix","c":"SparseSymmetric","l":"get(long)"},{"p":"mklab.JGNN.core.matrix","c":"TransposedMatrix","l":"get(long)"},{"p":"mklab.JGNN.core.matrix","c":"VectorizedMatrix","l":"get(long)"},{"p":"mklab.JGNN.core.matrix","c":"WrapCols","l":"get(long)"},{"p":"mklab.JGNN.core.matrix","c":"WrapRows","l":"get(long)"},{"p":"mklab.JGNN.core.tensor","c":"AccessSubtensor","l":"get(long)"},{"p":"mklab.JGNN.core.tensor","c":"DenseTensor","l":"get(long)"},{"p":"mklab.JGNN.core","c":"Tensor","l":"get(long)"},{"p":"mklab.JGNN.core.tensor","c":"RepeatTensor","l":"get(long)"},{"p":"mklab.JGNN.core.tensor","c":"SparseTensor","l":"get(long)"},{"p":"mklab.JGNN.core.tensor","c":"VectorizedTensor","l":"get(long)"},{"p":"mklab.JGNN.core","c":"Matrix","l":"get(long, 
long)","u":"get(long,long)"},{"p":"mklab.JGNN.adhoc","c":"ModelBuilder","l":"get(String)","u":"get(java.lang.String)"},{"p":"mklab.JGNN.core","c":"Matrix","l":"getColName()"},{"p":"mklab.JGNN.core","c":"Matrix","l":"getCols()"},{"p":"mklab.JGNN.core.matrix","c":"ColumnRepetition","l":"getColumn()"},{"p":"mklab.JGNN.adhoc","c":"ModelBuilder","l":"getConfigOrDefault(String, double)","u":"getConfigOrDefault(java.lang.String,double)"},{"p":"mklab.JGNN.adhoc","c":"ModelBuilder","l":"getConfigOrDefault(String, int)","u":"getConfigOrDefault(java.lang.String,int)"},{"p":"mklab.JGNN.core","c":"ThreadPool","l":"getCurrentThreadId()"},{"p":"mklab.JGNN.nn","c":"Model","l":"getDepthLastOperations()"},{"p":"mklab.JGNN.nn","c":"NNOperation","l":"getDescription()"},{"p":"mklab.JGNN.core","c":"Distribution","l":"getDeviation()"},{"p":"mklab.JGNN.core.distribution","c":"Normal","l":"getDeviation()"},{"p":"mklab.JGNN.core.distribution","c":"Uniform","l":"getDeviation()"},{"p":"mklab.JGNN.core","c":"Tensor","l":"getDimensionName()"},{"p":"mklab.JGNN.core","c":"Matrix","l":"getDimensionSize(String)","u":"getDimensionSize(java.lang.String)"},{"p":"mklab.JGNN.adhoc","c":"ModelBuilder","l":"getExecutionGraphDot()"},{"p":"mklab.JGNN.adhoc","c":"IdConverter","l":"getId(Object)","u":"getId(java.lang.Object)"},{"p":"mklab.JGNN.nn","c":"Model","l":"getInputs()"},{"p":"mklab.JGNN.nn","c":"NNOperation","l":"getInputs()"},{"p":"mklab.JGNN.core","c":"ThreadPool","l":"getInstance()"},{"p":"mklab.JGNN.core.util","c":"FastEntry","l":"getKey()"},{"p":"mklab.JGNN.nn","c":"NNOperation","l":"getLastTapeError()"},{"p":"mklab.JGNN.core","c":"Distribution","l":"getMean()"},{"p":"mklab.JGNN.core.distribution","c":"Normal","l":"getMean()"},{"p":"mklab.JGNN.core.distribution","c":"Uniform","l":"getMean()"},{"p":"mklab.JGNN.nn.operations","c":"LSTM.LSTMState","l":"getMemory()"},{"p":"mklab.JGNN.adhoc","c":"ModelBuilder","l":"getModel()"},{"p":"mklab.JGNN.nn.activations","c":"Exp","l":"getNonLinearity(int, 
double, double)","u":"getNonLinearity(int,double,double)"},{"p":"mklab.JGNN.nn.activations","c":"LRelu","l":"getNonLinearity(int, double, double)","u":"getNonLinearity(int,double,double)"},{"p":"mklab.JGNN.nn.activations","c":"NExp","l":"getNonLinearity(int, double, double)","u":"getNonLinearity(int,double,double)"},{"p":"mklab.JGNN.nn.activations","c":"PRelu","l":"getNonLinearity(int, double, double)","u":"getNonLinearity(int,double,double)"},{"p":"mklab.JGNN.nn.activations","c":"Relu","l":"getNonLinearity(int, double, double)","u":"getNonLinearity(int,double,double)"},{"p":"mklab.JGNN.nn.activations","c":"Sigmoid","l":"getNonLinearity(int, double, double)","u":"getNonLinearity(int,double,double)"},{"p":"mklab.JGNN.nn.activations","c":"Tanh","l":"getNonLinearity(int, double, double)","u":"getNonLinearity(int,double,double)"},{"p":"mklab.JGNN.nn","c":"NNOperation","l":"getNonLinearity(int, double, double)","u":"getNonLinearity(int,double,double)"},{"p":"mklab.JGNN.nn.operations","c":"MatMul","l":"getNonLinearity(int, double, double)","u":"getNonLinearity(int,double,double)"},{"p":"mklab.JGNN.nn.operations","c":"Multiply","l":"getNonLinearity(int, double, 
double)","u":"getNonLinearity(int,double,double)"},{"p":"mklab.JGNN.core","c":"Tensor","l":"getNonZeroElements()"},{"p":"mklab.JGNN.core.empy","c":"EmptyMatrix","l":"getNonZeroEntries()"},{"p":"mklab.JGNN.core.matrix","c":"ColumnRepetition","l":"getNonZeroEntries()"},{"p":"mklab.JGNN.core.matrix","c":"DenseMatrix","l":"getNonZeroEntries()"},{"p":"mklab.JGNN.core.matrix","c":"Diagonal","l":"getNonZeroEntries()"},{"p":"mklab.JGNN.core","c":"Matrix","l":"getNonZeroEntries()"},{"p":"mklab.JGNN.core.matrix","c":"RepeatMatrix","l":"getNonZeroEntries()"},{"p":"mklab.JGNN.core.matrix","c":"RowRepetition","l":"getNonZeroEntries()"},{"p":"mklab.JGNN.core.matrix","c":"SparseMatrix","l":"getNonZeroEntries()"},{"p":"mklab.JGNN.core.matrix","c":"SparseSymmetric","l":"getNonZeroEntries()"},{"p":"mklab.JGNN.core.matrix","c":"TransposedMatrix","l":"getNonZeroEntries()"},{"p":"mklab.JGNN.core.matrix","c":"VectorizedMatrix","l":"getNonZeroEntries()"},{"p":"mklab.JGNN.core.matrix","c":"WrapCols","l":"getNonZeroEntries()"},{"p":"mklab.JGNN.core.matrix","c":"WrapRows","l":"getNonZeroEntries()"},{"p":"mklab.JGNN.nn.operations","c":"LSTM","l":"getOptimizer()"},{"p":"mklab.JGNN.adhoc","c":"IdConverter","l":"getOrCreateId(Object)","u":"getOrCreateId(java.lang.Object)"},{"p":"mklab.JGNN.nn.operations","c":"LSTM.LSTMState","l":"getOutput()"},{"p":"mklab.JGNN.nn","c":"Model","l":"getOutputs()"},{"p":"mklab.JGNN.nn","c":"NNOperation","l":"getOutputs()"},{"p":"mklab.JGNN.nn","c":"Model","l":"getParameters()"},{"p":"mklab.JGNN.nn","c":"NNOperation","l":"getPrediction()"},{"p":"mklab.JGNN.core","c":"Matrix","l":"getRowName()"},{"p":"mklab.JGNN.core","c":"Matrix","l":"getRows()"},{"p":"mklab.JGNN.nn","c":"NNOperation","l":"getSimpleDescription()"},{"p":"mklab.JGNN.nn.operations","c":"Reshape","l":"getSimpleDescription()"},{"p":"mklab.JGNN.adhoc","c":"IdConverter","l":"getSlice()"},{"p":"mklab.JGNN.core.util","c":"FastEntry","l":"getValue()"},{"p":"mklab.JGNN.nn.optimizers","c":"GradientDescent","l":
"GradientDescent(double)","u":"%3Cinit%3E(double)"},{"p":"mklab.JGNN.nn.optimizers","c":"GradientDescent","l":"GradientDescent(double, double)","u":"%3Cinit%3E(double,double)"},{"p":"mklab.JGNN.adhoc","c":"Dataset","l":"graph()"},{"p":"mklab.JGNN.adhoc","c":"ModelBuilder","l":"hasComponent(String)","u":"hasComponent(java.lang.String)"},{"p":"mklab.JGNN.core.util","c":"Range","l":"hasNext()"},{"p":"mklab.JGNN.core.util","c":"Range2D","l":"hasNext()"},{"p":"mklab.JGNN.adhoc","c":"IdConverter","l":"IdConverter()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.adhoc","c":"IdConverter","l":"IdConverter(List)","u":"%3Cinit%3E(java.util.List)"},{"p":"mklab.JGNN.nn.operations","c":"Identity","l":"Identity()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.nn","c":"Model","l":"init(Initializer)","u":"init(mklab.JGNN.nn.Initializer)"},{"p":"mklab.JGNN.nn","c":"Initializer","l":"Initializer()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.core.tensor","c":"DenseTensor","l":"inverse()"},{"p":"mklab.JGNN.core","c":"Tensor","l":"inverse()"},{"p":"mklab.JGNN.nn.inputs","c":"Constant","l":"isCachable()"},{"p":"mklab.JGNN.nn.inputs","c":"Parameter","l":"isCachable()"},{"p":"mklab.JGNN.nn.inputs","c":"Variable","l":"isCachable()"},{"p":"mklab.JGNN.nn","c":"NNOperation","l":"isCachable()"},{"p":"mklab.JGNN.nn.operations","c":"Dropout","l":"isCachable()"},{"p":"mklab.JGNN.nn.operations","c":"From","l":"isCachable()"},{"p":"mklab.JGNN.nn.operations","c":"Reshape","l":"isCachable()"},{"p":"mklab.JGNN.nn.operations","c":"To","l":"isCachable()"},{"p":"mklab.JGNN.nn.inputs","c":"Constant","l":"isConstant()"},{"p":"mklab.JGNN.nn.inputs","c":"Parameter","l":"isConstant()"},{"p":"mklab.JGNN.nn.inputs","c":"Variable","l":"isConstant()"},{"p":"mklab.JGNN.nn","c":"NNOperation","l":"isConstant()"},{"p":"mklab.JGNN.nn.operations","c":"Dropout","l":"isEnabled()"},{"p":"mklab.JGNN.core","c":"Slice","l":"iterator()"},{"p":"mklab.JGNN.core","c":"Tensor","l":"iterator()"},{"p":"mklab.JGNN.core.util","c":"Range","l":"iterator()"},{"p
":"mklab.JGNN.core.util","c":"Range2D","l":"iterator()"},{"p":"mklab.JGNN.nn.initializers","c":"KaimingNormal","l":"KaimingNormal()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.nn.initializers","c":"KaimingUniform","l":"KaimingUniform()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.nn.activations","c":"L1","l":"L1()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.nn.activations","c":"L1","l":"L1(boolean)","u":"%3Cinit%3E(boolean)"},{"p":"mklab.JGNN.adhoc","c":"Dataset","l":"labels()"},{"p":"mklab.JGNN.adhoc.parsers","c":"FastBuilder","l":"layer(String)","u":"layer(java.lang.String)"},{"p":"mklab.JGNN.adhoc.parsers","c":"LayeredBuilder","l":"layer(String)","u":"layer(java.lang.String)"},{"p":"mklab.JGNN.adhoc.parsers","c":"LayeredBuilder","l":"LayeredBuilder()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.adhoc.parsers","c":"LayeredBuilder","l":"LayeredBuilder(String)","u":"%3Cinit%3E(java.lang.String)"},{"p":"mklab.JGNN.adhoc.parsers","c":"FastBuilder","l":"layerRepeat(String, int)","u":"layerRepeat(java.lang.String,int)"},{"p":"mklab.JGNN.adhoc.parsers","c":"LayeredBuilder","l":"layerRepeat(String, int)","u":"layerRepeat(java.lang.String,int)"},{"p":"mklab.JGNN.adhoc","c":"ModelBuilder","l":"load(Path)","u":"load(java.nio.file.Path)"},{"p":"mklab.JGNN.core.tensor","c":"DenseTensor","l":"log()"},{"p":"mklab.JGNN.core","c":"Tensor","l":"log()"},{"p":"mklab.JGNN.nn.operations","c":"Log","l":"Log()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.nn","c":"Loss","l":"Loss()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.nn.activations","c":"LRelu","l":"LRelu()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.nn.operations","c":"LSTM","l":"LSTM(Optimizer, int, int)","u":"%3Cinit%3E(mklab.JGNN.nn.Optimizer,int,int)"},{"p":"mklab.JGNN.nn.operations","c":"LSTM.LSTMState","l":"LSTMState(Tensor, 
Tensor)","u":"%3Cinit%3E(mklab.JGNN.core.Tensor,mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.nn.operations","c":"MatMul","l":"MatMul()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.core.matrix","c":"DenseMatrix","l":"matmul(Matrix)","u":"matmul(mklab.JGNN.core.Matrix)"},{"p":"mklab.JGNN.core","c":"Matrix","l":"matmul(Matrix)","u":"matmul(mklab.JGNN.core.Matrix)"},{"p":"mklab.JGNN.core.matrix","c":"VectorizedMatrix","l":"matmul(Matrix)","u":"matmul(mklab.JGNN.core.Matrix)"},{"p":"mklab.JGNN.core.matrix","c":"DenseMatrix","l":"matmul(Matrix, boolean, boolean)","u":"matmul(mklab.JGNN.core.Matrix,boolean,boolean)"},{"p":"mklab.JGNN.core","c":"Matrix","l":"matmul(Matrix, boolean, boolean)","u":"matmul(mklab.JGNN.core.Matrix,boolean,boolean)"},{"p":"mklab.JGNN.core.matrix","c":"VectorizedMatrix","l":"matmul(Matrix, boolean, boolean)","u":"matmul(mklab.JGNN.core.Matrix,boolean,boolean)"},{"p":"mklab.JGNN.core","c":"Tensor","l":"max()"},{"p":"mklab.JGNN.nn.pooling","c":"Max","l":"Max()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.nn.pooling","c":"Max","l":"Max(boolean)","u":"%3Cinit%3E(boolean)"},{"p":"mklab.JGNN.nn.pooling","c":"Mean","l":"Mean()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.nn.pooling","c":"Mean","l":"Mean(boolean)","u":"%3Cinit%3E(boolean)"},{"p":"mklab.JGNN.core","c":"Memory","l":"Memory()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.core","c":"Tensor","l":"min()"},{"p":"mklab.JGNN.nn","c":"Model","l":"Model()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.adhoc","c":"ModelBuilder","l":"ModelBuilder()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.adhoc","c":"ModelBuilder","l":"ModelBuilder(Model)","u":"%3Cinit%3E(mklab.JGNN.nn.Model)"},{"p":"mklab.JGNN.nn","c":"ModelTraining","l":"ModelTraining()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.nn.operations","c":"Multiply","l":"Multiply()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.core.tensor","c":"DenseTensor","l":"multiply(double)"},{"p":"mklab.JGNN.core","c":"Tensor","l":"multiply(double)"},{"p":"mklab.JGNN.core.tensor","c":"VectorizedTensor","l":"multiply(double
)"},{"p":"mklab.JGNN.core.tensor","c":"DenseTensor","l":"multiply(Tensor)","u":"multiply(mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.core","c":"Tensor","l":"multiply(Tensor)","u":"multiply(mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.core.tensor","c":"VectorizedTensor","l":"multiply(Tensor)","u":"multiply(mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.core.tensor","c":"DenseTensor","l":"negative()"},{"p":"mklab.JGNN.core","c":"Tensor","l":"negative()"},{"p":"mklab.JGNN.adhoc.parsers","c":"Neuralang","l":"Neuralang()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.nn.activations","c":"NExp","l":"NExp()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.core.util","c":"Range","l":"next()"},{"p":"mklab.JGNN.core.util","c":"Range2D","l":"next()"},{"p":"mklab.JGNN.core","c":"Tensor","l":"norm()"},{"p":"mklab.JGNN.core.distribution","c":"Normal","l":"Normal()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.core.distribution","c":"Normal","l":"Normal(double, double)","u":"%3Cinit%3E(double,double)"},{"p":"mklab.JGNN.core","c":"Tensor","l":"normalized()"},{"p":"mklab.JGNN.core","c":"Matrix","l":"onesMask()"},{"p":"mklab.JGNN.adhoc","c":"ModelBuilder","l":"operation(String)","u":"operation(java.lang.String)"},{"p":"mklab.JGNN.adhoc.parsers","c":"FastBuilder","l":"operation(String)","u":"operation(java.lang.String)"},{"p":"mklab.JGNN.adhoc.parsers","c":"LayeredBuilder","l":"operation(String)","u":"operation(java.lang.String)"},{"p":"mklab.JGNN.adhoc","c":"ModelBuilder","l":"out(String)","u":"out(java.lang.String)"},{"p":"mklab.JGNN.adhoc.parsers","c":"LayeredBuilder","l":"out(String)","u":"out(java.lang.String)"},{"p":"mklab.JGNN.nn.operations","c":"LSTM","l":"output(Tensor, LSTM.LSTMState)","u":"output(mklab.JGNN.core.Tensor,mklab.JGNN.nn.operations.LSTM.LSTMState)"},{"p":"mklab.JGNN.adhoc","c":"ModelBuilder","l":"param(String, double, Tensor)","u":"param(java.lang.String,double,mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.adhoc.parsers","c":"FastBuilder","l":"param(String, double, 
Tensor)","u":"param(java.lang.String,double,mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.adhoc.parsers","c":"LayeredBuilder","l":"param(String, double, Tensor)","u":"param(java.lang.String,double,mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.adhoc","c":"ModelBuilder","l":"param(String, Tensor)","u":"param(java.lang.String,mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.adhoc.parsers","c":"FastBuilder","l":"param(String, Tensor)","u":"param(java.lang.String,mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.adhoc.parsers","c":"LayeredBuilder","l":"param(String, Tensor)","u":"param(java.lang.String,mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.nn.inputs","c":"Parameter","l":"Parameter(Tensor)","u":"%3Cinit%3E(mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.nn.inputs","c":"Parameter","l":"Parameter(Tensor, double)","u":"%3Cinit%3E(mklab.JGNN.core.Tensor,double)"},{"p":"mklab.JGNN.adhoc.parsers","c":"Neuralang","l":"parse(Path)","u":"parse(java.nio.file.Path)"},{"p":"mklab.JGNN.adhoc.parsers","c":"Neuralang","l":"parse(String)","u":"parse(java.lang.String)"},{"p":"mklab.JGNN.core.empy","c":"EmptyMatrix","l":"persist()"},{"p":"mklab.JGNN.core.empy","c":"EmptyTensor","l":"persist()"},{"p":"mklab.JGNN.core.matrix","c":"AccessCol","l":"persist()"},{"p":"mklab.JGNN.core.matrix","c":"AccessRow","l":"persist()"},{"p":"mklab.JGNN.core.matrix","c":"ColumnRepetition","l":"persist()"},{"p":"mklab.JGNN.core.matrix","c":"DenseMatrix","l":"persist()"},{"p":"mklab.JGNN.core.matrix","c":"Diagonal","l":"persist()"},{"p":"mklab.JGNN.core.matrix","c":"RepeatMatrix","l":"persist()"},{"p":"mklab.JGNN.core.matrix","c":"RowRepetition","l":"persist()"},{"p":"mklab.JGNN.core.matrix","c":"SparseMatrix","l":"persist()"},{"p":"mklab.JGNN.core.matrix","c":"SparseSymmetric","l":"persist()"},{"p":"mklab.JGNN.core.matrix","c":"TransposedMatrix","l":"persist()"},{"p":"mklab.JGNN.core.matrix","c":"VectorizedMatrix","l":"persist()"},{"p":"mklab.JGNN.core.matrix","c":"WrapCols","l":"persist()"},{"p":"mklab.JGNN.core.matrix","c":"W
rapRows","l":"persist()"},{"p":"mklab.JGNN.core.tensor","c":"AccessSubtensor","l":"persist()"},{"p":"mklab.JGNN.core.tensor","c":"DenseTensor","l":"persist()"},{"p":"mklab.JGNN.core","c":"Tensor","l":"persist()"},{"p":"mklab.JGNN.core.tensor","c":"RepeatTensor","l":"persist()"},{"p":"mklab.JGNN.core.tensor","c":"SparseTensor","l":"persist()"},{"p":"mklab.JGNN.core.tensor","c":"VectorizedTensor","l":"persist()"},{"p":"mklab.JGNN.nn","c":"Model","l":"predict(List)","u":"predict(java.util.List)"},{"p":"mklab.JGNN.nn","c":"Model","l":"predict(Tensor...)","u":"predict(mklab.JGNN.core.Tensor...)"},{"p":"mklab.JGNN.nn.operations","c":"LSTM","l":"predict(Tensor[])","u":"predict(mklab.JGNN.core.Tensor[])"},{"p":"mklab.JGNN.nn.activations","c":"PRelu","l":"PRelu()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.adhoc","c":"ModelBuilder","l":"print()"},{"p":"mklab.JGNN.adhoc","c":"ModelBuilder","l":"printState()"},{"p":"mklab.JGNN.adhoc.datasets","c":"Pubmed","l":"Pubmed()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.core.tensor","c":"DenseTensor","l":"put(int, double)","u":"put(int,double)"},{"p":"mklab.JGNN.core.tensor","c":"VectorizedTensor","l":"put(int, double)","u":"put(int,double)"},{"p":"mklab.JGNN.core.empy","c":"EmptyMatrix","l":"put(long, double)","u":"put(long,double)"},{"p":"mklab.JGNN.core.empy","c":"EmptyTensor","l":"put(long, double)","u":"put(long,double)"},{"p":"mklab.JGNN.core.matrix","c":"AccessCol","l":"put(long, double)","u":"put(long,double)"},{"p":"mklab.JGNN.core.matrix","c":"AccessRow","l":"put(long, double)","u":"put(long,double)"},{"p":"mklab.JGNN.core.matrix","c":"ColumnRepetition","l":"put(long, double)","u":"put(long,double)"},{"p":"mklab.JGNN.core.matrix","c":"DenseMatrix","l":"put(long, double)","u":"put(long,double)"},{"p":"mklab.JGNN.core.matrix","c":"Diagonal","l":"put(long, double)","u":"put(long,double)"},{"p":"mklab.JGNN.core.matrix","c":"RepeatMatrix","l":"put(long, 
double)","u":"put(long,double)"},{"p":"mklab.JGNN.core.matrix","c":"RowRepetition","l":"put(long, double)","u":"put(long,double)"},{"p":"mklab.JGNN.core.matrix","c":"SparseMatrix","l":"put(long, double)","u":"put(long,double)"},{"p":"mklab.JGNN.core.matrix","c":"SparseSymmetric","l":"put(long, double)","u":"put(long,double)"},{"p":"mklab.JGNN.core.matrix","c":"TransposedMatrix","l":"put(long, double)","u":"put(long,double)"},{"p":"mklab.JGNN.core.matrix","c":"VectorizedMatrix","l":"put(long, double)","u":"put(long,double)"},{"p":"mklab.JGNN.core.matrix","c":"WrapCols","l":"put(long, double)","u":"put(long,double)"},{"p":"mklab.JGNN.core.matrix","c":"WrapRows","l":"put(long, double)","u":"put(long,double)"},{"p":"mklab.JGNN.core.tensor","c":"AccessSubtensor","l":"put(long, double)","u":"put(long,double)"},{"p":"mklab.JGNN.core.tensor","c":"DenseTensor","l":"put(long, double)","u":"put(long,double)"},{"p":"mklab.JGNN.core","c":"Tensor","l":"put(long, double)","u":"put(long,double)"},{"p":"mklab.JGNN.core.tensor","c":"RepeatTensor","l":"put(long, double)","u":"put(long,double)"},{"p":"mklab.JGNN.core.tensor","c":"SparseTensor","l":"put(long, double)","u":"put(long,double)"},{"p":"mklab.JGNN.core.tensor","c":"VectorizedTensor","l":"put(long, double)","u":"put(long,double)"},{"p":"mklab.JGNN.core","c":"Matrix","l":"put(long, long, double)","u":"put(long,long,double)"},{"p":"mklab.JGNN.core.tensor","c":"DenseTensor","l":"putAdd(int, double)","u":"putAdd(int,double)"},{"p":"mklab.JGNN.core.tensor","c":"VectorizedTensor","l":"putAdd(int, double)","u":"putAdd(int,double)"},{"p":"mklab.JGNN.core","c":"Tensor","l":"putAdd(long, double)","u":"putAdd(long,double)"},{"p":"mklab.JGNN.core","c":"Slice","l":"range(double, double)","u":"range(double,double)"},{"p":"mklab.JGNN.core","c":"Slice","l":"range(int, int)","u":"range(int,int)"},{"p":"mklab.JGNN.core.util","c":"Range","l":"Range(long, 
long)","u":"%3Cinit%3E(long,long)"},{"p":"mklab.JGNN.core.util","c":"Range2D","l":"Range2D(long, long, long, long)","u":"%3Cinit%3E(long,long,long,long)"},{"p":"mklab.JGNN.nn.operations","c":"Reduce","l":"Reduce()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.core","c":"Memory.Scope","l":"register(double[])"},{"p":"mklab.JGNN.nn.optimizers","c":"Regularization","l":"Regularization(Optimizer, double)","u":"%3Cinit%3E(mklab.JGNN.nn.Optimizer,double)"},{"p":"mklab.JGNN.core.empy","c":"EmptyMatrix","l":"release()"},{"p":"mklab.JGNN.core.empy","c":"EmptyTensor","l":"release()"},{"p":"mklab.JGNN.core.matrix","c":"AccessCol","l":"release()"},{"p":"mklab.JGNN.core.matrix","c":"AccessRow","l":"release()"},{"p":"mklab.JGNN.core.matrix","c":"ColumnRepetition","l":"release()"},{"p":"mklab.JGNN.core.matrix","c":"DenseMatrix","l":"release()"},{"p":"mklab.JGNN.core.matrix","c":"Diagonal","l":"release()"},{"p":"mklab.JGNN.core.matrix","c":"RepeatMatrix","l":"release()"},{"p":"mklab.JGNN.core.matrix","c":"RowRepetition","l":"release()"},{"p":"mklab.JGNN.core.matrix","c":"SparseMatrix","l":"release()"},{"p":"mklab.JGNN.core.matrix","c":"SparseSymmetric","l":"release()"},{"p":"mklab.JGNN.core.matrix","c":"TransposedMatrix","l":"release()"},{"p":"mklab.JGNN.core.matrix","c":"VectorizedMatrix","l":"release()"},{"p":"mklab.JGNN.core.matrix","c":"WrapCols","l":"release()"},{"p":"mklab.JGNN.core.matrix","c":"WrapRows","l":"release()"},{"p":"mklab.JGNN.core.tensor","c":"AccessSubtensor","l":"release()"},{"p":"mklab.JGNN.core.tensor","c":"DenseTensor","l":"release()"},{"p":"mklab.JGNN.core","c":"Tensor","l":"release()"},{"p":"mklab.JGNN.core.tensor","c":"RepeatTensor","l":"release()"},{"p":"mklab.JGNN.core.tensor","c":"SparseTensor","l":"release()"},{"p":"mklab.JGNN.core.tensor","c":"VectorizedTensor","l":"release()"},{"p":"mklab.JGNN.core","c":"Memory","l":"release(double[])"},{"p":"mklab.JGNN.nn.activations","c":"Relu","l":"Relu()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.core.util","c":"Loss","l":"re
lu(double)"},{"p":"mklab.JGNN.core.util","c":"Loss","l":"relu(Tensor)","u":"relu(mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.core.util","c":"Loss","l":"reluDerivative(double)"},{"p":"mklab.JGNN.core.util","c":"Loss","l":"reluDerivative(Tensor)","u":"reluDerivative(mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.adhoc.parsers","c":"FastBuilder","l":"rememberAs(String)","u":"rememberAs(java.lang.String)"},{"p":"mklab.JGNN.adhoc.parsers","c":"LayeredBuilder","l":"rememberAs(String)","u":"rememberAs(java.lang.String)"},{"p":"mklab.JGNN.nn.operations","c":"Repeat","l":"Repeat()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.core.matrix","c":"RepeatMatrix","l":"RepeatMatrix(double, long, long)","u":"%3Cinit%3E(double,long,long)"},{"p":"mklab.JGNN.core.tensor","c":"RepeatTensor","l":"RepeatTensor(double, long)","u":"%3Cinit%3E(double,long)"},{"p":"mklab.JGNN.nn.loss.report","c":"VerboseLoss","l":"reset()"},{"p":"mklab.JGNN.nn","c":"Optimizer","l":"reset()"},{"p":"mklab.JGNN.nn.optimizers","c":"Adam","l":"reset()"},{"p":"mklab.JGNN.nn.optimizers","c":"BatchOptimizer","l":"reset()"},{"p":"mklab.JGNN.nn.optimizers","c":"GradientDescent","l":"reset()"},{"p":"mklab.JGNN.nn.optimizers","c":"Regularization","l":"reset()"},{"p":"mklab.JGNN.nn.operations","c":"Reshape","l":"Reshape(long, long)","u":"%3Cinit%3E(long,long)"},{"p":"mklab.JGNN.core.matrix","c":"RowRepetition","l":"RowRepetition(Tensor, 
long)","u":"%3Cinit%3E(mklab.JGNN.core.Tensor,long)"},{"p":"mklab.JGNN.nn","c":"NNOperation","l":"run(List)","u":"run(java.util.List)"},{"p":"mklab.JGNN.nn","c":"NNOperation","l":"run(Tensor...)","u":"run(mklab.JGNN.core.Tensor...)"},{"p":"mklab.JGNN.adhoc","c":"ModelBuilder","l":"runModel(ArrayList)","u":"runModel(java.util.ArrayList)"},{"p":"mklab.JGNN.adhoc","c":"ModelBuilder","l":"runModel(Tensor...)","u":"runModel(mklab.JGNN.core.Tensor...)"},{"p":"mklab.JGNN.nn","c":"NNOperation","l":"runPrediction()"},{"p":"mklab.JGNN.nn","c":"NNOperation","l":"runPredictionAndAutosize()"},{"p":"mklab.JGNN.core.distribution","c":"Normal","l":"sample()"},{"p":"mklab.JGNN.core","c":"Distribution","l":"sample()"},{"p":"mklab.JGNN.core.distribution","c":"Uniform","l":"sample()"},{"p":"mklab.JGNN.adhoc","c":"Dataset","l":"samples()"},{"p":"mklab.JGNN.core","c":"Slice","l":"samplesAsFeatures()"},{"p":"mklab.JGNN.adhoc","c":"ModelBuilder","l":"save(Path)","u":"save(java.nio.file.Path)"},{"p":"mklab.JGNN.core","c":"Memory","l":"scope()"},{"p":"mklab.JGNN.core.tensor","c":"DenseTensor","l":"selfAbs()"},{"p":"mklab.JGNN.core","c":"Tensor","l":"selfAbs()"},{"p":"mklab.JGNN.core.tensor","c":"DenseTensor","l":"selfAdd(double)"},{"p":"mklab.JGNN.core","c":"Tensor","l":"selfAdd(double)"},{"p":"mklab.JGNN.core.tensor","c":"DenseTensor","l":"selfAdd(Tensor)","u":"selfAdd(mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.core","c":"Tensor","l":"selfAdd(Tensor)","u":"selfAdd(mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.core.tensor","c":"VectorizedTensor","l":"selfAdd(Tensor)","u":"selfAdd(mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.core","c":"Tensor","l":"selfAdd(Tensor, 
double)","u":"selfAdd(mklab.JGNN.core.Tensor,double)"},{"p":"mklab.JGNN.core.tensor","c":"DenseTensor","l":"selfExpMinusOne()"},{"p":"mklab.JGNN.core","c":"Tensor","l":"selfExpMinusOne()"},{"p":"mklab.JGNN.core.tensor","c":"DenseTensor","l":"selfInverse()"},{"p":"mklab.JGNN.core","c":"Tensor","l":"selfInverse()"},{"p":"mklab.JGNN.core.tensor","c":"DenseTensor","l":"selfLog()"},{"p":"mklab.JGNN.core","c":"Tensor","l":"selfLog()"},{"p":"mklab.JGNN.core.tensor","c":"DenseTensor","l":"selfMultiply(double)"},{"p":"mklab.JGNN.core","c":"Tensor","l":"selfMultiply(double)"},{"p":"mklab.JGNN.core.tensor","c":"VectorizedTensor","l":"selfMultiply(double)"},{"p":"mklab.JGNN.core.tensor","c":"DenseTensor","l":"selfMultiply(Tensor)","u":"selfMultiply(mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.core","c":"Tensor","l":"selfMultiply(Tensor)","u":"selfMultiply(mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.core.tensor","c":"VectorizedTensor","l":"selfMultiply(Tensor)","u":"selfMultiply(mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.core.tensor","c":"DenseTensor","l":"selfNegative()"},{"p":"mklab.JGNN.core","c":"Tensor","l":"selfNegative()"},{"p":"mklab.JGNN.core.tensor","c":"DenseTensor","l":"selfSqrt()"},{"p":"mklab.JGNN.core","c":"Tensor","l":"selfSqrt()"},{"p":"mklab.JGNN.core.tensor","c":"DenseTensor","l":"selfSubtract(Tensor)","u":"selfSubtract(mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.core","c":"Tensor","l":"selfSubtract(Tensor)","u":"selfSubtract(mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.core.tensor","c":"VectorizedTensor","l":"selfSubtract(Tensor)","u":"selfSubtract(mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.nn.inputs","c":"Parameter","l":"set(Tensor)","u":"set(mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.core","c":"Matrix","l":"setColName(String)","u":"setColName(java.lang.String)"},{"p":"mklab.JGNN.nn","c":"NNOperation","l":"setDescription(String)","u":"setDescription(java.lang.String)"},{"p":"mklab.JGNN.core.distribution","c":"Normal","l":"setDeviation(double)"},{"p":"mklab.JGNN.co
re","c":"Distribution","l":"setDeviation(double)"},{"p":"mklab.JGNN.core.distribution","c":"Uniform","l":"setDeviation(double)"},{"p":"mklab.JGNN.core","c":"Matrix","l":"setDiagonal(long, double)","u":"setDiagonal(long,double)"},{"p":"mklab.JGNN.core","c":"Tensor","l":"setDimensionName(String)","u":"setDimensionName(java.lang.String)"},{"p":"mklab.JGNN.nn.pooling","c":"Sort","l":"setDimensionName(String)","u":"setDimensionName(java.lang.String)"},{"p":"mklab.JGNN.adhoc","c":"IdConverter","l":"setDimensionName(String, String)","u":"setDimensionName(java.lang.String,java.lang.String)"},{"p":"mklab.JGNN.core","c":"Matrix","l":"setDimensionName(String, String)","u":"setDimensionName(java.lang.String,java.lang.String)"},{"p":"mklab.JGNN.nn.operations","c":"Reshape","l":"setDimensionName(String, String)","u":"setDimensionName(java.lang.String,java.lang.String)"},{"p":"mklab.JGNN.core","c":"Matrix","l":"setDimensionName(Tensor)","u":"setDimensionName(mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.core","c":"Tensor","l":"setDimensionName(Tensor)","u":"setDimensionName(mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.nn.operations","c":"Dropout","l":"setEnabled(boolean)"},{"p":"mklab.JGNN.nn","c":"ModelTraining","l":"setEpochs(int)"},{"p":"mklab.JGNN.nn.loss.report","c":"VerboseLoss","l":"setInterval(int)"},{"p":"mklab.JGNN.core.util","c":"FastEntry","l":"setKey(K)"},{"p":"mklab.JGNN.nn","c":"ModelTraining","l":"setLoss(Loss)","u":"setLoss(mklab.JGNN.nn.Loss)"},{"p":"mklab.JGNN.core","c":"Matrix","l":"setMainDiagonal(double)"},{"p":"mklab.JGNN.core.distribution","c":"Normal","l":"setMean(double)"},{"p":"mklab.JGNN.core","c":"Distribution","l":"setMean(double)"},{"p":"mklab.JGNN.core.distribution","c":"Uniform","l":"setMean(double)"},{"p":"mklab.JGNN.nn.loss","c":"CategoricalCrossEntropy","l":"setMeanReduction(boolean)"},{"p":"mklab.JGNN.nn","c":"ModelTraining","l":"setNumBatches(int)"},{"p":"mklab.JGNN.nn","c":"ModelTraining","l":"setOptimizer(Optimizer)","u":"setOptimizer(mklab.JGNN.
nn.Optimizer)"},{"p":"mklab.JGNN.nn","c":"ModelTraining","l":"setParallelizedStochasticGradientDescent(boolean)"},{"p":"mklab.JGNN.nn","c":"ModelTraining","l":"setPatience(int)"},{"p":"mklab.JGNN.core.distribution","c":"Uniform","l":"setRange(double, double)","u":"setRange(double,double)"},{"p":"mklab.JGNN.core","c":"Matrix","l":"setRowName(String)","u":"setRowName(java.lang.String)"},{"p":"mklab.JGNN.core.distribution","c":"Normal","l":"setSeed(long)"},{"p":"mklab.JGNN.core","c":"Distribution","l":"setSeed(long)"},{"p":"mklab.JGNN.core.distribution","c":"Uniform","l":"setSeed(long)"},{"p":"mklab.JGNN.nn.loss.report","c":"VerboseLoss","l":"setStream(PrintStream)","u":"setStream(java.io.PrintStream)"},{"p":"mklab.JGNN.nn.inputs","c":"Variable","l":"setTo(Tensor)","u":"setTo(mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.core","c":"Matrix","l":"setToASymmetricNormalization()"},{"p":"mklab.JGNN.core","c":"Tensor","l":"setToNormalized()"},{"p":"mklab.JGNN.core","c":"Tensor","l":"setToOnes()"},{"p":"mklab.JGNN.core","c":"Tensor","l":"setToProbability()"},{"p":"mklab.JGNN.core","c":"Tensor","l":"setToRandom()"},{"p":"mklab.JGNN.core","c":"Tensor","l":"setToRandom(Distribution)","u":"setToRandom(mklab.JGNN.core.Distribution)"},{"p":"mklab.JGNN.core","c":"Matrix","l":"setToSymmetricNormalization()"},{"p":"mklab.JGNN.core","c":"Tensor","l":"setToUniform()"},{"p":"mklab.JGNN.core","c":"Tensor","l":"setToZero()"},{"p":"mklab.JGNN.nn","c":"ModelTraining","l":"setValidationLoss(Loss)","u":"setValidationLoss(mklab.JGNN.nn.Loss)"},{"p":"mklab.JGNN.core.util","c":"FastEntry","l":"setValue(V)"},{"p":"mklab.JGNN.nn","c":"ModelTraining","l":"setVerbose(boolean)"},{"p":"mklab.JGNN.core.matrix","c":"WrapCols","l":"setZeroCopyType(Matrix)","u":"setZeroCopyType(mklab.JGNN.core.Matrix)"},{"p":"mklab.JGNN.core.matrix","c":"WrapRows","l":"setZeroCopyType(Matrix)","u":"setZeroCopyType(mklab.JGNN.core.Matrix)"},{"p":"mklab.JGNN.core","c":"Slice","l":"shuffle()"},{"p":"mklab.JGNN.core","c":"Slice",
"l":"shuffle(int)"},{"p":"mklab.JGNN.nn.activations","c":"Sigmoid","l":"Sigmoid()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.core.util","c":"Loss","l":"sigmoid(double)"},{"p":"mklab.JGNN.core.util","c":"Loss","l":"sigmoid(Tensor)","u":"sigmoid(mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.core.util","c":"Loss","l":"sigmoidDerivative(double)"},{"p":"mklab.JGNN.core.util","c":"Loss","l":"sigmoidDerivative(Tensor)","u":"sigmoidDerivative(mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.adhoc","c":"IdConverter","l":"size()"},{"p":"mklab.JGNN.core","c":"Slice","l":"size()"},{"p":"mklab.JGNN.core","c":"Tensor","l":"size()"},{"p":"mklab.JGNN.core","c":"Slice","l":"Slice(Iterable)","u":"%3Cinit%3E(java.lang.Iterable)"},{"p":"mklab.JGNN.nn.pooling","c":"SoftMax","l":"SoftMax()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.nn.pooling","c":"SoftMax","l":"SoftMax(boolean)","u":"%3Cinit%3E(boolean)"},{"p":"mklab.JGNN.core.util","c":"Sort","l":"Sort()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.nn.pooling","c":"Sort","l":"Sort(int)","u":"%3Cinit%3E(int)"},{"p":"mklab.JGNN.core.util","c":"Sort","l":"sortedIndexes(ArrayList)","u":"sortedIndexes(java.util.ArrayList)"},{"p":"mklab.JGNN.core.util","c":"Sort","l":"sortedIndexes(double[])"},{"p":"mklab.JGNN.core.matrix","c":"SparseMatrix","l":"SparseMatrix(long, long)","u":"%3Cinit%3E(long,long)"},{"p":"mklab.JGNN.core.matrix","c":"SparseSymmetric","l":"SparseSymmetric(long, 
long)","u":"%3Cinit%3E(long,long)"},{"p":"mklab.JGNN.core.tensor","c":"SparseTensor","l":"SparseTensor()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.core.tensor","c":"SparseTensor","l":"SparseTensor(long)","u":"%3Cinit%3E(long)"},{"p":"mklab.JGNN.core.tensor","c":"VectorizedTensor","l":"SPECIES"},{"p":"mklab.JGNN.core.tensor","c":"DenseTensor","l":"sqrt()"},{"p":"mklab.JGNN.core","c":"Tensor","l":"sqrt()"},{"p":"mklab.JGNN.nn.operations","c":"LSTM","l":"startTape()"},{"p":"mklab.JGNN.core","c":"ThreadPool","l":"submit(Runnable)","u":"submit(java.lang.Runnable)"},{"p":"mklab.JGNN.core.tensor","c":"DenseTensor","l":"subtract(Tensor)","u":"subtract(mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.core","c":"Tensor","l":"subtract(Tensor)","u":"subtract(mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.core.tensor","c":"VectorizedTensor","l":"subtract(Tensor)","u":"subtract(mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.core","c":"Tensor","l":"sum()"},{"p":"mklab.JGNN.nn.pooling","c":"Sum","l":"Sum()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.nn.pooling","c":"Sum","l":"Sum(boolean)","u":"%3Cinit%3E(boolean)"},{"p":"mklab.JGNN.core","c":"Matrix","l":"symmetricNormalization()"},{"p":"mklab.JGNN.nn.activations","c":"Tanh","l":"Tanh()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.core.util","c":"Loss","l":"tanh(double)"},{"p":"mklab.JGNN.core.util","c":"Loss","l":"tanh(Tensor)","u":"tanh(mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.core.util","c":"Loss","l":"tanhDerivative(double)"},{"p":"mklab.JGNN.core.util","c":"Loss","l":"tanhDerivative(Tensor)","u":"tanhDerivative(mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.core.matrix","c":"VectorizedMatrix","l":"tensor"},{"p":"mklab.JGNN.core","c":"Tensor","l":"Tensor(long)","u":"%3Cinit%3E(long)"},{"p":"mklab.JGNN.nn.operations","c":"To","l":"To()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.core","c":"Tensor","l":"toArray()"},{"p":"mklab.JGNN.core","c":"Matrix","l":"toDense()"},{"p":"mklab.JGNN.core","c":"Tensor","l":"toDouble()"},{"p":"mklab.JGNN.core","c":"Matrix","l":"toNonZ
eroString()"},{"p":"mklab.JGNN.core","c":"Tensor","l":"toProbability()"},{"p":"mklab.JGNN.core","c":"Matrix","l":"toSparse()"},{"p":"mklab.JGNN.core","c":"Matrix","l":"toString()"},{"p":"mklab.JGNN.core","c":"Tensor","l":"toString()"},{"p":"mklab.JGNN.core.util","c":"FastEntry","l":"toString()"},{"p":"mklab.JGNN.nn","c":"Model","l":"train(Loss, Optimizer, List, List)","u":"train(mklab.JGNN.nn.Loss,mklab.JGNN.nn.Optimizer,java.util.List,java.util.List)"},{"p":"mklab.JGNN.nn","c":"Model","l":"train(Loss, Optimizer, List, List, List)","u":"train(mklab.JGNN.nn.Loss,mklab.JGNN.nn.Optimizer,java.util.List,java.util.List,java.util.List)"},{"p":"mklab.JGNN.nn","c":"ModelTraining","l":"train(Model, Matrix, Matrix, Slice, Slice)","u":"train(mklab.JGNN.nn.Model,mklab.JGNN.core.Matrix,mklab.JGNN.core.Matrix,mklab.JGNN.core.Slice,mklab.JGNN.core.Slice)"},{"p":"mklab.JGNN.nn","c":"Model","l":"train(ModelTraining, Matrix, Matrix, Slice, Slice)","u":"train(mklab.JGNN.nn.ModelTraining,mklab.JGNN.core.Matrix,mklab.JGNN.core.Matrix,mklab.JGNN.core.Slice,mklab.JGNN.core.Slice)"},{"p":"mklab.JGNN.nn.operations","c":"LSTM","l":"train(Tensor[], Tensor)","u":"train(mklab.JGNN.core.Tensor[],mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.nn.operations","c":"LSTM","l":"trainOnOutputError(Tensor[], Tensor)","u":"trainOnOutputError(mklab.JGNN.core.Tensor[],mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.nn","c":"Model","l":"trainTowardsZero(Optimizer, 
List)","u":"trainTowardsZero(mklab.JGNN.nn.Optimizer,java.util.List)"},{"p":"mklab.JGNN.core","c":"Matrix","l":"transform(Tensor)","u":"transform(mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.nn.operations","c":"Transpose","l":"Transpose()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.core","c":"Matrix","l":"transposed()"},{"p":"mklab.JGNN.core.matrix","c":"TransposedMatrix","l":"TransposedMatrix(Matrix)","u":"%3Cinit%3E(mklab.JGNN.core.Matrix)"},{"p":"mklab.JGNN.core.empy","c":"EmptyMatrix","l":"traverseNonZeroElements()"},{"p":"mklab.JGNN.core.empy","c":"EmptyTensor","l":"traverseNonZeroElements()"},{"p":"mklab.JGNN.core.matrix","c":"AccessCol","l":"traverseNonZeroElements()"},{"p":"mklab.JGNN.core.matrix","c":"AccessRow","l":"traverseNonZeroElements()"},{"p":"mklab.JGNN.core.matrix","c":"ColumnRepetition","l":"traverseNonZeroElements()"},{"p":"mklab.JGNN.core.matrix","c":"DenseMatrix","l":"traverseNonZeroElements()"},{"p":"mklab.JGNN.core.matrix","c":"Diagonal","l":"traverseNonZeroElements()"},{"p":"mklab.JGNN.core.matrix","c":"RepeatMatrix","l":"traverseNonZeroElements()"},{"p":"mklab.JGNN.core.matrix","c":"RowRepetition","l":"traverseNonZeroElements()"},{"p":"mklab.JGNN.core.matrix","c":"SparseMatrix","l":"traverseNonZeroElements()"},{"p":"mklab.JGNN.core.matrix","c":"SparseSymmetric","l":"traverseNonZeroElements()"},{"p":"mklab.JGNN.core.matrix","c":"TransposedMatrix","l":"traverseNonZeroElements()"},{"p":"mklab.JGNN.core.matrix","c":"VectorizedMatrix","l":"traverseNonZeroElements()"},{"p":"mklab.JGNN.core.matrix","c":"WrapCols","l":"traverseNonZeroElements()"},{"p":"mklab.JGNN.core.matrix","c":"WrapRows","l":"traverseNonZeroElements()"},{"p":"mklab.JGNN.core.tensor","c":"AccessSubtensor","l":"traverseNonZeroElements()"},{"p":"mklab.JGNN.core.tensor","c":"DenseTensor","l":"traverseNonZeroElements()"},{"p":"mklab.JGNN.core.tensor","c":"RepeatTensor","l":"traverseNonZeroElements()"},{"p":"mklab.JGNN.core.tensor","c":"SparseTensor","l":"traverseNonZeroElements()"},{"p":"mkl
ab.JGNN.core","c":"Tensor","l":"traverseNonZeroElements()"},{"p":"mklab.JGNN.core.tensor","c":"VectorizedTensor","l":"traverseNonZeroElements()"},{"p":"mklab.JGNN.core.distribution","c":"Uniform","l":"Uniform()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.core.distribution","c":"Uniform","l":"Uniform(double, double)","u":"%3Cinit%3E(double,double)"},{"p":"mklab.JGNN.core","c":"Memory.Scope","l":"unregister(double[])"},{"p":"mklab.JGNN.nn","c":"Optimizer","l":"update(Tensor, Tensor)","u":"update(mklab.JGNN.core.Tensor,mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.nn.optimizers","c":"Adam","l":"update(Tensor, Tensor)","u":"update(mklab.JGNN.core.Tensor,mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.nn.optimizers","c":"BatchOptimizer","l":"update(Tensor, Tensor)","u":"update(mklab.JGNN.core.Tensor,mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.nn.optimizers","c":"GradientDescent","l":"update(Tensor, Tensor)","u":"update(mklab.JGNN.core.Tensor,mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.nn.optimizers","c":"Regularization","l":"update(Tensor, Tensor)","u":"update(mklab.JGNN.core.Tensor,mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.nn.optimizers","c":"BatchOptimizer","l":"updateAll()"},{"p":"mklab.JGNN.nn.operations","c":"LSTM","l":"updateTape(Tensor, LSTM.LSTMState, 
Tensor)","u":"updateTape(mklab.JGNN.core.Tensor,mklab.JGNN.nn.operations.LSTM.LSTMState,mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.core.tensor","c":"DenseTensor","l":"values"},{"p":"mklab.JGNN.core.tensor","c":"VectorizedTensor","l":"values"},{"p":"mklab.JGNN.adhoc","c":"ModelBuilder","l":"var(String)","u":"var(java.lang.String)"},{"p":"mklab.JGNN.adhoc.parsers","c":"LayeredBuilder","l":"var(String)","u":"var(java.lang.String)"},{"p":"mklab.JGNN.adhoc.parsers","c":"Neuralang","l":"var(String)","u":"var(java.lang.String)"},{"p":"mklab.JGNN.nn.inputs","c":"Variable","l":"Variable()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.nn.initializers","c":"VariancePreservingInitializer","l":"VariancePreservingInitializer()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.core","c":"Tensor","l":"vectorization"},{"p":"mklab.JGNN.core.matrix","c":"VectorizedMatrix","l":"VectorizedMatrix(long, long)","u":"%3Cinit%3E(long,long)"},{"p":"mklab.JGNN.core.tensor","c":"VectorizedTensor","l":"VectorizedTensor()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.core.tensor","c":"VectorizedTensor","l":"VectorizedTensor(double...)","u":"%3Cinit%3E(double...)"},{"p":"mklab.JGNN.core.tensor","c":"VectorizedTensor","l":"VectorizedTensor(Iterator)","u":"%3Cinit%3E(java.util.Iterator)"},{"p":"mklab.JGNN.core.tensor","c":"VectorizedTensor","l":"VectorizedTensor(long)","u":"%3Cinit%3E(long)"},{"p":"mklab.JGNN.core.tensor","c":"VectorizedTensor","l":"VectorizedTensor(String)","u":"%3Cinit%3E(java.lang.String)"},{"p":"mklab.JGNN.nn.loss.report","c":"VerboseLoss","l":"VerboseLoss(Loss)","u":"%3Cinit%3E(mklab.JGNN.nn.Loss)"},{"p":"mklab.JGNN.nn","c":"NNOperation","l":"view()"},{"p":"mklab.JGNN.core","c":"ThreadPool","l":"waitForConclusion()"},{"p":"mklab.JGNN.core.matrix","c":"WrapCols","l":"WrapCols(List)","u":"%3Cinit%3E(java.util.List)"},{"p":"mklab.JGNN.core.matrix","c":"WrapCols","l":"WrapCols(Tensor...)","u":"%3Cinit%3E(mklab.JGNN.core.Tensor...)"},{"p":"mklab.JGNN.core.matrix","c":"WrapRows","l":"WrapRows(List)","u":"%3Cini
t%3E(java.util.List)"},{"p":"mklab.JGNN.core.matrix","c":"WrapRows","l":"WrapRows(Tensor...)","u":"%3Cinit%3E(mklab.JGNN.core.Tensor...)"},{"p":"mklab.JGNN.nn.initializers","c":"XavierNormal","l":"XavierNormal()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.nn.initializers","c":"XavierUniform","l":"XavierUniform()","u":"%3Cinit%3E()"},{"p":"mklab.JGNN.core","c":"Matrix","l":"zeroCopy()"},{"p":"mklab.JGNN.core","c":"Tensor","l":"zeroCopy()"},{"p":"mklab.JGNN.core.empy","c":"EmptyTensor","l":"zeroCopy(long)"},{"p":"mklab.JGNN.core.matrix","c":"AccessCol","l":"zeroCopy(long)"},{"p":"mklab.JGNN.core.matrix","c":"AccessRow","l":"zeroCopy(long)"},{"p":"mklab.JGNN.core","c":"Matrix","l":"zeroCopy(long)"},{"p":"mklab.JGNN.core.tensor","c":"AccessSubtensor","l":"zeroCopy(long)"},{"p":"mklab.JGNN.core.tensor","c":"DenseTensor","l":"zeroCopy(long)"},{"p":"mklab.JGNN.core.tensor","c":"RepeatTensor","l":"zeroCopy(long)"},{"p":"mklab.JGNN.core.tensor","c":"SparseTensor","l":"zeroCopy(long)"},{"p":"mklab.JGNN.core.tensor","c":"VectorizedTensor","l":"zeroCopy(long)"},{"p":"mklab.JGNN.core","c":"Tensor","l":"zeroCopy(long)"},{"p":"mklab.JGNN.core.empy","c":"EmptyMatrix","l":"zeroCopy(long, long)","u":"zeroCopy(long,long)"},{"p":"mklab.JGNN.core.matrix","c":"ColumnRepetition","l":"zeroCopy(long, long)","u":"zeroCopy(long,long)"},{"p":"mklab.JGNN.core.matrix","c":"DenseMatrix","l":"zeroCopy(long, long)","u":"zeroCopy(long,long)"},{"p":"mklab.JGNN.core.matrix","c":"Diagonal","l":"zeroCopy(long, long)","u":"zeroCopy(long,long)"},{"p":"mklab.JGNN.core.matrix","c":"RepeatMatrix","l":"zeroCopy(long, long)","u":"zeroCopy(long,long)"},{"p":"mklab.JGNN.core.matrix","c":"RowRepetition","l":"zeroCopy(long, long)","u":"zeroCopy(long,long)"},{"p":"mklab.JGNN.core.matrix","c":"SparseMatrix","l":"zeroCopy(long, long)","u":"zeroCopy(long,long)"},{"p":"mklab.JGNN.core.matrix","c":"SparseSymmetric","l":"zeroCopy(long, 
long)","u":"zeroCopy(long,long)"},{"p":"mklab.JGNN.core.matrix","c":"TransposedMatrix","l":"zeroCopy(long, long)","u":"zeroCopy(long,long)"},{"p":"mklab.JGNN.core.matrix","c":"VectorizedMatrix","l":"zeroCopy(long, long)","u":"zeroCopy(long,long)"},{"p":"mklab.JGNN.core.matrix","c":"WrapCols","l":"zeroCopy(long, long)","u":"zeroCopy(long,long)"},{"p":"mklab.JGNN.core.matrix","c":"WrapRows","l":"zeroCopy(long, long)","u":"zeroCopy(long,long)"},{"p":"mklab.JGNN.core","c":"Matrix","l":"zeroCopy(long, long)","u":"zeroCopy(long,long)"},{"p":"mklab.JGNN.core","c":"Matrix","l":"zeroCopy(Tensor)","u":"zeroCopy(mklab.JGNN.core.Tensor)"},{"p":"mklab.JGNN.core","c":"Tensor","l":"zeroCopy(Tensor)","u":"zeroCopy(mklab.JGNN.core.Tensor)"}];updateSearchResults(); \ No newline at end of file diff --git a/docs/javadoc/mklab/JGNN/adhoc/Dataset.html b/docs/javadoc/mklab/JGNN/adhoc/Dataset.html new file mode 100644 index 00000000..cba47862 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/adhoc/Dataset.html @@ -0,0 +1,256 @@ + + + + +Dataset + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class Dataset

    +
    +
    java.lang.Object +
    mklab.JGNN.adhoc.Dataset
    +
    +
    +
    +
    Direct Known Subclasses:
    +
    Citeseer, Cora, Pubmed
    +
    +
    +
    public class Dataset +extends Object
    +
    This class provides the backbone with which to define datasets. + It offers common operations for downloading and importing data.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    See Also:
    +
    + +
    +
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        Dataset

        +
        public Dataset()
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        samples

        +
        public IdConverter samples()
        +
        Retrieves a converter that maps samples to long identifiers that match them to + rows of features(), labels(), and graph() matrices. + For example, a list of all node ids can be obtained via + dataset.samples().getIds().
        +
        +
        Returns:
        +
        A IdConverter.
        +
        +
        +
      • +
      • +
        +

        classes

        +
        public IdConverter classes()
        +
        Retrieves a converter that maps class names to label dimensions. + For example, the prediction for one sample can be converted to its name + per dataset.classes().get(prediction.argmax()).
        +
        +
        Returns:
        +
        An IdConverter.
        +
        +
        +
      • +
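The prediction-to-class-name pattern above relies on an argmax over the output row. As a plain-Java reminder of what argmax does (a conceptual sketch, not JGNN code):

```java
// Index of the maximum entry of a vector (argmax), as used to pick the predicted class.
class ArgMax {
    static int argmax(double[] values) {
        int best = 0;
        for (int i = 1; i < values.length; i++)
            if (values[i] > values[best])
                best = i;
        return best;
    }
}
```

The returned index is what would be passed to `classes().get(...)` to recover the class name.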
      • +
        +

        features

        +
        public Matrix features()
        +
        Retrieves the dataset's feature matrix.
        +
        +
        Returns:
        +
        A nodes x features Matrix.
        +
        +
        +
      • +
      • +
        +

        labels

        +
        public Matrix labels()
        +
        Retrieves the dataset's sample labels in one-hot encoding.
        +
        +
        Returns:
        +
        A nodes x classes Matrix.
        +
        +
        +
      • +
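As a reminder of what one-hot encoding means here: row i of the label matrix has a 1 at the class dimension of sample i and 0 elsewhere. A minimal plain-Java illustration (for intuition only, not JGNN's internal representation):

```java
// Illustration of one-hot label encoding: row i has 1.0 at the class index of sample i.
class OneHot {
    static double[][] encode(int[] classIndices, int numClasses) {
        double[][] labels = new double[classIndices.length][numClasses];
        for (int i = 0; i < classIndices.length; i++)
            labels[i][classIndices[i]] = 1.0;
        return labels;
    }
}
```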
      • +
        +

        graph

        +
        public Matrix graph()
        +
        Retrieves the dataset's graph.
        +
        +
        Returns:
        +
        A Matrix or null if the dataset is feature-only.
        +
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/adhoc/IdConverter.html b/docs/javadoc/mklab/JGNN/adhoc/IdConverter.html new file mode 100644 index 00000000..b820baad --- /dev/null +++ b/docs/javadoc/mklab/JGNN/adhoc/IdConverter.html @@ -0,0 +1,319 @@ + + + + +IdConverter + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class IdConverter

    +
    +
    java.lang.Object +
    mklab.JGNN.adhoc.IdConverter
    +
    +
    +
    +
    public class IdConverter +extends Object
    +
    Converts back-and-forth between objects and unique ids.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        IdConverter

        +
        public IdConverter()
        +
        Instantiates an empty converter to be filled with getOrCreateId(Object).
        +
        +
      • +
      • +
        +

        IdConverter

        +
        public IdConverter(List<?> objects)
        +
        Instantiates the converter on a list of objects, registering each + with getOrCreateId(Object).
        +
        +
        Parameters:
        +
        objects - A list of objects.
        +
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        setDimensionName

        +
        public IdConverter setDimensionName(String nodeDimensionName, + String featureDimensionName)
        +
        Sets dimension names for one-hot encodings.
        +
        +
        Parameters:
        +
        nodeDimensionName - The dimension name for traversing nodes (e.g. "node").
        +
        featureDimensionName - The dimension name for traversing features (e.g. "label").
        +
        Returns:
        +
        this instance
        +
        +
        +
      • +
      • +
        +

        getOrCreateId

        +
        public long getOrCreateId(Object object)
        +
        Retrieves an identifier for a given object, creating one if none exists.
        +
        +
        Parameters:
        +
        object - The object for which to obtain an identifier.
        +
        Returns:
        +
        A long identifier.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
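The create-on-first-sight behavior described above can be sketched with plain Java collections; this is a hypothetical stand-in shown for illustration, not JGNN's actual implementation:

```java
import java.util.ArrayList;
import java.util.HashMap;

// Hypothetical minimal analog of the object <-> id mapping described above.
class IdConverterSketch {
    private final HashMap<Object, Long> ids = new HashMap<>();
    private final ArrayList<Object> objects = new ArrayList<>();

    // Returns the existing id, or registers the object under the next free id.
    long getOrCreateId(Object object) {
        Long id = ids.get(object);
        if (id == null) {
            id = (long) objects.size();
            ids.put(object, id);
            objects.add(object);
        }
        return id;
    }

    Object get(long id) {
        return objects.get((int) id);
    }

    boolean contains(Object object) {
        return ids.containsKey(object);
    }

    long size() {
        return objects.size();
    }
}
```

Identifiers are consecutive, so they can index matrix rows directly, which is why the converter pairs naturally with the dataset's feature and label matrices.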
      • +
        +

        get

        +
        public Object get(long id)
        +
        Retrieves the object corresponding to a given identifier.
        +
        +
        Parameters:
        +
        id - The identifier of the object.
        +
        Returns:
        +
        The object.
        +
        +
        +
      • +
      • +
        +

        getId

        +
        public long getId(Object object)
        +
        Retrieves the identifier of a previously registered object.
        +
        +
        Parameters:
        +
        object - An object with a registered identifier.
        +
        Returns:
        +
        A long identifier.
        +
        Throws:
        +
        Exception - if the identifier does not exist.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        size

        +
        public long size()
        +
        The number of registered identifiers.
        +
        +
        Returns:
        +
        A long value.
        +
        +
        +
      • +
      • +
        +

        contains

        +
        public boolean contains(Object object)
        +
        Checks whether the object has been registered with getOrCreateId(Object).
        +
        +
        Parameters:
        +
        object - An object to check if it exists.
        +
        Returns:
        +
        A boolean value.
        +
        +
        +
      • +
      • +
        +

        getSlice

        +
        public Slice getSlice()
        +
        Returns a slice of all registered identifiers. + The slice is persistent across multiple calls to this method, but is + instantiated anew after getOrCreateId(Object) registers a new + object (but not if it retrieves an existing object).
        +
        +
        Returns:
        +
        A Slice.
        +
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/adhoc/ModelBuilder.html b/docs/javadoc/mklab/JGNN/adhoc/ModelBuilder.html new file mode 100644 index 00000000..b3b5f602 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/adhoc/ModelBuilder.html @@ -0,0 +1,716 @@ + + + + +ModelBuilder + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class ModelBuilder

    +
    +
    java.lang.Object +
    mklab.JGNN.adhoc.ModelBuilder
    +
    +
    +
    +
    Direct Known Subclasses:
    +
    FastBuilder, LayeredBuilder, Neuralang
    +
    +
    +
    public class ModelBuilder +extends Object
    +
    This class and subclasses can be used to create Model instances + by automatically creating and managing NNOperation instances based on + textual descriptions.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    See Also:
    +
    + +
    +
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        ModelBuilder

        +
        public ModelBuilder()
        +
        +
      • +
      • +
        +

        ModelBuilder

        +
        public ModelBuilder(Model model)
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        getModel

        +
        public Model getModel()
        +
        Retrieves the model currently built by the builder. + This can change as additional building methods are called.
        +
        +
        Returns:
        +
        A Model instance.
        +
        +
        +
      • +
      • +
        +

        save

        +
        public ModelBuilder save(Path path)
        +
        +
      • +
      • +
        +

        load

        +
        public static ModelBuilder load(Path path)
        +
        +
      • +
      • +
        +

        hasComponent

        +
        public boolean hasComponent(String name)
        +
        Checks whether the builder has added to its managed model a component of + the given name.
        +
        +
        Parameters:
        +
        name - The component name to check for.
        +
        Returns:
        +
        a boolean value
        +
        +
        +
      • +
      • +
        +

        var

        +
        public ModelBuilder var(String name)
        +
        Declares a component with the given name to be used as an input + of the managed model.
        +
        +
        Parameters:
        +
        name - The name of the component.
        +
        Returns:
        +
        The builder's instance.
        +
        +
        +
      • +
      • +
        +

        out

        +
        public ModelBuilder out(String name)
        +
        Declares the component with the given name an output of the + managed model. The component should have already been assigned a value. + To output complex expressions use operation(String) + to define them first.
        +
        +
        Parameters:
        +
        name - A component name.
        +
        Returns:
        +
        The builder's instance.
        +
        +
        +
      • +
      • +
        +

        param

        +
        public ModelBuilder param(String name, + double regularization, + Tensor value)
        +
        Declares a learnable Parameter component with the given name, + L2 regularization, and initial value.
        +
        +
        Parameters:
        +
        name - The name to be assigned to the new component.
        +
        regularization - The regularization value. Zero corresponds to no regularization. + Typically, this is non-negative.
        +
        value - The initial value to be assigned to the parameter. Exact values + can be overridden by neural initialization strategies, but an initial value + should be declared nonetheless to determine the parameter type and allocate + any necessary memory.
        +
        Returns:
        +
        The builder's instance.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
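For intuition on the regularization argument: L2 regularization with weight lambda adds lambda times the parameter value to its gradient during training. A conceptual one-method sketch (not JGNN's optimizer code):

```java
// Conceptual L2 regularization: the gradient of (lambda/2)*||w||^2 adds lambda*w
// to the loss gradient of each parameter entry.
class L2Reg {
    static double[] regularizedGradient(double[] grad, double[] w, double lambda) {
        double[] out = new double[grad.length];
        for (int i = 0; i < grad.length; i++)
            out[i] = grad[i] + lambda * w[i];
        return out;
    }
}
```

Setting lambda to zero recovers the unregularized gradient, matching the "zero corresponds to no regularization" convention above.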
      • +
        +

        config

        +
        public ModelBuilder config(String name, + double value)
        +
        Declares a configuration hyperparameter, which can be used to declare + matrix and vector parameters within operation(String) expressions. + For in-expression use of hyperparameters, declare them with constant(String, double).
        +
        +
        Parameters:
        +
        name - The name of the configuration hyperparameter.
        +
        value - The value to be assigned to the hyperparameter. + Typically, provide a long number.
        +
        Returns:
        +
        The builder's instance.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        config

        +
        public ModelBuilder config(String name, + String value)
        +
        +
      • +
      • +
        +

        getConfigOrDefault

        +
        public int getConfigOrDefault(String name, + int defaultValue)
        +
        +
      • +
      • +
        +

        getConfigOrDefault

        +
        public double getConfigOrDefault(String name, + double defaultValue)
        +
        +
      • +
      • +
        +

        param

        +
        public ModelBuilder param(String name, + Tensor value)
        +
        Declares a learnable mklab.JGNN.nn.inputs.Parameter component with the given name, + zero regularization, and initial value.
        +
        +
        Parameters:
        +
        name - The name to be assigned to the new component.
        +
        value - The initial value to be assigned to the parameter. Exact values + can be overridden by neural initialization strategies, but an initial value + should be declared nonetheless to determine the parameter type and allocate + any necessary memory.
        +
        Returns:
        +
        The builder's instance.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        constant

        +
        public ModelBuilder constant(String name, + double value)
        +
        Declares a non-learnable constant component with the given name. + This can be used in computations. To edit the constant's values, + use get(String) to retrieve the respective component.
        +
        +
        Parameters:
        +
        name - The name of the constant component.
        +
        value - A double value to assign to the constant.
        +
        Returns:
        +
        The builder's instance.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        constant

        +
        public ModelBuilder constant(String name, + Tensor value)
        +
        Declares a non-learnable constant component with the given name. + This can be used in computations. To edit the constant's values, + use get(String) to retrieve the respective component.
        +
        +
        Parameters:
        +
        name - The name of the constant component.
        +
        value - A Tensor value to assign to the constant.
        +
        Returns:
        +
        The builder's instance.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        get

        +
        public NNOperation get(String name)
        +
        Retrieves the NNOperation registered with the provided + name, for example to inspect its value.
        +
        +
        Parameters:
        +
        name - The name of the component.
        +
        Returns:
        +
        A NNOperation.
        +
        +
        +
      • +
      • +
        +

        runModel

        +
        public ModelBuilder runModel(Tensor... inputs)
        +
        This is a wrapper for getModel().predict(inputs) + without returning output values (use get(String) + afterwards to view outputs).
        +
        +
        Parameters:
        +
        inputs - A variable number of Tensor inputs.
        +
        Returns:
        +
        The builder's instance.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        runModel

        +
        public ModelBuilder runModel(ArrayList<Tensor> inputs)
        +
        This is a wrapper for getModel().predict(inputs) + without returning output values (use get(String) + afterwards to view outputs).
        +
        +
        Parameters:
        +
        inputs - A list of Tensor inputs.
        +
        Returns:
        +
        The builder's instance.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        function

        +
        public ModelBuilder function(String name, + String value)
        +
        +
      • +
      • +
        +

        operation

        +
        public ModelBuilder operation(String desc)
        +
        Parses one or more operations split by new line characters or ; + to add to the execution graph. All operations should assign a + value to a new component name and comprise operators and functions. + For a detailed description of the domain-specific language this + method accepts, please refer to the library's + + online documentation.
        +
        +
        Parameters:
        +
        desc - The operation to parse.
        +
        Returns:
        +
        The builder's instance.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
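At a high level, a multi-statement description must first be split into individual operations before each is parsed. A rough plain-Java sketch of that first step (an assumption about the internals, shown for intuition only — the actual parser is more involved):

```java
import java.util.ArrayList;
import java.util.List;

// Rough sketch: split a description into one statement per line or semicolon,
// discarding empty fragments, as a first parsing step.
class OperationSplitter {
    static List<String> split(String desc) {
        List<String> ops = new ArrayList<>();
        for (String part : desc.split("[;\n]")) {
            String trimmed = part.trim();
            if (!trimmed.isEmpty())
                ops.add(trimmed);
        }
        return ops;
    }
}
```

Each resulting statement is expected to assign a value to a new component name, per the contract stated above.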
      • +
        +

        autosize

        +
        public ModelBuilder autosize(Tensor... inputs)
        +
        +
      • +
      • +
        +

        autosize

        +
        public ModelBuilder autosize(List<Tensor> inputs)
        +
        +
      • +
      • +
        +

        createForwardValidity

        +
        public ModelBuilder createForwardValidity(List<Tensor> inputs)
        +
        Asserts that all components parsed into a call graph with + operation(String) are eventually used by at least one out(String) + component.
        +
        +
        Returns:
        +
        The builder's instance.
        +
        Throws:
        +
        RuntimeException - if not all execution graph branches lead to declared outputs.
        +
        +
        +
      • +
      • +
        +

        assertBackwardValidity

        +
        public ModelBuilder assertBackwardValidity()
        +
        Asserts that all components parsed into a call graph with + operation(String) are eventually used by at least one out(String) + component.
        +
        +
        Returns:
        +
        The builder's instance.
        +
        Throws:
        +
        RuntimeException - if not all execution graph branches lead to declared outputs.
        +
        +
        +
      • +
      • +
        +

        describe

        +
        public String describe()
        +
        Creates a description of the built model's internal execution graph.
        +
        +
        Returns:
        +
        A String.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        getExecutionGraphDot

        +
        public String getExecutionGraphDot()
        +
        Exports the built model's execution graph into a .dot format + representation.
        +
        +
        Returns:
        +
        A String to be pasted into GraphViz for visualization.
        +
        +
        +
      • +
      • +
        +

        print

        +
        public ModelBuilder print()
        +
        +
      • +
      • +
        +

        printState

        +
        public ModelBuilder printState()
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/adhoc/class-use/Dataset.html b/docs/javadoc/mklab/JGNN/adhoc/class-use/Dataset.html new file mode 100644 index 00000000..7f83ffd8 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/adhoc/class-use/Dataset.html @@ -0,0 +1,96 @@ + + + + +Uses of Class mklab.JGNN.adhoc.Dataset + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.adhoc.Dataset

    +
    +
    Packages that use Dataset
    +
    +
    Package
    +
    Description
    + +
     
    +
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/adhoc/class-use/IdConverter.html b/docs/javadoc/mklab/JGNN/adhoc/class-use/IdConverter.html new file mode 100644 index 00000000..2fa151fd --- /dev/null +++ b/docs/javadoc/mklab/JGNN/adhoc/class-use/IdConverter.html @@ -0,0 +1,98 @@ + + + + +Uses of Class mklab.JGNN.adhoc.IdConverter + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.adhoc.IdConverter

    +
    +
    Packages that use IdConverter
    +
    +
    Package
    +
    Description
    + +
     
    +
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/adhoc/class-use/ModelBuilder.html b/docs/javadoc/mklab/JGNN/adhoc/class-use/ModelBuilder.html new file mode 100644 index 00000000..2bb86080 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/adhoc/class-use/ModelBuilder.html @@ -0,0 +1,231 @@ + + + + +Uses of Class mklab.JGNN.adhoc.ModelBuilder + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.adhoc.ModelBuilder

    +
    +
    Packages that use ModelBuilder
    +
    +
    Package
    +
    Description
    + +
     
    + +
     
    + +
     
    +
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/adhoc/datasets/Citeseer.html b/docs/javadoc/mklab/JGNN/adhoc/datasets/Citeseer.html new file mode 100644 index 00000000..e624d6d8 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/adhoc/datasets/Citeseer.html @@ -0,0 +1,139 @@ + + + + +Citeseer + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class Citeseer

    +
    +
    java.lang.Object +
    mklab.JGNN.adhoc.Dataset +
    mklab.JGNN.adhoc.datasets.Citeseer
    +
    +
    +
    +
    +
    public class Citeseer +extends Dataset
    +
    Downloads and constructs the Citeseer node classification Dataset.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        Citeseer

        +
        public Citeseer()
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/adhoc/datasets/Cora.html b/docs/javadoc/mklab/JGNN/adhoc/datasets/Cora.html new file mode 100644 index 00000000..7b6e549c --- /dev/null +++ b/docs/javadoc/mklab/JGNN/adhoc/datasets/Cora.html @@ -0,0 +1,139 @@ + + + + +Cora + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class Cora

    +
    +
    java.lang.Object +
    mklab.JGNN.adhoc.Dataset +
    mklab.JGNN.adhoc.datasets.Cora
    +
    +
    +
    +
    +
    public class Cora +extends Dataset
    +
    Downloads and constructs the Cora node classification Dataset.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        Cora

        +
        public Cora()
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/adhoc/datasets/Pubmed.html b/docs/javadoc/mklab/JGNN/adhoc/datasets/Pubmed.html new file mode 100644 index 00000000..999678cc --- /dev/null +++ b/docs/javadoc/mklab/JGNN/adhoc/datasets/Pubmed.html @@ -0,0 +1,139 @@ + + + + +Pubmed + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class Pubmed

    +
    +
    java.lang.Object +
    mklab.JGNN.adhoc.Dataset +
    mklab.JGNN.adhoc.datasets.Pubmed
    +
    +
    +
    +
    +
    public class Pubmed +extends Dataset
    +
    Downloads and constructs the Pubmed node classification Dataset.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        Pubmed

        +
        public Pubmed()
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
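    The three dataset classes (Cora, Citeseer, Pubmed) follow the same pattern: a no-argument constructor downloads and constructs the data. A hedged sketch of loading Cora is shown below; the accessor names (graph, features, labels) are taken from the Dataset base class and network access is assumed for the first download:

    ```java
    import mklab.JGNN.adhoc.Dataset;
    import mklab.JGNN.adhoc.datasets.Cora;
    import mklab.JGNN.core.Matrix;

    // Sketch only: downloads Cora on first use and inspects its matrices.
    public class LoadCora {
        public static void main(String[] args) {
            Dataset dataset = new Cora();          // downloads if not cached locally
            Matrix adjacency = dataset.graph();    // sparse adjacency matrix
            Matrix features = dataset.features();  // node feature matrix
            Matrix labels = dataset.labels();      // one-hot label matrix
            System.out.println("nodes: " + adjacency.getRows());
        }
    }
    ```
    
    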
    + + diff --git a/docs/javadoc/mklab/JGNN/adhoc/datasets/class-use/Citeseer.html b/docs/javadoc/mklab/JGNN/adhoc/datasets/class-use/Citeseer.html new file mode 100644 index 00000000..6f143612 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/adhoc/datasets/class-use/Citeseer.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.adhoc.datasets.Citeseer + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.adhoc.datasets.Citeseer

    +
    +No usage of mklab.JGNN.adhoc.datasets.Citeseer
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/adhoc/datasets/class-use/Cora.html b/docs/javadoc/mklab/JGNN/adhoc/datasets/class-use/Cora.html new file mode 100644 index 00000000..a4b7474f --- /dev/null +++ b/docs/javadoc/mklab/JGNN/adhoc/datasets/class-use/Cora.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.adhoc.datasets.Cora + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.adhoc.datasets.Cora

    +
    +No usage of mklab.JGNN.adhoc.datasets.Cora
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/adhoc/datasets/class-use/Pubmed.html b/docs/javadoc/mklab/JGNN/adhoc/datasets/class-use/Pubmed.html new file mode 100644 index 00000000..59697ecd --- /dev/null +++ b/docs/javadoc/mklab/JGNN/adhoc/datasets/class-use/Pubmed.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.adhoc.datasets.Pubmed + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.adhoc.datasets.Pubmed

    +
    +No usage of mklab.JGNN.adhoc.datasets.Pubmed
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/adhoc/datasets/package-summary.html b/docs/javadoc/mklab/JGNN/adhoc/datasets/package-summary.html new file mode 100644 index 00000000..0e41f8ae --- /dev/null +++ b/docs/javadoc/mklab/JGNN/adhoc/datasets/package-summary.html @@ -0,0 +1,107 @@ + + + + +mklab.JGNN.adhoc.datasets + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Package mklab.JGNN.adhoc.datasets

    +
    +
    +
    package mklab.JGNN.adhoc.datasets
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/adhoc/datasets/package-tree.html b/docs/javadoc/mklab/JGNN/adhoc/datasets/package-tree.html new file mode 100644 index 00000000..ebc49880 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/adhoc/datasets/package-tree.html @@ -0,0 +1,79 @@ + + + + +mklab.JGNN.adhoc.datasets Class Hierarchy + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Hierarchy For Package mklab.JGNN.adhoc.datasets

    +Package Hierarchies: + +
    +
    +

    Class Hierarchy

    +
      +
    • java.lang.Object +
        +
      • mklab.JGNN.adhoc.Dataset +
          +
        • mklab.JGNN.adhoc.datasets.Citeseer
        • +
        • mklab.JGNN.adhoc.datasets.Cora
        • +
        • mklab.JGNN.adhoc.datasets.Pubmed
        • +
        +
      • +
      +
    • +
    +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/adhoc/datasets/package-use.html b/docs/javadoc/mklab/JGNN/adhoc/datasets/package-use.html new file mode 100644 index 00000000..0a61e300 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/adhoc/datasets/package-use.html @@ -0,0 +1,59 @@ + + + + +Uses of Package mklab.JGNN.adhoc.datasets + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Package
    mklab.JGNN.adhoc.datasets

    +
    +No usage of mklab.JGNN.adhoc.datasets
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/adhoc/package-summary.html b/docs/javadoc/mklab/JGNN/adhoc/package-summary.html new file mode 100644 index 00000000..8e41e246 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/adhoc/package-summary.html @@ -0,0 +1,109 @@ + + + + +mklab.JGNN.adhoc + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Package mklab.JGNN.adhoc

    +
    +
    +
    package mklab.JGNN.adhoc
    +
    +
      +
    • + +
    • +
    • +
      +
      Classes
      +
      +
      Class
      +
      Description
      + +
      +
      This class provides the backbone with which to define datasets.
      +
      + +
      +
      Converts back-and-forth between objects and unique ids.
      +
      + +
      +
      This class and subclasses can be used to create Model instances + by automatically creating and managing NNOperation instances based on + textual descriptions.
      +
      +
      +
      +
    • +
    +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/adhoc/package-tree.html b/docs/javadoc/mklab/JGNN/adhoc/package-tree.html new file mode 100644 index 00000000..294a216e --- /dev/null +++ b/docs/javadoc/mklab/JGNN/adhoc/package-tree.html @@ -0,0 +1,75 @@ + + + + +mklab.JGNN.adhoc Class Hierarchy + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Hierarchy For Package mklab.JGNN.adhoc

    +Package Hierarchies: + +
    +
    +

    Class Hierarchy

    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/adhoc/package-use.html b/docs/javadoc/mklab/JGNN/adhoc/package-use.html new file mode 100644 index 00000000..3c1498b4 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/adhoc/package-use.html @@ -0,0 +1,138 @@ + + + + +Uses of Package mklab.JGNN.adhoc + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Package
    mklab.JGNN.adhoc

    +
    +
    Packages that use mklab.JGNN.adhoc
    + +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/adhoc/parsers/FastBuilder.html b/docs/javadoc/mklab/JGNN/adhoc/parsers/FastBuilder.html new file mode 100644 index 00000000..b3c7f47d --- /dev/null +++ b/docs/javadoc/mklab/JGNN/adhoc/parsers/FastBuilder.html @@ -0,0 +1,623 @@ + + + + +FastBuilder + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class FastBuilder

    +
    +
    java.lang.Object +
    mklab.JGNN.adhoc.ModelBuilder +
    mklab.JGNN.adhoc.parsers.FastBuilder
    +
    +
    +
    +
    +
    public class FastBuilder +extends ModelBuilder
    +
    Extends the capabilities of LayeredBuilder for use in node classification. It accepts the adjacency graph in the constructor, to be used with the symbol A in operations or layer definitions, together with node features.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    See Also:
    +
    + +
    +
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        FastBuilder

        +
        public FastBuilder()
        +
        Deprecated. +
        This constructor should only be used when loading a saved model.
        +
        +
        +
      • +
      • +
        +

        FastBuilder

        +
        public FastBuilder(Matrix adjacency, + Matrix features)
        +
        Creates a graph neural network builder from a normalized adjacency matrix and a node feature matrix.
        +
        +
        Parameters:
        +
        adjacency - The pre-normalized adjacency matrix.
        +
        features - The node feature matrix.
        +
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        rememberAs

        +
        public FastBuilder rememberAs(String layerId)
        +
        Remembers the last layer's output under a given identifier so that {layerId} within future layer(String) definitions refers to the current layer.
        +
        +
        Parameters:
        +
        layerId - An identifier to remember the last layer's output as.
        +
        Returns:
        +
        The model builder.
        +
        +
        +
      • +
      • +
        +

        layer

        +
        public FastBuilder layer(String expression)
        +
        Applies an operation(String) and increases the layer identifier count.
        +
        +
        Parameters:
        +
        expression - A parsable expression.
        +
        Returns:
        +
        this builder.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        classify

        +
        public FastBuilder classify()
        +
        Adds a classification layer that gathers the outputs of a list of input nodes and applies a softmax on them.
        +
        +
        Returns:
        +
        this builder.
        +
        +
        +
      • +
      • +
        +

        layerRepeat

        +
        public FastBuilder layerRepeat(String expression, + int times)
        +
        Repeats a layer(String) definition a number of times. + Ideal for building deep architectures.
        +
        +
        Parameters:
        +
        expression - The expression to repeat for each layer.
        +
        times - The number of times to repeat the expression.
        +
        Returns:
        +
        this builder.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        function

        +
        public FastBuilder function(String name, + String value)
        +
        +
        Overrides:
        +
        function in class ModelBuilder
        +
        +
        +
      • +
      • +
        +

        config

        +
        public FastBuilder config(String name, + String value)
        +
        +
        Overrides:
        +
        config in class ModelBuilder
        +
        +
        +
      • +
      • +
        +

        config

        +
        public FastBuilder config(String name, + double value)
        +
        Description copied from class: ModelBuilder
        +
        Declares a configuration hyperparameter, which can be used to declare matrix and vector parameters during ModelBuilder.operation(String) expressions. For in-expression use of hyperparameters, declare them with ModelBuilder.constant(String, double).
        +
        +
        Overrides:
        +
        config in class ModelBuilder
        +
        Parameters:
        +
        name - The name of the configuration hyperparameter.
        +
        value - The value to be assigned to the hyperparameter. + Typically, provide a long number.
        +
        Returns:
        +
        The builder's instance.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        param

        +
        public FastBuilder param(String name, + Tensor value)
        +
        Description copied from class: ModelBuilder
        +
        Declares a learnable mklab.JGNN.nn.inputs.Parameter component with the given name, zero regularization, and initial value.
        +
        +
        Overrides:
        +
        param in class ModelBuilder
        +
        Parameters:
        +
        name - The name to be assigned to the new component.
        +
        value - The initial value to be assigned to the parameter. Exact values + can be overridden by neural initialization strategies, but an initial value + should be declared nonetheless to determine the parameter type and allocate + any necessary memory.
        +
        Returns:
        +
        The builder's instance.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        constant

        +
        public FastBuilder constant(String name, + double value)
        +
        Description copied from class: ModelBuilder
        +
        Declares a non-learnable constant component with the given name. + This can be used in computations. To edit the constant's values, + use ModelBuilder.get(String) to retrieve the respective component.
        +
        +
        Overrides:
        +
        constant in class ModelBuilder
        +
        Parameters:
        +
        name - The name of the constant component.
        +
        value - A double value to assign to the constant.
        +
        Returns:
        +
        The builder's instance.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        constant

        +
        public FastBuilder constant(String name, + Tensor value)
        +
        Description copied from class: ModelBuilder
        +
        Declares a non-learnable constant component with the given name. + This can be used in computations. To edit the constant's values, + use ModelBuilder.get(String) to retrieve the respective component.
        +
        +
        Overrides:
        +
        constant in class ModelBuilder
        +
        Parameters:
        +
        name - The name of the constant component.
        +
        value - A Tensor value to assign to the constant.
        +
        Returns:
        +
        The builder's instance.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        param

        +
        public FastBuilder param(String name, + double regularization, + Tensor value)
        +
        Description copied from class: ModelBuilder
        +
        Declares a learnable Parameter component with the given name, L2 regularization, and initial value.
        +
        +
        Overrides:
        +
        param in class ModelBuilder
        +
        Parameters:
        +
        name - The name to be assigned to the new component.
        +
        regularization - The regularization value. Zero corresponds to no regularization. + Typically, this is non-negative.
        +
        value - The initial value to be assigned to the parameter. Exact values + can be overridden by neural initialization strategies, but an initial value + should be declared nonetheless to determine the parameter type and allocate + any necessary memory.
        +
        Returns:
        +
        The builder's instance.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        operation

        +
        public FastBuilder operation(String desc)
        +
        Description copied from class: ModelBuilder
        +
        Parses one or more operations split by new line characters or ; + to add to the execution graph. All operations should assign a + value to a new component name and comprise operators and functions. + For a detailed description of the domain-specific language this + method accepts, please refer to the library's + + online documentation.
        +
        +
        Overrides:
        +
        operation in class ModelBuilder
        +
        Parameters:
        +
        desc - The operation to parse.
        +
        Returns:
        +
        The builder's instance.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        futureConfigs

        +
        public FastBuilder futureConfigs(String config, + Function<Integer,Double> func, + int depth)
        +
        Defines a number of config(String, double) symbols involving the {l} notation so that, for example, they can be used during layerRepeat(String, int).
        +
        +
        Parameters:
        +
        config - The configuration symbols (these should involve {l}).
        +
        func - A lambda Java function to calculate the configuration's value. This takes + as input an integer (starting from 0 for the current layer) and adds one for each + subsequently declared symbol.
        +
        depth - The number of future layers expected to use the symbols.
        +
        Returns:
        +
        this builder.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        futureConstants

        +
        public FastBuilder futureConstants(String constantName, + Function<Integer,Double> func, + int depth)
        +
        Defines a number of constant(String, double) symbols involving the {l} notation so that, for example, they can be used during layerRepeat(String, int).
        +
        +
        Parameters:
        +
        constantName - The constant symbols (these should involve {l}).
        +
        func - A lambda Java function to calculate the constant's value. This takes + as input an integer (starting from 0 for the current layer) and adds one for each + subsequently declared symbol.
        +
        depth - The number of future layers expected to use the constant.
        +
        Returns:
        +
        this builder.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        concat

        +
        public FastBuilder concat(int depth)
        +
        Concatenates horizontally the output of a number of given layers, + starting from the last one and going backwards. (For concatenation + of specific layers just use concat within normal operations.)
        +
        +
        Parameters:
        +
        depth - The number of given layers to concatenate.
        +
        Returns:
        +
        this builder.
        +
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
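    Putting the FastBuilder methods above together, a node-classification architecture is usually declared as a chain of config, layer, and classify calls. The sketch below assumes the JGNN jar is on the classpath; the Neuralang expressions and the adjacency-normalization calls are illustrative, not canonical:

    ```java
    import mklab.JGNN.adhoc.Dataset;
    import mklab.JGNN.adhoc.ModelBuilder;
    import mklab.JGNN.adhoc.datasets.Cora;
    import mklab.JGNN.adhoc.parsers.FastBuilder;

    // Sketch only: a two-layer message-passing architecture over the
    // adjacency symbol A that FastBuilder provides.
    public class GcnSketch {
        public static void main(String[] args) {
            Dataset dataset = new Cora();
            // FastBuilder expects a pre-normalized adjacency matrix.
            dataset.graph().setMainDiagonal(1).setToSymmetricNormalization();
            ModelBuilder builder = new FastBuilder(dataset.graph(), dataset.features())
                    .config("feats", dataset.features().getCols())
                    .config("hidden", 64)
                    .config("classes", dataset.labels().getCols())
                    .layer("h{l+1}=relu(A@(h{l}@matrix(feats, hidden)))")
                    .layer("h{l+1}=A@(h{l}@matrix(hidden, classes))")
                    .classify();  // softmax over the requested node indices
            System.out.println(builder.describe());
        }
    }
    ```
    
    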
    + + diff --git a/docs/javadoc/mklab/JGNN/adhoc/parsers/LayeredBuilder.html b/docs/javadoc/mklab/JGNN/adhoc/parsers/LayeredBuilder.html new file mode 100644 index 00000000..5675607e --- /dev/null +++ b/docs/javadoc/mklab/JGNN/adhoc/parsers/LayeredBuilder.html @@ -0,0 +1,631 @@ + + + + +LayeredBuilder + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class LayeredBuilder

    +
    +
    java.lang.Object +
    mklab.JGNN.adhoc.ModelBuilder +
    mklab.JGNN.adhoc.parsers.LayeredBuilder
    +
    +
    +
    +
    +
    public class LayeredBuilder +extends ModelBuilder
    +
    Extends the capabilities of the ModelBuilder with the ability to define multilayer (e.g., deep) neural architectures. The symbols {l} and {l+1} are replaced in all expressions with appropriate layer identifiers (these increase by one each time a new layer(String) is defined).
    +
    +
    See Also:
    +
    + +
    +
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        LayeredBuilder

        +
        public LayeredBuilder()
        +
        Instantiates a layered builder with input name h0. This can be + used by future expressions involving h{l}. You can add more + architecture inputs normally with var(String).
        +
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        LayeredBuilder

        +
        public LayeredBuilder(String inputName)
        +
        Instantiates a layered builder with the given symbol as an input name. + If you plan to immediately use a layer(String) expression + that involves X{l}, where X is some symbol, + set X0 as the architecture's input. You can add more + architecture inputs normally with var(String).
        +
        +
        Parameters:
        +
        inputName - The symbol to use as the built architecture's input.
        +
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        var

        +
        public LayeredBuilder var(String inputName)
        +
        Description copied from class: ModelBuilder
        +
        Declares a component with the given name to be used as an input + of the managed model.
        +
        +
        Overrides:
        +
        var in class ModelBuilder
        +
        Parameters:
        +
        inputName - The name of the component.
        +
        Returns:
        +
        The builder's instance.
        +
        +
        +
      • +
      • +
        +

        rememberAs

        +
        public LayeredBuilder rememberAs(String layerId)
        +
        Sets the current layer identifier to a specific symbol layerId + so that future usage of {layerId} is automatically replaced with + the identifier.
        +
        +
        Parameters:
        +
        layerId - The symbol to set to the current layer identifier.
        +
        Returns:
        +
        this layer builder.
        +
        +
        +
      • +
      • +
        +

        layer

        +
        public LayeredBuilder layer(String expression)
        +
        Applies an operation(String) and increases the layer identifier count.
        +
        +
        Parameters:
        +
        expression - A parsable expression.
        +
        Returns:
        +
        this layer builder.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        layerRepeat

        +
        public LayeredBuilder layerRepeat(String expression, + int times)
        +
        Repeats a layer(String) definition a number of times. + Ideal for building deep architectures.
        +
        +
        Parameters:
        +
        expression - The expression to repeat for each layer.
        +
        times - The number of times to repeat the expression.
        +
        Returns:
        +
        this layer builder.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        config

        +
        public LayeredBuilder config(String name, + double value)
        +
        Description copied from class: ModelBuilder
        +
        Declares a configuration hyperparameter, which can be used to declare matrix and vector parameters during ModelBuilder.operation(String) expressions. For in-expression use of hyperparameters, declare them with ModelBuilder.constant(String, double).
        +
        +
        Overrides:
        +
        config in class ModelBuilder
        +
        Parameters:
        +
        name - The name of the configuration hyperparameter.
        +
        value - The value to be assigned to the hyperparameter. + Typically, provide a long number.
        +
        Returns:
        +
        The builder's instance.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        param

        +
        public LayeredBuilder param(String name, + Tensor value)
        +
        Description copied from class: ModelBuilder
        +
        Declares a learnable mklab.JGNN.nn.inputs.Parameter component with the given name, zero regularization, and initial value.
        +
        +
        Overrides:
        +
        param in class ModelBuilder
        +
        Parameters:
        +
        name - The name to be assigned to the new component.
        +
        value - The initial value to be assigned to the parameter. Exact values + can be overridden by neural initialization strategies, but an initial value + should be declared nonetheless to determine the parameter type and allocate + any necessary memory.
        +
        Returns:
        +
        The builder's instance.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        constant

        +
        public LayeredBuilder constant(String name, + double value)
        +
        Description copied from class: ModelBuilder
        +
        Declares a non-learnable constant component with the given name. + This can be used in computations. To edit the constant's values, + use ModelBuilder.get(String) to retrieve the respective component.
        +
        +
        Overrides:
        +
        constant in class ModelBuilder
        +
        Parameters:
        +
        name - The name of the constant component.
        +
        value - A double value to assign to the constant.
        +
        Returns:
        +
        The builder's instance.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        constant

        +
        public LayeredBuilder constant(String name, + Tensor value)
        +
        Description copied from class: ModelBuilder
        +
        Declares a non-learnable constant component with the given name. + This can be used in computations. To edit the constant's values, + use ModelBuilder.get(String) to retrieve the respective component.
        +
        +
        Overrides:
        +
        constant in class ModelBuilder
        +
        Parameters:
        +
        name - The name of the constant component.
        +
        value - A Tensor value to assign to the constant.
        +
        Returns:
        +
        The builder's instance.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        param

        +
        public LayeredBuilder param(String name, + double regularization, + Tensor value)
        +
        Description copied from class: ModelBuilder
        +
        Declares a learnable Parameter component with the given name, L2 regularization, and initial value.
        +
        +
        Overrides:
        +
        param in class ModelBuilder
        +
        Parameters:
        +
        name - The name to be assigned to the new component.
        +
        regularization - The regularization value. Zero corresponds to no regularization. + Typically, this is non-negative.
        +
        value - The initial value to be assigned to the parameter. Exact values + can be overridden by neural initialization strategies, but an initial value + should be declared nonetheless to determine the parameter type and allocate + any necessary memory.
        +
        Returns:
        +
        The builder's instance.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        operation

        +
        public LayeredBuilder operation(String desc)
        +
        Description copied from class: ModelBuilder
        +
        Parses one or more operations split by new line characters or ; + to add to the execution graph. All operations should assign a + value to a new component name and comprise operators and functions. + For a detailed description of the domain-specific language this + method accepts, please refer to the library's + + online documentation.
        +
        +
        Overrides:
        +
        operation in class ModelBuilder
        +
        Parameters:
        +
        desc - The operation to parse.
        +
        Returns:
        +
        The builder's instance.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        out

        +
        public LayeredBuilder out(String expression)
        +
        Description copied from class: ModelBuilder
        +
        Declares the component with the given name an output of the + managed model. The component should have already been assigned a value. + To output complex expressions use ModelBuilder.operation(String) + to define them first.
        +
        +
        Overrides:
        +
        out in class ModelBuilder
        +
        Parameters:
        +
        expression - A component name.
        +
        Returns:
        +
        The builder's instance.
        +
        +
        +
      • +
      • +
        +

        concat

        +
        public LayeredBuilder concat(int depth)
        +
        Concatenates horizontally the output of a number of given layers, + starting from the last one and going backwards. (For concatenation + of specific layers just use concat within normal operations.)
        +
        +
        Parameters:
        +
        depth - The number of given layers to concatenate.
        +
        Returns:
        +
        this layer builder.
        +
        +
        +
      • +
      • +
        +

        futureConfigs

        +
        public LayeredBuilder futureConfigs(String config, + Function<Integer,Double> func, + int depth)
        +
        Defines a number of config(String, double) symbols involving a {l} + notation, for example so that they can be used during layerRepeat(String, int).
        +
        +
        Parameters:
        +
        config - The configuration symbols (these should involve {l}).
        +
        func - A lambda Java function to calculate the configuration's value. This takes + as input an integer that starts from 0 for the current layer and increases by one for each + subsequently declared symbol.
        +
        depth - The number of future layers expected to use the symbols.
        +
        Returns:
        +
        this layer builder.
        +
        See Also:
        +
        + +
        +
        +
        +
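The {l} notation can be pictured as template expansion: one concrete symbol per future layer, each valued by the supplied function. A hypothetical plain-Java sketch of that expansion (names are illustrative, not JGNN internals):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Function;

class FutureConfigSketch {
    // Expands a "{l}" config template into one concrete symbol per future
    // layer, valuing each with the provided function of the layer offset.
    static Map<String, Double> expand(String template,
                                      Function<Integer, Double> func,
                                      int depth) {
        Map<String, Double> configs = new LinkedHashMap<>();
        for (int l = 0; l < depth; l++)
            configs.put(template.replace("{l}", String.valueOf(l)), func.apply(l));
        return configs;
    }
}
```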
      • +
      • +
        +

        futureConstants

        +
        public LayeredBuilder futureConstants(String constantName, + Function<Integer,Double> func, + int depth)
        +
        Defines a number of constant(String, double) symbols involving a {l} + notation, for example so that they can be used during layerRepeat(String, int).
        +
        +
        Parameters:
        +
        constantName - The constant symbols (these should involve {l}).
        +
        func - A lambda Java function to calculate the constant's value. This takes + as input an integer that starts from 0 for the current layer and increases by one for each + subsequently declared symbol.
        +
        depth - The number of future layers expected to use the constant.
        +
        Returns:
        +
        this layer builder.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/adhoc/parsers/Neuralang.html b/docs/javadoc/mklab/JGNN/adhoc/parsers/Neuralang.html new file mode 100644 index 00000000..32b8e0f7 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/adhoc/parsers/Neuralang.html @@ -0,0 +1,298 @@ + + + + +Neuralang + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class Neuralang

    +
    +
    java.lang.Object +
    mklab.JGNN.adhoc.ModelBuilder +
    mklab.JGNN.adhoc.parsers.Neuralang
    +
    +
    +
    +
    +
    public class Neuralang +extends ModelBuilder
    +
    +
    + +
    +
    + +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/adhoc/parsers/class-use/FastBuilder.html b/docs/javadoc/mklab/JGNN/adhoc/parsers/class-use/FastBuilder.html new file mode 100644 index 00000000..1fe41530 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/adhoc/parsers/class-use/FastBuilder.html @@ -0,0 +1,159 @@ + + + + +Uses of Class mklab.JGNN.adhoc.parsers.FastBuilder + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.adhoc.parsers.FastBuilder

    +
    +
    Packages that use FastBuilder
    +
    +
    Package
    +
    Description
    + +
     
    +
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/adhoc/parsers/class-use/LayeredBuilder.html b/docs/javadoc/mklab/JGNN/adhoc/parsers/class-use/LayeredBuilder.html new file mode 100644 index 00000000..4628d123 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/adhoc/parsers/class-use/LayeredBuilder.html @@ -0,0 +1,151 @@ + + + + +Uses of Class mklab.JGNN.adhoc.parsers.LayeredBuilder + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.adhoc.parsers.LayeredBuilder

    +
    +
    Packages that use LayeredBuilder
    +
    +
    Package
    +
    Description
    + +
     
    +
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/adhoc/parsers/class-use/Neuralang.html b/docs/javadoc/mklab/JGNN/adhoc/parsers/class-use/Neuralang.html new file mode 100644 index 00000000..1bac48d0 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/adhoc/parsers/class-use/Neuralang.html @@ -0,0 +1,102 @@ + + + + +Uses of Class mklab.JGNN.adhoc.parsers.Neuralang + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.adhoc.parsers.Neuralang

    +
    +
    Packages that use Neuralang
    +
    +
    Package
    +
    Description
    + +
     
    +
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/adhoc/parsers/package-summary.html b/docs/javadoc/mklab/JGNN/adhoc/parsers/package-summary.html new file mode 100644 index 00000000..be8d702a --- /dev/null +++ b/docs/javadoc/mklab/JGNN/adhoc/parsers/package-summary.html @@ -0,0 +1,107 @@ + + + + +mklab.JGNN.adhoc.parsers + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Package mklab.JGNN.adhoc.parsers

    +
    +
    +
    package mklab.JGNN.adhoc.parsers
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/adhoc/parsers/package-tree.html b/docs/javadoc/mklab/JGNN/adhoc/parsers/package-tree.html new file mode 100644 index 00000000..5e228318 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/adhoc/parsers/package-tree.html @@ -0,0 +1,79 @@ + + + + +mklab.JGNN.adhoc.parsers Class Hierarchy + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Hierarchy For Package mklab.JGNN.adhoc.parsers

    +Package Hierarchies: + +
    +
    +

    Class Hierarchy

    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/adhoc/parsers/package-use.html b/docs/javadoc/mklab/JGNN/adhoc/parsers/package-use.html new file mode 100644 index 00000000..c2a3314f --- /dev/null +++ b/docs/javadoc/mklab/JGNN/adhoc/parsers/package-use.html @@ -0,0 +1,91 @@ + + + + +Uses of Package mklab.JGNN.adhoc.parsers + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Package
    mklab.JGNN.adhoc.parsers

    +
    +
    Packages that use mklab.JGNN.adhoc.parsers
    +
    +
    Package
    +
    Description
    + +
     
    +
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/Distribution.html b/docs/javadoc/mklab/JGNN/core/Distribution.html new file mode 100644 index 00000000..ac45784d --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/Distribution.html @@ -0,0 +1,230 @@ + + + + +Distribution + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Interface Distribution

    +
    +
    +
    +
    All Known Implementing Classes:
    +
    Normal, Uniform
    +
    +
    +
    public interface Distribution
    +
    This interface abstracts a probability distribution + that can be passed to Tensor.setToRandom(Distribution) + for random tensor initialization.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    +
      + +
    • +
      +

      Method Summary

      +
      +
      +
      +
      +
      Modifier and Type
      +
      Method
      +
      Description
      +
      double
      + +
      +
      Retrieves the distribution's standard deviation.
      +
      +
      double
      + +
      +
      Retrieves the distribution's mean.
      +
      +
      double
      + +
      +
      Retrieves a new sample from the distribution.
      +
      + +
      setDeviation(double std)
      +
      +
      Sets the standard deviation of the distribution.
      +
      + +
      setMean(double mean)
      +
      +
      Sets the mean of the distribution.
      +
      + +
      setSeed(long seed)
      +
      +
      Sets the distribution's seed.
      +
      +
      +
      +
      +
      +
    • +
    +
    +
    +
      + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        setSeed

        +
        Distribution setSeed(long seed)
        +
        Sets the distribution's seed. This should yield reproducible sampling.
        +
        +
        Parameters:
        +
        seed - The distribution's new seed.
        +
        Returns:
        +
        this Distribution.
        +
        +
        +
      • +
      • +
        +

        sample

        +
        double sample()
        +
        Retrieves a new sample from the distribution.
        +
        +
        Returns:
        +
        A double value.
        +
        +
        +
      • +
      • +
        +

        setMean

        +
        Distribution setMean(double mean)
        +
        Sets the mean of the distribution.
        +
        +
        Parameters:
        +
        mean - The new mean.
        +
        Returns:
        +
        this Distribution.
        +
        +
        +
      • +
      • +
        +

        setDeviation

        +
        Distribution setDeviation(double std)
        +
        Sets the standard deviation of the distribution.
        +
        +
        Parameters:
        +
        std - The new standard deviation.
        +
        Returns:
        +
        this Distribution.
        +
        +
        +
      • +
      • +
        +

        getMean

        +
        double getMean()
        +
        Retrieves the distribution's mean.
        +
        +
        Returns:
        +
        The mean value.
        +
        +
        +
      • +
      • +
        +

        getDeviation

        +
        double getDeviation()
        +
        Retrieves the distribution's standard deviation.
        +
        +
        Returns:
        +
        The standard deviation.
        +
        +
        +
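The interface contract above can be mirrored by a minimal plain-Java normal distribution with fluent setters and seed-based reproducibility. This is only a sketch of the described semantics; JGNN ships its own Normal and Uniform implementations:

```java
import java.util.Random;

class NormalSketch {
    private double mean = 0, deviation = 1;
    private Random random = new Random();

    // Fluent setters return the instance, matching the interface contract.
    NormalSketch setSeed(long seed) { random = new Random(seed); return this; }
    NormalSketch setMean(double mean) { this.mean = mean; return this; }
    NormalSketch setDeviation(double std) { this.deviation = std; return this; }
    double getMean() { return mean; }
    double getDeviation() { return deviation; }
    // Draws mean + deviation * z, with z standard normal.
    double sample() { return mean + deviation * random.nextGaussian(); }
}
```

Setting the same seed on two instances yields identical samples, which is the reproducibility setSeed promises.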
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/Matrix.html b/docs/javadoc/mklab/JGNN/core/Matrix.html new file mode 100644 index 00000000..fcd3e288 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/Matrix.html @@ -0,0 +1,1269 @@ + + + + +Matrix + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class Matrix

    +
    +
    java.lang.Object +
    mklab.JGNN.core.Tensor +
    mklab.JGNN.core.Matrix
    +
    +
    +
    +
    +
    All Implemented Interfaces:
    +
    Iterable<Long>
    +
    +
    +
    Direct Known Subclasses:
    +
    ColumnRepetition, DenseMatrix, Diagonal, EmptyMatrix, RepeatMatrix, RowRepetition, SparseMatrix, SparseSymmetric, TransposedMatrix, VectorizedMatrix, WrapCols, WrapRows
    +
    +
    +
    public abstract class Matrix +extends Tensor
    +
    This class provides an abstract implementation of Matrix functionalities. + Matrices inherit Tensor operations, such as addition, + element-by-element multiplication, randomizing them and producing zero copies. + Additionally, matrix multiplication, transposition and access operations are + provided.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        getRowName

        +
        public String getRowName()
        +
        +
      • +
      • +
        +

        getColName

        +
        public String getColName()
        +
        +
      • +
      • +
        +

        setDimensionName

        +
        public Matrix setDimensionName(String rowName, + String colName)
        +
        Sets a name for the matrix's row and column dimensions. If set, names are checked for + compatibility during matrix operations.
        +
        +
        Parameters:
        +
        rowName - The new row name or null to remove current name.
        +
        colName - The new column name or null to remove current name.
        +
        Returns:
        +
        this Matrix instance.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        setRowName

        +
        public Matrix setRowName(String rowName)
        +
        Sets a name for the matrix's row dimension. If set, names are checked for + compatibility during matrix operations.
        +
        +
        Parameters:
        +
        rowName - The new row name or null to remove current name.
        +
        Returns:
        +
        this Matrix instance.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        setColName

        +
        public Matrix setColName(String colName)
        +
        Sets a name for the matrix's column dimension. If set, names are checked for + compatibility during matrix operations.
        +
        +
        Parameters:
        +
        colName - The new column name or null to remove current name.
        +
        Returns:
        +
        this Matrix instance.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        getNonZeroEntries

        +
        public abstract Iterable<Map.Entry<Long,Long>> getNonZeroEntries()
        +
        Retrieves an iterable that traverses (row, col) entry pairs + of non zero entries.
        +
        +
        Returns:
        +
        An Entry iterable.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        setDimensionName

        +
        public Matrix setDimensionName(Tensor other)
        +
        Description copied from class: Tensor
        +
        Fills in dimension names per an example Tensor.isMatching(mklab.JGNN.core.Tensor) tensor. This appropriately fills in dimension + names of inherited classes too, such as matrices. Effectively, this method automatically infers + dimension names during operations.
        +
        +
        Overrides:
        +
        setDimensionName in class Tensor
        +
        Parameters:
        +
        other - The tensor from which to retrieve dimension names.
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
      • +
      • +
        +

        zeroCopy

        +
        public Matrix zeroCopy()
        +
        Creates a Matrix with the same class and dimensions and all elements set to zero.
        +
        +
        Overrides:
        +
        zeroCopy in class Tensor
        +
        Returns:
        +
        A Matrix with the same class and dimensions.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        zeroCopy

        +
        public Tensor zeroCopy(long size)
        +
        Creates a Matrix with the same class and dimensions and all elements set to zero. This + checks that the copy has a total number of elements equal to the given size.
        +
        +
        Specified by:
        +
        zeroCopy in class Tensor
        +
        Parameters:
        +
        size - The desired size of the matrix.
        +
        Returns:
        +
        A Matrix with the same class and dimensions.
        +
        Throws:
        +
        RuntimeException - If the resulting tensor is not of the same size.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        zeroCopy

        +
        public Tensor zeroCopy(Tensor prototype)
        +
        Creates a tensor of the same class and all elements set to zero, + but size and dimension names are obtained from a prototype tensor.
        +
        +
        Overrides:
        +
        zeroCopy in class Tensor
        +
        +
        +
      • +
      • +
        +

        zeroCopy

        +
        public abstract Matrix zeroCopy(long rows, + long cols)
        +
        Creates a matrix of the same class and all elements set to zero, but with + a given number of rows and columns.
        +
        +
        Parameters:
        +
        rows - The number of rows of the matrix.
        +
        cols - The number of columns of the matrix.
        +
        Returns:
        +
        A Matrix of the same class.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        getRows

        +
        public long getRows()
        +
        Retrieves the number of rows of a matrix.
        +
        +
        Returns:
        +
        The number of rows.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        getCols

        +
        public long getCols()
        +
        Retrieves the number of columns of a matrix.
        +
        +
        Returns:
        +
        The number of columns.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        getDimensionSize

        +
        public long getDimensionSize(String name)
        +
        Retrieves the value of the dimension with the given name.
        +
        +
        Parameters:
        +
        name - The given name.
        +
        Returns:
        +
        Either the number of rows or the number of columns, depending on which dimension + the given name matches.
        +
        Throws:
        +
        RuntimeException - if both matrix dimensions have the same name or if the given + name is not a matrix dimension.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        get

        +
        public double get(long row, + long col)
        +
        Retrieves the value stored at a matrix element.
        +
        +
        Parameters:
        +
        row - The element's row.
        +
        col - The element's column.
        +
        Returns:
        +
        The value corresponding to the element (row, col).
        +
        +
        +
      • +
      • +
        +

        put

        +
        public Matrix put(long row, + long col, + double value)
        +
        Stores values at matrix elements.
        +
        +
        Parameters:
        +
        row - The element's row.
        +
        col - The element's column.
        +
        value - The value to store.
        +
        Returns:
        +
        this Matrix instance.
        +
        +
        +
      • +
      • +
        +

        transposed

        +
        public Matrix transposed()
        +
        Creates a transposed copy of the matrix. + Note: Contrary to typical tensor operations, in-place transposition is not supported. + However, related methods can help avoid explicit transposition without allocating more + memory.
        +
        +
        Returns:
        +
        A transposed copy of the matrix.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        asTransposed

        +
        public Matrix asTransposed()
        +
        Creates a transposed version of the matrix that accesses the same elements (thus, editing one + edits the other) without allocating additional memory.
        +
        +
        Returns:
        +
        A TransposedMatrix.
        +
        +
        +
      • +
      • +
        +

        transform

        +
        public Tensor transform(Tensor x)
        +
        Performs the linear algebra transformation A*x, where A is this matrix and x a vector.
        +
        +
        Parameters:
        +
        x - The one-dimensional tensor which is the vector being transformed.
        +
        Returns:
        +
        The one-dimensional outcome of the transformation.
        +
        +
        +
      • +
      • +
        +

        matmul

        +
        public Matrix matmul(Matrix with)
        +
        Performs the matrix multiplication this*with, where the recipient matrix is the left operand.
        +
        +
        Parameters:
        +
        with - The matrix to multiply with.
        +
        Returns:
        +
        A matrix that stores the outcome of the multiplication.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        matmul

        +
        public Matrix matmul(Matrix with, + boolean transposeSelf, + boolean transposeWith)
        +
        Can be used to perform fast computation of the matrix multiplications +
        this*with, +
        this.transposed()*with +
        this*with.transposed(), +
        this.transposed()*with.transposed() +
        while avoiding the overhead of calling + transposed(). In the first of those cases, this operation + becomes equivalent to matmul(Matrix).
        +
        +
        Parameters:
        +
        with - The matrix to multiply with.
        +
        transposeSelf - Whether this matrix should be transposed before multiplication.
        +
        transposeWith - Whether the multiplied with matrix should be transposed before multiplication.
        +
        Returns:
        +
        A matrix that stores the outcome of the multiplication.
        +
        See Also:
        +
        + +
        +
        +
        +
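The transpose flags can be honored without materializing transposed copies by swapping index roles during the multiplication. A plain double[][] sketch of that idea (not JGNN's implementation):

```java
class MatmulSketch {
    // Multiplies a (optionally transposed) by b (optionally transposed)
    // without allocating transposed copies: the flags only change which
    // index addresses rows and which addresses columns.
    static double[][] matmul(double[][] a, double[][] b,
                             boolean transposeA, boolean transposeB) {
        int n = transposeA ? a[0].length : a.length;       // output rows
        int k = transposeA ? a.length : a[0].length;       // shared dimension
        int m = transposeB ? b.length : b[0].length;       // output columns
        double[][] out = new double[n][m];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < m; j++)
                for (int t = 0; t < k; t++)
                    out[i][j] += (transposeA ? a[t][i] : a[i][t])
                               * (transposeB ? b[j][t] : b[t][j]);
        return out;
    }
}
```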
      • +
      • +
        +

        external

        +
        public static Matrix external(Tensor horizontal, + Tensor vertical)
        +
        Produces the external (outer) product of two tensors. + This is equivalent to, but faster than, calling matmul(horizontal.asColumn(), vertical.asRow()).
        +
        +
        Parameters:
        +
        horizontal - The first tensor.
        +
        vertical - The second tensor.
        +
        Returns:
        +
        A dense matrix holding the external product.
        +
        +
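The external (outer) product described above reduces to out[i][j] = horizontal[i] * vertical[j]. A plain-Java sketch of the computation:

```java
class ExternalSketch {
    // Outer product of two vectors: every pairwise product becomes a
    // matrix entry, yielding a horizontal.length x vertical.length matrix.
    static double[][] external(double[] horizontal, double[] vertical) {
        double[][] out = new double[horizontal.length][vertical.length];
        for (int i = 0; i < horizontal.length; i++)
            for (int j = 0; j < vertical.length; j++)
                out[i][j] = horizontal[i] * vertical[j];
        return out;
    }
}
```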
        +
      • +
      • +
        +

        describe

        +
        public String describe()
        +
        Description copied from class: Tensor
        +
        Describes the type, size and other characteristics of the tensor.
        +
        +
        Overrides:
        +
        describe in class Tensor
        +
        Returns:
        +
        A String description.
        +
        +
        +
      • +
      • +
        +

        onesMask

        +
        public Matrix onesMask()
        +
        Produces a mask that indicates the non-zero elements of the matrix. + Elements correspond to those of the matrix, and the non-zero ones are set to 1.
        +
        +
        Returns:
        +
        A matrix of the same dimensions.
        +
        +
        +
      • +
      • +
        +

        symmetricNormalization

        +
        public Matrix symmetricNormalization()
        +
        Creates a copy of the Matrix that holds its symmetrically normalized version.
        +
        +
        Returns:
        +
        A new Matrix of the same dimensions.
        +
        See Also:
        +
        + +
        +
        +
        +
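Symmetric normalization computes D^-1/2 A D^-1/2, where D is the diagonal matrix of row sums (node degrees for adjacency matrices). A plain-Java sketch of that computation; leaving zero-degree rows untouched is an assumption of this sketch:

```java
class SymNormSketch {
    // Symmetric normalization D^{-1/2} A D^{-1/2} with D the row-sum
    // (degree) diagonal; rows with zero degree are left as zeros.
    static double[][] symmetricNormalization(double[][] a) {
        int n = a.length;
        double[] invSqrtDeg = new double[n];
        for (int i = 0; i < n; i++) {
            double deg = 0;
            for (double v : a[i]) deg += v;
            invSqrtDeg[i] = deg == 0 ? 0 : 1 / Math.sqrt(deg);
        }
        double[][] out = new double[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                out[i][j] = invSqrtDeg[i] * a[i][j] * invSqrtDeg[j];
        return out;
    }
}
```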
      • +
      • +
        +

        setMainDiagonal

        +
        public Matrix setMainDiagonal(double value)
        +
        Sets the matrix's main diagonal elements to a given value.
        +
        +
        Parameters:
        +
        value - The value to set to the diagonal's elements.
        +
        Returns:
        +
        this Matrix instance.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        setDiagonal

        +
        public Matrix setDiagonal(long diagonal, + double value)
        +
        Sets the matrix's specified diagonal elements to a given value.
        +
        +
        Parameters:
        +
        diagonal - Which diagonal to set; 0 corresponds to the main diagonal.
        +
        value - The value to set to the diagonal's elements.
        +
        Returns:
        +
        this Matrix instance.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        setToSymmetricNormalization

        +
        public Matrix setToSymmetricNormalization()
        +
        Sets the Matrix to its symmetrically normalized transformation + by appropriately adjusting its element values.
        +
        +
        Returns:
        +
        this Matrix instance.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        setToASymmetricNormalization

        +
        public Matrix setToASymmetricNormalization()
        +
        Sets the Matrix to its asymmetrically normalized transformation + by appropriately adjusting its element values.
        +
        +
        Returns:
        +
        this Matrix instance.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        accessDim

        +
        public Tensor accessDim(long index, + String name)
        +
        Retrieves either the given row or column as a tensor.
        +
        +
        Parameters:
        +
        index - The dimension index to access.
        +
        name - The dimension's name.
        +
        Returns:
        +
        Either an AccessRow or an AccessCol at the given index.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        accessRow

        +
        public Tensor accessRow(long row)
        +
        Retrieves the given row as a tensor. Editing the result + also edits the original matrix. + No new memory is allocated for matrix values.
        +
        +
        Parameters:
        +
        row - The given row.
        +
        Returns:
        +
        An AccessRow instance of the corresponding row.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        accessCol

        +
        public Tensor accessCol(long col)
        +
        Retrieves the given column as a tensor. Editing the result + also edits the original matrix. + No new memory is allocated for matrix values.
        +
        +
        Parameters:
        +
        col - The given column.
        +
        Returns:
        +
        An AccessCol of the corresponding column.
        +
        See Also:
        +
        + +
        +
        +
        +
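The zero-allocation access semantics (editing the returned tensor edits the original matrix) can be pictured as a view over shared backing storage. A hypothetical plain-Java sketch, assuming row-major storage:

```java
class RowViewSketch {
    // A row "view" over a row-major backing array: reads and writes go
    // through to the shared storage, so no element memory is copied.
    private final double[] data;
    private final int offset, cols;

    RowViewSketch(double[] data, int row, int cols) {
        this.data = data;
        this.offset = row * cols;
        this.cols = cols;
    }
    double get(int col) { return data[offset + col]; }
    void put(int col, double value) { data[offset + col] = value; }
    int size() { return cols; }
}
```

Writing through the view mutates the matrix storage, which is what "editing the result also edits the original matrix" implies.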
      • +
      • +
        +

        toString

        +
        public String toString()
        +
        Description copied from class: Tensor
        +
        A string serialization of the tensor that can be used by the constructor DenseTensor(String) to create an identical copy.
        +
        +
        Overrides:
        +
        toString in class Tensor
        +
        Returns:
        +
        A serialization of the tensor.
        +
        +
        +
      • +
      • +
        +

        toNonZeroString

        +
        public String toNonZeroString()
        +
        +
      • +
      • +
        +

        accessRows

        +
        public List<Tensor> accessRows()
        +
        Organizes matrix rows to a list of tensors that share entries. + This operation does not allocate memory for matrix elements and editing + tensor elements edits the original matrix's elements.
        +
        +
        Returns:
        +
        A list of AccessRow instances.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        accessColumns

        +
        public List<Tensor> accessColumns()
        +
        Organizes matrix columns to a list of tensors that share entries. + This operation does not allocate memory for matrix elements and editing + tensor elements edits the original matrix's elements.
        +
        +
        Returns:
        +
        A list of AccessCol instances.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        accessRows

        +
        public Matrix accessRows(long... rows)
        +
        Organizes specific matrix rows to a list of tensors that share entries. + This operation does not allocate memory for matrix elements and editing + tensor elements edits the original matrix's elements.
        +
        +
        Parameters:
        +
        rows - An array of rows to access.
        +
        Returns:
        +
        A list of AccessRow instances.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        accessColumns

        +
        public Matrix accessColumns(long... cols)
        +
        Organizes specific matrix columns to a list of tensors that share entries. + This operation does not allocate memory for matrix elements and editing + tensor elements edits the original matrix's elements.
        +
        +
        Parameters:
        +
        cols - An array of columns to access.
        +
        Returns:
        +
        A list of AccessCol instances.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        accessRows

        +
        public Matrix accessRows(Tensor rows)
        +
        Organizes specific matrix rows to a list of tensors that share entries. + This operation does not allocate memory for matrix elements and editing + tensor elements edits the original matrix's elements.
        +
        +
        Parameters:
        +
        rows - A tensor whose values hold the rows to access.
        +
        Returns:
        +
        A list of AccessRow instances.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        accessColumns

        +
        public Matrix accessColumns(Tensor cols)
        +
        Organizes matrix columns to a list of tensors that share entries. + This operation does not allocate memory for matrix elements and editing + tensor elements edits the original matrix's elements.
        +
        +
        Parameters:
        +
        cols - A tensor whose values hold the columns to access.
        +
        Returns:
        +
        A list of AccessCol instances.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        accessRows

        +
        public List<Tensor> accessRows(Iterable<Long> rowIds)
        +
        Organizes some matrix rows to a list of tensors that share entries. + This operation does not allocate memory for matrix elements and editing + tensor elements edits the original matrix's elements.
        +
        +
        Parameters:
        +
        rowIds - The rows to access.
        +
        Returns:
        +
        A list of AccessRow instances.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        accessColumns

        +
        public List<Tensor> accessColumns(Iterable<Long> colIds)
        +
        Organizes some matrix columns to a list of tensors that share entries. + This operation does not allocate memory for matrix elements and editing + tensor elements edits the original matrix's elements.
        +
        +
        Parameters:
        +
        colIds - The columns to access.
        +
        Returns:
        +
        A list of AccessCol instances.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        fromDouble

        +
        public static Matrix fromDouble(double value)
        +
        Converts a given value to a JGNN-compatible 1x1 matrix.
        +
        +
        Parameters:
        +
        value - A given value.
        +
        Returns:
        +
        A Matrix holding the given value.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        toDense

        +
        public Matrix toDense()
        +
        Creates a copy of the matrix organized as a dense matrix.
        +
        +
        Returns:
        +
        A DenseMatrix instance.
        +
        +
        +
      • +
      • +
        +

        toSparse

        +
        public Matrix toSparse()
        +
        Creates a copy of the matrix organized as a sparse matrix.
        +
        +
        Returns:
        +
        A SparseMatrix instance.
        +
        +
        +
      • +
      • +
        +

        eye

        +
        public static SparseMatrix eye(long dims)
        +
        Creates a sparse unit matrix.
        +
        +
        Parameters:
        +
        dims - The dimensions of the unit matrix.
        +
        Returns:
        +
        A sparse matrix.
        +
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/Memory.Scope.html b/docs/javadoc/mklab/JGNN/core/Memory.Scope.html new file mode 100644 index 00000000..2b8ac064 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/Memory.Scope.html @@ -0,0 +1,164 @@ + + + + +Memory.Scope + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class Memory.Scope

    +
    +
    java.lang.Object +
    mklab.JGNN.core.Memory.Scope
    +
    +
    +
    +
    Enclosing class:
    +
    Memory
    +
    +
    +
    public static class Memory.Scope +extends Object
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        enter

        +
        public void enter()
        +
        +
      • +
      • +
        +

        exit

        +
        public void exit()
        +
        +
      • +
      • +
        +

        register

        +
        public void register(double[] value)
        +
        +
      • +
      • +
        +

        unregister

        +
        public void unregister(double[] value)
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/Memory.html b/docs/javadoc/mklab/JGNN/core/Memory.html new file mode 100644 index 00000000..76526f9c --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/Memory.html @@ -0,0 +1,202 @@ + + + + +Memory + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class Memory

    +
    +
    java.lang.Object +
    mklab.JGNN.core.Memory
    +
    +
    +
    +
    public class Memory +extends Object
    +
    A memory management system for thread-safe allocation and release of arrays of doubles. + Soft references to allocated arrays are kept so that released ones can be reused by future + allocation calls without explicitly initializing memory.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    +
      + +
    • +
      +

      Nested Class Summary

      +
      Nested Classes
      +
      +
      Modifier and Type
      +
      Class
      +
      Description
      +
      static class 
      + +
       
      +
      +
      +
    • + +
    • +
      +

      Constructor Summary

      +
      Constructors
      +
      +
      Constructor
      +
      Description
      + +
       
      +
      +
      +
    • + +
    • +
      +

      Method Summary

      +
      +
      +
      +
      +
      Modifier and Type
      +
      Method
      +
      Description
      +
      static double[]
      +
      allocate(int length, + Object boundTo)
      +
       
      +
      static void
      +
      release(double[] value)
      +
       
      + + +
       
      +
      +
      +
      +
      +

      Methods inherited from class java.lang.Object

      +equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
      +
      +
    • +
    +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        Memory

        +
        public Memory()
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        scope

        +
        public static Memory.Scope scope()
        +
        +
      • +
      • +
        +

        allocate

        +
        public static double[] allocate(int length, + Object boundTo)
        +
        +
      • +
      • +
        +

        release

        +
        public static void release(double[] value)
        +
        +
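The class description above (soft references to released arrays, reused by later allocations of the same length) can be sketched as follows; ArrayPool is a hypothetical stand-in for illustration, not JGNN's actual Memory implementation, and it omits the scope/binding machinery.

```java
import java.lang.ref.SoftReference;
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the pooling idea behind Memory (not JGNN code):
// released double[] arrays are parked behind soft references, keyed by
// length, so a later allocation of the same length can reuse one.
final class ArrayPool {
    private static final Map<Integer, ArrayDeque<SoftReference<double[]>>> pool = new HashMap<>();

    static synchronized double[] allocate(int length) {
        ArrayDeque<SoftReference<double[]>> queue = pool.get(length);
        while (queue != null && !queue.isEmpty()) {
            double[] reused = queue.pop().get();
            if (reused != null)
                return reused; // reuse without allocating fresh memory
        }
        return new double[length]; // nothing reusable; allocate fresh
    }

    static synchronized void release(double[] value) {
        pool.computeIfAbsent(value.length, k -> new ArrayDeque<>())
            .push(new SoftReference<>(value));
    }
}
```

Soft references let the garbage collector reclaim pooled arrays under memory pressure, so the pool can only speed things up, never cause an OutOfMemoryError by itself.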
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/Slice.html b/docs/javadoc/mklab/JGNN/core/Slice.html new file mode 100644 index 00000000..e6cd6b28 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/Slice.html @@ -0,0 +1,352 @@ + + + + +Slice + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class Slice

    +
    +
    java.lang.Object +
    mklab.JGNN.core.Slice
    +
    +
    +
    +
    All Implemented Interfaces:
    +
    Iterable<Long>
    +
    +
    +
    public class Slice +extends Object +implements Iterable<Long>
    +
    This class provides an interface with which to define data slices, + for instance to sample labels.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    +
      + +
    • +
      +

      Constructor Summary

      +
      Constructors
      +
      +
      Constructor
      +
      Description
      +
      Slice(Iterable<Long> collection)
      +
      +
      Instantiates the data slice from a collection of element identifiers.
      +
      +
      +
      +
    • + +
    • +
      +

      Method Summary

      +
      +
      +
      +
      +
      Modifier and Type
      +
      Method
      +
      Description
      + + +
      +
      Creates a dense tensor holding the slice's identifiers.
      +
      + + +
       
      + +
      range(double from, + double end)
      +
      +
      Performs the range(int, int) operation + while replacing values of from and end + with (int)(from*size()) and (int)(end*size()) + so that fractional ranges can be obtained.
      +
      + +
      range(int from, + int end)
      +
      +
      Obtains the identifiers in a given range of the (shuffled) slice.
      +
      + + +
      +
      Constructs a column matrix holding identifiers in + the range 0,1,...,size()-1 so that the pattern + slice.samplesAsFeatures().accessRows(slice.range(from, end)) + retrieves one-element tensors holding + slice[from], slice[from+1], ...
      +
      + + +
      +
      Shuffles the slice.
      +
      + +
      shuffle(int seed)
      +
      +
      Shuffles the slice with a provided randomization seed.
      +
      +
      int
      + +
      +
      Retrieves the size of the slice.
      +
      +
      +
      +
      +
      +

      Methods inherited from class java.lang.Object

      +equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
      +
      +

      Methods inherited from interface java.lang.Iterable

      +forEach, spliterator
      +
      +
    • +
    +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        Slice

        +
        public Slice(Iterable<Long> collection)
        +
        Instantiates the data slice from a collection of element identifiers.
        +
        +
        Parameters:
        +
        collection - An iterable of longs.
        +
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        shuffle

        +
        public Slice shuffle()
        +
        Shuffles the slice.
        +
        +
        Returns:
        +
        this slice.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        shuffle

        +
        public Slice shuffle(int seed)
        +
        Shuffles the slice with a provided randomization seed.
        +
        +
        Parameters:
        +
        seed - The seed to shuffle with.
        +
        Returns:
        +
        this slice.
        +
        See Also:
        +
        + +
        +
        +
        +
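A minimal sketch of what the seed buys you, assuming nothing about JGNN's internals (SeededShuffle is an illustrative helper, not part of the library): the same seed reproduces the same ordering of identifiers, which keeps train/test splits stable across runs.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

// Hypothetical illustration of seeded shuffling, as in Slice.shuffle(seed).
final class SeededShuffle {
    static List<Long> shuffled(List<Long> ids, int seed) {
        List<Long> copy = new ArrayList<>(ids); // leave the input intact
        Collections.shuffle(copy, new Random(seed)); // deterministic per seed
        return copy;
    }
}
```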
      • +
      • +
        +

        range

        +
        public Slice range(int from, + int end)
        +
        Obtains the identifiers in a given range of the (shuffled) slice.
        +
        +
        Parameters:
        +
        from - The beginning of the identifiers' position in the slice.
        +
        end - The end (non-inclusive) of the identifiers' position in the slice.
        +
        Returns:
        +
        A new Slice instance holding the position identifiers in this one's given range.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        samplesAsFeatures

        +
        public Matrix samplesAsFeatures()
        +
        Constructs a column matrix holding identifiers in + the range 0,1,...,size()-1 so that the pattern + slice.samplesAsFeatures().accessRows(slice.range(from, end)) + retrieves one-element tensors holding + slice[from], slice[from+1], ... slice[end]. + The constructed matrix is typically used as node identifier data. + + This is different from asTensor().
        +
        +
        Returns:
        +
        A Matrix.
        +
        +
        +
      • +
      • +
        +

        range

        +
        public Slice range(double from, + double end)
        +
        Performs the range(int, int) operation + while replacing values of from and end + with (int)(from*size()) and (int)(end*size()) + so that fractional ranges can be obtained. For example, + you can call slice.shuffle().range(0.5, 1) to obtain a + random subset of the slice's identifiers.
        +
        +
        Parameters:
        +
        from - An integer at least 1 or a double in the range [0,1).
        +
        end - An integer greater than 1 or a double in the range [0,1].
        +
        Returns:
        +
        A new Slice instance holding the position identifiers in this one's given range.
        +
        See Also:
        +
        + +
        +
        +
        +
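The fractional-to-integer conversion described above reduces to the following arithmetic (FractionalRange is a hypothetical helper for illustration, not JGNN code):

```java
// Sketch of how fractional range endpoints resolve to absolute
// positions: range(0.5, 1) on a slice of 100 identifiers becomes
// the integer range [50, 100).
final class FractionalRange {
    static int toIndex(double fraction, int size) {
        return (int) (fraction * size); // truncation, as documented
    }
}
```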
      • +
      • +
        +

        size

        +
        public int size()
        +
        Retrieves the size of the slice.
        +
        +
        Returns:
        +
        An integer.
        +
        +
        +
      • +
      • +
        +

        iterator

        +
        public Iterator<Long> iterator()
        +
        +
        Specified by:
        +
        iterator in interface Iterable<Long>
        +
        +
        +
      • +
      • +
        +

        asTensor

        +
        public Tensor asTensor()
        +
        Creates a dense tensor holding the slice's identifiers.
        +
        +
        Returns:
        +
        A DenseTensor.
        +
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/Tensor.html b/docs/javadoc/mklab/JGNN/core/Tensor.html new file mode 100644 index 00000000..64292c7b --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/Tensor.html @@ -0,0 +1,1656 @@ + + + + +Tensor + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class Tensor

    +
    +
    java.lang.Object +
    mklab.JGNN.core.Tensor
    +
    +
    +
    +
    All Implemented Interfaces:
    +
    Iterable<Long>
    +
    +
    +
    Direct Known Subclasses:
    +
    AccessCol, AccessRow, AccessSubtensor, DenseTensor, EmptyTensor, Matrix, RepeatTensor, SparseTensor, VectorizedTensor
    +
    +
    +
    public abstract class Tensor +extends Object +implements Iterable<Long>
    +
    This class provides a native Java implementation of Tensor functionalities.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    +
      + +
    • +
      +

      Field Summary

      +
      Fields
      +
      +
      Modifier and Type
      +
      Field
      +
      Description
      +
      static boolean
      + +
       
      +
      +
      +
    • + +
    • +
      +

      Constructor Summary

      +
      Constructors
      +
      +
      Constructor
      +
      Description
      +
      Tensor(long size)
      +
      +
       Constructor that creates a tensor of zeros given its number of elements.
      +
      +
      +
      +
    • + +
    • +
      +

      Method Summary

      +
      +
      +
      +
      +
      Modifier and Type
      +
      Method
      +
      Description
      + +
      abs()
      +
      +
      Computes the absolute value of tensor elements.
      +
      + +
      accessSubtensor(long from)
      +
      +
      Wraps a range of elements within a tensor + without allocating memory anew.
      +
      + +
      accessSubtensor(long from, + long to)
      +
      +
      Wraps a range of elements within a tensor + without allocating memory anew.
      +
      + +
      add(double value)
      +
       
      + +
      add(Tensor tensor)
      +
       
      +
      long
      + +
      +
      Computes the position of the maximum tensor element.
      +
      +
      long
      + +
      +
      Computes the position of the minimum tensor element.
      +
      + + +
      +
      Accesses the tensor through a single-column matrix with the tensor as the only row.
      +
      + + +
      +
      Accesses the tensor through a single-row matrix with the tensor as the only column.
      +
      +
      void
      + +
      +
      Asserts that the tensor holds only finite values.
      +
      + + +
      +
      Asserts that the tensor's dimensions match with another tensor.
      +
      +
      void
      +
      assertSize(long size)
      +
      +
      Asserts that the tensor's size() matches the given size.
      +
      + +
      assign(Tensor tensor)
      +
      +
      Performs a sparse assignment.
      +
      +
      <Type> Type
      +
      cast(Class<Type> type)
      +
      +
      Performs the equivalent of Java's typecasting that fits + in functional interfaces.
      +
      + + +
      +
      Creates a zeroCopy() and transfers to it all potentially non-zero element values.
      +
      +
      double
      + +
      +
       Provides the memory allocation density of getNonZeroElements() + compared to the size of the tensor.
      +
      + + +
      +
      Describes the type, size and other characteristics of the tensor.
      +
      +
      double
      +
      dot(Tensor tensor)
      +
      +
      Performs the dot product between this and another tensor.
      +
      +
      double
      +
      dot(Tensor tensor1, + Tensor tensor2)
      +
      +
      Performs the triple dot product between this and two other tensors.
      +
      +
      long
      + +
      +
       Provides an estimate of the number of non-zero elements stored in the tensor, + where this number is equal to the size for dense tensors, but equal to the actual + number of non-zero elements for sparse tensors.
      +
      + + +
      +
      Computes the exponential minus 1 of tensor elements.
      +
      +
      static Tensor
      +
      fromDouble(double value)
      +
      +
      Converts a given value to a JGNN-compatible dense tensor.
      +
      +
      static Tensor
      +
      fromRange(long end)
      +
      +
      Creates a dense tensor holding the desired range [0, 1, ..., end-1].
      +
      +
      static Tensor
      +
      fromRange(long start, + long end)
      +
      +
      Creates a dense tensor holding the desired range [start, start+1, ..., end-1].
      +
      +
      abstract double
      +
      get(long pos)
      +
      +
      Retrieves the value of a tensor element at a given position.
      +
      + + +
       
      + + +
      +
      Retrieves an iterable that wraps traverseNonZeroElements().
      +
      + + +
       
      + + +
       
      + +
      log()
      +
      +
      Computes the logarithm of tensor elements.
      +
      +
      double
      +
      max()
      +
      +
      Computes the maximum tensor element.
      +
      +
      double
      +
      min()
      +
      +
      Computes the minimum tensor element.
      +
      + +
      multiply(double value)
      +
       
      + +
      multiply(Tensor tensor)
      +
       
      + + +
      +
      Computes the negative of tensor elements.
      +
      +
      double
      + +
       
      + + +
       
      +
      abstract void
      + +
      +
      Deprecated. +
       This method may not be present in future versions + of the library, depending on whether memory reuse proves useful or not.
      +
      +
      +
      abstract Tensor
      +
      put(long pos, + double value)
      +
      +
      Assign a value to a tensor element.
      +
      + +
      putAdd(long pos, + double value)
      +
      +
      Add a value to a tensor element.
      +
      +
      abstract void
      + +
      +
      Deprecated. +
       This method may not be present in future versions + of the library, depending on whether memory reuse proves useful or not.
      +
      +
      + + +
      +
      Performs in-memory set of each element to its absolute value.
      +
      + +
      selfAdd(double value)
      +
      +
      Performs in-memory addition to the Tensor, storing the result in itself.
      +
      + +
      selfAdd(Tensor tensor)
      +
      +
      Performs in-memory addition to the Tensor, storing the result in itself.
      +
      + +
      selfAdd(Tensor tensor, + double weight)
      +
      +
      Performs in-memory weighted addition to the Tensor, storing the result in itself.
      +
      + + +
      +
      Sets the exponential minus 1 of tensor elements.
      +
      + + +
      +
      Performs in-memory the inverse of each non-zero element.
      +
      + + +
      +
      Performs in-memory set of each element to the logarithm of its absolute value.
      +
      + +
      selfMultiply(double value)
      +
      +
       Performs in-memory multiplication on the Tensor, storing the result in itself.
      +
      + + +
      +
       Performs in-memory multiplication on the Tensor, storing the result in itself.
      +
      + + +
      +
      Performs in-memory set of each element to the negative of itself.
      +
      + + +
      +
      Performs in-memory set of each element to the square root of its absolute value.
      +
      + + +
      +
      Performs in-memory subtraction from the Tensor, storing the result in itself.
      +
      + +
      setDimensionName(String dimensionName)
      +
      +
      Sets a name for the tensor's one dimension.
      +
      + + +
      +
       Fills in dimension names based on an example isMatching(mklab.JGNN.core.Tensor) tensor.
      +
      + + +
      +
      L2-normalizes the tensor's elements.
      +
      + + +
      +
      Set all tensor element values to 1.
      +
      + + +
      +
      Divides the tensor's elements with their sum.
      +
      + + +
      +
      Set tensor elements to random values from the uniform range [0,1]
      +
      + +
      setToRandom(Distribution distribution)
      +
      +
      Set tensor elements to random values by sampling them from a given Distribution + instance.
      +
      + + +
      +
      Set all tensor element values to 1/size()
      +
      + + +
      +
      Set all tensor element values to 0.
      +
      +
      long
      + +
       
      + + +
      +
      Computes the square root of tensor elements.
      +
      + +
      subtract(Tensor tensor)
      +
       
      +
      double
      +
      sum()
      +
       
      +
      double[]
      + +
      +
      Retrieves a representation of the Tensor as an array of doubles.
      +
      +
      double
      + +
      +
      Converts a tensor of size()==1 to double.
      +
      + + +
       
      + + +
      +
      A string serialization of the tensor that can be used by the constructor DenseTensor(String) to create an identical copy.
      +
      +
      abstract Iterator<Long>
      + +
      +
      Retrieves positions within the tensor that may hold non-zero elements.
      +
      + + +
      +
       Creates a tensor of the same class with the same size and all elements set to zero.
      +
      +
      abstract Tensor
      +
      zeroCopy(long size)
      +
      +
       Creates a tensor of the same class with a given size and all elements set to zero.
      +
      + +
      zeroCopy(Tensor prototype)
      +
      +
      Creates a tensor of the same class and all elements set to zero, + but size and dimension names are obtained from a prototype tensor.
      +
      +
      +
      +
      +
      +

      Methods inherited from class java.lang.Object

      +equals, getClass, hashCode, notify, notifyAll, wait, wait, wait
      +
      +

      Methods inherited from interface java.lang.Iterable

      +forEach, spliterator
      +
      +
    • +
    +
    +
    +
      + +
    • +
      +

      Field Details

      +
        +
      • +
        +

        vectorization

        +
        public static boolean vectorization
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        Tensor

        +
        public Tensor(long size)
        +
        Constructor that creates a tensor of zeros given its number of elements.
        +
        +
        Parameters:
        +
        size - The number of tensor elements
        +
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        setDimensionName

        +
        public Tensor setDimensionName(String dimensionName)
        +
        Sets a name for the tensor's one dimension. If set, names are checked for + compatibility during operations, so that tensors lying across different dimensions + do not match. Removed dimension names match anything.
        +
        +
        Parameters:
        +
        dimensionName - The new row name or null to remove current name.
        +
        Returns:
        +
        this Tensor instance.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        getDimensionName

        +
        public String getDimensionName()
        +
        +
      • +
      • +
        +

        setToRandom

        +
        public Tensor setToRandom()
        +
        Set tensor elements to random values from the uniform range [0,1]
        +
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
      • +
      • +
        +

        setToRandom

        +
        public Tensor setToRandom(Distribution distribution)
        +
        Set tensor elements to random values by sampling them from a given Distribution + instance.
        +
        +
        Parameters:
        +
        distribution - The distribution instance to sample from.
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
      • +
      • +
        +

        assertFinite

        +
        public void assertFinite()
        +
        Asserts that the tensor holds only finite values. This helps catch errors + early and avoids misidentifying models as high quality by comparing + desired outcomes with NaN when in reality values pass through infinity and hence + do not converge.
        +
        +
        Throws:
        +
        RuntimeException - if one or more tensor elements are NaN or Inf.
        +
        +
        +
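The documented check amounts to scanning for non-finite values and failing fast; a sketch under that assumption (FiniteCheck is an illustrative name, not JGNN code):

```java
// Hypothetical sketch of the assertFinite contract: scan all values
// and throw on NaN or infinity instead of letting them silently
// propagate into model quality comparisons.
final class FiniteCheck {
    static void assertFinite(double[] values) {
        for (double v : values)
            if (!Double.isFinite(v)) // false for NaN, +Inf, -Inf
                throw new RuntimeException("Non-finite tensor value: " + v);
    }
}
```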
      • +
      • +
        +

        release

        +
        public abstract void release()
        +
        Deprecated. +
        This method may not be present in future versions + of the library, depending on whether memory reuse proves useful or not.
        +
        +
        If the subclassed tensor allows it, releases all memory it takes up + so that the garbage collector will eventually clean it up. This + memory will be released anyway by Java once there are no more + references to the object.
        +
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        persist

        +
        public abstract void persist()
        +
        Deprecated. +
        This method may not be present in future versions + of the library, depending on whether memory reuse proves useful or not.
        +
        +
        If supported by the subclassed tensor, invalidates calls to + release() so that memory is de-allocated only when + object references expire.
        +
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        put

        +
        public abstract Tensor put(long pos, + double value)
        +
        Assign a value to a tensor element. All tensor operations use this function to wrap + element assignments.
        +
        +
        Parameters:
        +
        pos - The position of the tensor element
        +
        value - The value to assign
        +
        Returns:
        +
        this Tensor instance.
        +
        Throws:
        +
        RuntimeException - If the value is NaN or the element position is less than 0 or greater than size()-1.
        +
        +
        +
      • +
      • +
        +

        get

        +
        public abstract double get(long pos)
        +
        Retrieves the value of a tensor element at a given position. All tensor operations use this function to wrap + element retrieval.
        +
        +
        Parameters:
        +
        pos - The position of the tensor element
        +
        Returns:
        +
        The value of the tensor element
        +
        Throws:
        +
        RuntimeException - If the element position is less than 0 or greater than size()-1.
        +
        +
        +
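The put/get contract above (bounds checks, NaN rejection, put returning the tensor for chaining) can be sketched over a plain array; SafeTensor is a hypothetical illustration, not JGNN's actual DenseTensor.

```java
// Hypothetical dense-tensor sketch mirroring the documented contract:
// positions outside [0, size-1] and NaN assignments raise a
// RuntimeException, and put returns this to allow chaining.
final class SafeTensor {
    private final double[] values;

    SafeTensor(int size) { values = new double[size]; }

    SafeTensor put(int pos, double value) {
        if (pos < 0 || pos >= values.length || Double.isNaN(value))
            throw new RuntimeException("Invalid put at position " + pos);
        values[pos] = value;
        return this; // enables chained assignments
    }

    double get(int pos) {
        if (pos < 0 || pos >= values.length)
            throw new RuntimeException("Invalid get at position " + pos);
        return values[pos];
    }
}
```

Routing every operation through two such wrappers is what lets subclasses (sparse, view, vectorized tensors) change the storage strategy without touching the arithmetic built on top.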
      • +
      • +
        +

        putAdd

        +
        public Tensor putAdd(long pos, + double value)
        +
        Add a value to a tensor element.
        +
        +
        Parameters:
        +
        pos - The position of the tensor element
        +
        value - The value to assign
        +
        Returns:
        +
        this Tensor instance.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        size

        +
        public long size()
        +
        +
        Returns:
        +
        The number of tensor elements
        +
        +
        +
      • +
      • +
        +

        assertSize

        +
        public void assertSize(long size)
        +
        Asserts that the tensor's size() matches the given size.
        +
        +
        Parameters:
        +
        size - The size the tensor should match
        +
        Throws:
        +
        RuntimeException - if the tensor does not match the given size
        +
        +
        +
      • +
      • +
        +

        assertMatching

        +
        public Tensor assertMatching(Tensor other)
        +
        Asserts that the tensor's dimensions match those of another tensor. This check can be made + more complex by derived classes, but for a base Tensor instance it is equivalent to assertSize(long). + This method calls isMatching(Tensor) to compare the tensors and throws an exception + if it returns false.
        +
        +
        Parameters:
        +
        other - The other tensor to compare with.
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
      • +
      • +
        +

        zeroCopy

        +
        public Tensor zeroCopy()
        +
        Creates a tensor of the same class with the same size and all elements set to zero.
        +
        +
        Returns:
        +
        A tensor with the same size.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        zeroCopy

        +
        public Tensor zeroCopy(Tensor prototype)
        +
        Creates a tensor of the same class and all elements set to zero, + but size and dimension names are obtained from a prototype tensor.
        +
        +
      • +
      • +
        +

        setDimensionName

        +
        public Tensor setDimensionName(Tensor other)
        +
        Fills in dimension names based on an example isMatching(mklab.JGNN.core.Tensor) tensor. This appropriately fills in dimension + names of inherited classes too, such as matrices. Effectively, this method automatically infers + dimension names during operations.
        +
        +
        Parameters:
        +
        other - The tensor from which to retrieve dimension names.
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
      • +
      • +
        +

        zeroCopy

        +
        public abstract Tensor zeroCopy(long size)
        +
        Creates a tensor of the same class with a given size and all elements set to zero.
        +
        +
        Parameters:
        +
        size - The size of the new tensor.
        +
        Returns:
        +
        A new tensor.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        iterator

        +
        public Iterator<Long> iterator()
        +
        +
        Specified by:
        +
        iterator in interface Iterable<Long>
        +
        +
        +
      • +
      • +
        +

        getNonZeroElements

        +
        public Iterable<Long> getNonZeroElements()
        +
        Retrieves an iterable that wraps traverseNonZeroElements(). + For the time being, this is returned by implementing Iterable, + but this only serves the practical purpose of avoiding the instantiation of + a new object when many tensors are used.
        +
        +
        Returns:
        +
        An iterable of tensor positions.
        +
        +
        +
      • +
      • +
        +

        estimateNumNonZeroElements

        +
        public long estimateNumNonZeroElements()
        +
        Provides an estimate of the number of non-zero elements stored in the tensor, + where this number is equal to the size for dense tensors, but equal to the actual + number of non-zero elements for sparse tensors. + In effect, this quantity is proportional to the allocated memory.
        +
        +
        Returns:
        +
        A long number equal to or less to the tensor size.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        density

        +
        public double density()
        +
        Provides the memory allocation density of getNonZeroElements() + compared to the size of the tensor. A value of 1 indicates fully dense tensors, + and lower values indicate sparser data.
        +
        +
        Returns:
        +
        A double in the range [0,1].
        +
        +
        +
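The density computation is just the ratio described above; a one-line sketch (Density is a hypothetical helper, not JGNN code):

```java
// Density as documented: stored (potentially non-zero) elements
// divided by tensor size, so 1 means fully dense storage.
final class Density {
    static double density(long numNonZero, long size) {
        return numNonZero / (double) size;
    }
}
```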
      • +
      • +
        +

        traverseNonZeroElements

        +
        public abstract Iterator<Long> traverseNonZeroElements()
        +
        Retrieves positions within the tensor that may hold non-zero elements. + This guarantees that all non-zero element positions are traversed, + but some of the returned positions could hold zero elements. + For example, DenseTensor traverses all + of its elements this way, whereas SparseTensor + indeed traverses only non-zero elements.
        +
        +
        Returns:
        +
        An iterator that traverses positions within the tensor.
        +
        +
        +
      • +
      • +
        +

        copy

        +
        public Tensor copy()
        +
        Creates a zeroCopy() and transfers to it all potentially non-zero element values.
        +
        +
        Returns:
        +
        a copy of the Tensor with the same size and contents
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        assign

        +
        public Tensor assign(Tensor tensor)
        +
        Performs a sparse assignment.
        +
        +
        Parameters:
        +
        tensor - The tensor whose elements to copy (it's not affected).
        +
        Returns:
        +
        this Tensor instance.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        add

        +
        public Tensor add(Tensor tensor)
        +
        +
        Parameters:
        +
        tensor - The tensor to add with
        +
        Returns:
        +
        a new Tensor that stores the outcome of addition
        +
        +
        +
      • +
      • +
        +

        add

        +
        public Tensor add(double value)
        +
        +
        Parameters:
        +
        value - The value to add to each element
        +
        Returns:
        +
        a new Tensor that stores the outcome of addition
        +
        +
        +
      • +
      • +
        +

        selfAdd

        +
        public Tensor selfAdd(Tensor tensor)
        +
        Performs in-memory addition to the Tensor, storing the result in itself.
        +
        +
        Parameters:
        +
        tensor - The tensor to add (it's not affected).
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
      • +
      • +
        +

        selfAdd

        +
        public Tensor selfAdd(Tensor tensor, + double weight)
        +
        Performs in-memory weighted addition to the Tensor, storing the result in itself.
        +
        +
        Parameters:
        +
        tensor - The tensor to add (it's not affected).
        +
        weight - The weight to multiply the added tensor's elements with during addition.
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
      • +
      • +
        +

        selfAdd

        +
        public Tensor selfAdd(double value)
        +
        Performs in-memory addition to the Tensor, storing the result in itself.
        +
        +
        Parameters:
        +
        value - The value to add to each tensor element.
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
      • +
      • +
        +

        subtract

        +
        public Tensor subtract(Tensor tensor)
        +
        +
        Parameters:
        +
        tensor - The tensor to subtract
        +
        Returns:
        +
        a new Tensor that stores the outcome of subtraction
        +
        +
        +
      • +
      • +
        +

        selfSubtract

        +
        public Tensor selfSubtract(Tensor tensor)
        +
        Performs in-memory subtraction from the Tensor, storing the result in itself.
        +
        +
        Parameters:
        +
        tensor - The tensor to subtract (it's not affected).
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
      • +
      • +
        +

        multiply

        +
        public Tensor multiply(Tensor tensor)
        +
        +
        Parameters:
        +
        tensor - The tensor to perform element-wise multiplication with.
        +
        Returns:
        +
        A new Tensor that stores the outcome of the multiplication.
        +
        +
        +
      • +
      • +
        +

        selfMultiply

        +
        public Tensor selfMultiply(Tensor tensor)
        +
        Performs in-memory multiplication on the Tensor, storing the result in itself.
        +
        +
        Parameters:
        +
        tensor - The tensor to perform element-wise multiplication with (it's not affected).
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
      • +
      • +
        +

        multiply

        +
        public Tensor multiply(double value)
        +
        +
        Parameters:
        +
        value - A number to multiply all tensor elements with.
        +
        Returns:
        +
        A new Tensor that stores the outcome of the multiplication.
        +
        +
        +
      • +
      • +
        +

        selfMultiply

        +
        public Tensor selfMultiply(double value)
        +
        Performs in-memory multiplication on the Tensor, storing the result to itself.
        +
        +
        Parameters:
        +
        value - A number to multiply all tensor elements with.
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
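        To illustrate how the out-of-place and in-place variants above differ, here is a minimal sketch that chains them; the size-based DenseTensor constructor used for setup is an assumption, so treat the setup line as illustrative:

```java
import mklab.JGNN.core.Tensor;
import mklab.JGNN.core.tensor.DenseTensor;

public class ArithmeticSketch {
    public static void main(String[] args) {
        // assumed constructor: a dense tensor with 3 elements
        Tensor x = new DenseTensor(3).setToOnes();  // [1, 1, 1]
        Tensor y = x.add(2);                        // new tensor [3, 3, 3]; x is unchanged
        y.selfMultiply(0.5);                        // in place: y becomes [1.5, 1.5, 1.5]
        x.selfAdd(y, 2);                            // weighted in-place add: x = x + 2*y = [4, 4, 4]
        System.out.println(x.sum());
    }
}
```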
      • +
      • +
        +

        sqrt

        +
        public Tensor sqrt()
        +
        Computes the square root of tensor elements.
        +
        +
        Returns:
        +
        A new Tensor that stores the outcome of finding the square root of the absolute value of each element.
        +
        +
        +
      • +
      • +
        +

        selfSqrt

        +
        public Tensor selfSqrt()
        +
        Performs in-memory set of each element to the square root of its absolute value.
        +
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
      • +
      • +
        +

        expMinusOne

        +
        public Tensor expMinusOne()
        +
        Computes the exponential minus 1 of tensor elements.
        +
        +
        Returns:
        +
        A new Tensor that stores the outcome of applying the operation to each element.
        +
        +
        +
      • +
      • +
        +

        selfExpMinusOne

        +
        public Tensor selfExpMinusOne()
        +
        Sets each tensor element to its exponential minus 1.
        +
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
      • +
      • +
        +

        log

        +
        public Tensor log()
        +
        Computes the logarithm of tensor elements.
        +
        +
        Returns:
        +
        A new Tensor that stores the outcome of finding the logarithm of the absolute value of each element.
        +
        +
        +
      • +
      • +
        +

        selfLog

        +
        public Tensor selfLog()
        +
        Performs in-memory set of each element to the logarithm of its absolute value.
        +
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
      • +
      • +
        +

        negative

        +
        public Tensor negative()
        +
        Computes the negative of tensor elements.
        +
        +
        Returns:
        +
        A new Tensor that stores the outcome of finding the negative of each element.
        +
        +
        +
      • +
      • +
        +

        selfNegative

        +
        public Tensor selfNegative()
        +
        Performs in-memory set of each element to the negative of itself.
        +
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
      • +
      • +
        +

        abs

        +
        public Tensor abs()
        +
        Computes the absolute value of tensor elements.
        +
        +
        Returns:
        +
        A new Tensor that stores the outcome of finding the absolute value of each element.
        +
        +
        +
      • +
      • +
        +

        selfAbs

        +
        public Tensor selfAbs()
        +
        Performs in-memory set of each element to its absolute value.
        +
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
      • +
      • +
        +

        inverse

        +
        public Tensor inverse()
        +
        +
        Returns:
        +
        A new Tensor in which each non-zero element is inverted.
        +
        +
        +
      • +
      • +
        +

        selfInverse

        +
        public Tensor selfInverse()
        +
        Performs in-memory inversion of each non-zero element.
        +
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
      • +
      • +
        +

        dot

        +
        public double dot(Tensor tensor)
        +
        Performs the dot product between this and another tensor.
        +
        +
        Parameters:
        +
        tensor - The tensor with which to find the product.
        +
        Returns:
        +
        The dot product between the tensors.
        +
        +
        +
      • +
      • +
        +

        dot

        +
        public double dot(Tensor tensor1, + Tensor tensor2)
        +
        Performs the triple dot product between this and two other tensors.
        +
        +
        Parameters:
        +
        tensor1 - The first other tensor with which to find the product.
        +
        tensor2 - The second other tensor with which to find the product.
        +
        Returns:
        +
        The triple dot product between the tensors.
        +
        +
        +
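        As a quick sanity check of the reductions documented above (all methods are taken from this page; the size-based DenseTensor constructor is assumed):

```java
import mklab.JGNN.core.Tensor;
import mklab.JGNN.core.tensor.DenseTensor;

public class ReductionSketch {
    public static void main(String[] args) {
        Tensor a = new DenseTensor(4).setToOnes();  // [1, 1, 1, 1]
        System.out.println(a.dot(a));               // 1*1 + 1*1 + 1*1 + 1*1
        System.out.println(a.norm());               // L2 norm: sqrt(dot(a, a))
        System.out.println(a.sum());                // sum of elements
    }
}
```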
      • +
      • +
        +

        norm

        +
        public double norm()
        +
        +
        Returns:
        +
        The L2 norm of the tensor
        +
        +
        +
      • +
      • +
        +

        sum

        +
        public double sum()
        +
        +
        Returns:
        +
        The sum of tensor elements
        +
        +
        +
      • +
      • +
        +

        accessSubtensor

        +
        public Tensor accessSubtensor(long from)
        +
        Wraps a range of elements within a tensor + without allocating memory anew. Editing the returned + tensor also affects the original one and vice versa. + The element at the starting position is accessed + at position 0 of the resulting tensor.
        +
        +
        Parameters:
        +
        from - The starting position of the subtensor, which extends to the end of the tensor.
        +
        Returns:
        +
        An AccessSubtensor.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        accessSubtensor

        +
        public Tensor accessSubtensor(long from, + long to)
        +
        Wraps a range of elements within a tensor + without allocating memory anew. Editing the returned + tensor also affects the original one and vice versa. + The element at the starting position is accessed + at position 0 of the resulting tensor. Accessing + stops just before the end position, + so that accessSubtensor(0, size()) is + a see-through copy of the original tensor.
        +
        +
        Parameters:
        +
        from - The starting position of the subtensor.
        +
        to - The end position of the subtensor that is not included.
        +
        Returns:
        +
        An AccessSubtensor.
        +
        See Also:
        +
        + +
        +
        +
        +
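        The see-through semantics can be sketched as follows; fromRange, accessSubtensor, and selfAdd are all documented on this page, so only the commented values are inferred from their described behavior:

```java
import mklab.JGNN.core.Tensor;

public class SubtensorSketch {
    public static void main(String[] args) {
        Tensor base = Tensor.fromRange(0, 5);      // [0, 1, 2, 3, 4]
        Tensor view = base.accessSubtensor(1, 4);  // zero-copy view of [1, 2, 3]
        view.selfAdd(10);                          // editing the view edits base too
        // base should now hold [0, 11, 12, 13, 4]
        System.out.println(base.sum());
    }
}
```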
      • +
      • +
        +

        max

        +
        public double max()
        +
        Computes the maximum tensor element. If the tensor has zero size(), + this returns Double.NEGATIVE_INFINITY.
        +
        +
        Returns:
        +
        The maximum tensor element
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        argmax

        +
        public long argmax()
        +
        Computes the position of the maximum tensor element. If the tensor has zero size(), + this returns -1.
        +
        +
        Returns:
        +
        The position of the maximum tensor element
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        min

        +
        public double min()
        +
        Computes the minimum tensor element. If the tensor has zero size(), + this returns Double.POSITIVE_INFINITY.
        +
        +
        Returns:
        +
        The minimum tensor element
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        argmin

        +
        public long argmin()
        +
        Computes the position of the minimum tensor element. If the tensor has zero size(), + this returns -1.
        +
        +
        Returns:
        +
        The position of the minimum tensor element
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        toString

        +
        public String toString()
        +
        A string serialization of the tensor that can be used by the constructor DenseTensor(String) to create an identical copy.
        +
        +
        Overrides:
        +
        toString in class Object
        +
        Returns:
        +
        A serialization of the tensor.
        +
        +
        +
      • +
      • +
        +

        normalized

        +
        public Tensor normalized()
        +
        +
        Returns:
        +
        A copy of the tensor on which L2 normalization has been performed.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        toProbability

        +
        public Tensor toProbability()
        +
        +
        Returns:
        +
        A copy of the tensor on which division with the sum has been performed + (if the tensor contains no negative elements, this is equivalent to L1 normalization)
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        setToNormalized

        +
        public Tensor setToNormalized()
        +
        L2-normalizes the tensor's elements. Does nothing if the norm() is zero.
        +
        +
        Returns:
        +
        this Tensor instance.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        setToProbability

        +
        public Tensor setToProbability()
        +
        Divides the tensor's elements with their sum. Does nothing if the sum() is zero.
        +
        +
        Returns:
        +
        this Tensor instance.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        setToUniform

        +
        public Tensor setToUniform()
        +
        Sets all tensor element values to 1/size().
        +
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
      • +
      • +
        +

        setToOnes

        +
        public Tensor setToOnes()
        +
        Sets all tensor element values to 1.
        +
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
      • +
      • +
        +

        setToZero

        +
        public Tensor setToZero()
        +
        Sets all tensor element values to 0.
        +
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
      • +
      • +
        +

        toArray

        +
        public double[] toArray()
        +
        Retrieves a representation of the Tensor as an array of doubles.
        +
        +
        Returns:
        +
        An array of doubles
        +
        +
        +
      • +
      • +
        +

        fromDouble

        +
        public static Tensor fromDouble(double value)
        +
        Converts a given value to a JGNN-compatible dense tensor.
        +
        +
        Parameters:
        +
        value - A given value.
        +
        Returns:
        +
        a Tensor holding the given value
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        fromRange

        +
        public static Tensor fromRange(long start, + long end)
        +
        Creates a dense tensor holding the desired range [start, start+1, ..., end-1]. + This allocates a new tensor.
        +
        +
        Parameters:
        +
        start - The start of the range.
        +
        end - The end of the range.
        +
        Returns:
        +
        A DenseTensor with size end-start
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        fromRange

        +
        public static Tensor fromRange(long end)
        +
        Creates a dense tensor holding the desired range [0, 1, ..., end-1]. + This allocates a new tensor.
        +
        +
        Parameters:
        +
        end - The end of the range.
        +
        Returns:
        +
        A DenseTensor with size end
        +
        See Also:
        +
        + +
        +
        +
        +
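        For example, the range factories combine naturally with the reduction methods above (a sketch; the commented values follow from the documented semantics):

```java
import mklab.JGNN.core.Tensor;

public class RangeSketch {
    public static void main(String[] args) {
        Tensor range = Tensor.fromRange(5);  // [0, 1, 2, 3, 4]
        System.out.println(range.sum());     // 0+1+2+3+4
        System.out.println(range.max());     // the largest element
        System.out.println(range.argmax());  // the position holding it
    }
}
```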
      • +
      • +
        +

        toDouble

        +
        public double toDouble()
        +
        Converts a tensor of size()==1 to double. Throws an exception otherwise.
        +
        +
        Returns:
        +
        A double.
        +
        Throws:
        +
        RuntimeException - If the tensor is not of size 1.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        asColumn

        +
        public WrapCols asColumn()
        +
        Accesses the tensor through a single-column matrix with the tensor as the only column. + Editing the returned matrix also edits the original tensor. + No new memory is allocated for tensor values.
        +
        +
        Returns:
        +
        A WrapCols instance.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        asRow

        +
        public WrapRows asRow()
        +
        Accesses the tensor through a single-row matrix with the tensor as the only row. + Editing the returned matrix also edits the original tensor. + No new memory is allocated for tensor values.
        +
        +
        Returns:
        +
        A WrapRows instance.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        describe

        +
        public String describe()
        +
        Describes the type, size and other characteristics of the tensor.
        +
        +
        Returns:
        +
        A String description.
        +
        +
        +
      • +
      • +
        +

        cast

        +
        public <Type> Type cast(Class<Type> type)
        +
        Performs the equivalent of Java's typecasting that fits + in functional interfaces.
        +
        +
        Type Parameters:
        +
        Type - The automatically inferred type of the class.
        +
        Parameters:
        +
        type - The class to cast to.
        +
        Returns:
        +
        this Tensor instance typecast to the given type.
        +
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/ThreadPool.html b/docs/javadoc/mklab/JGNN/core/ThreadPool.html new file mode 100644 index 00000000..6a574b06 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/ThreadPool.html @@ -0,0 +1,208 @@ + + + + +ThreadPool + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class ThreadPool

    +
    +
    java.lang.Object +
    mklab.JGNN.core.ThreadPool
    +
    +
    +
    +
    public class ThreadPool +extends Object
    +
    This class provides thread execution pool utilities while keeping track of thread + identifiers for use by thread-specific NNOperation. + Thread scheduling relies on Java's ThreadPoolExecutor.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    +
      + +
    • +
      +

      Method Summary

      +
      +
      +
      +
      +
      Modifier and Type
      +
      Method
      +
      Description
      +
      static Integer
      + +
      +
      Retrieves a unique integer indicating the currently running thread.
      +
      +
      static ThreadPool
      + +
      +
      Retrieves the singleton ThreadPool instance used by JGNN.
      +
      +
      void
      +
      submit(Runnable runnable)
      +
      +
      Submits a runnable to be executed at some future point by a thread, + for example via ThreadPool.getInstance().submit(new Runnable(){public void run(){...}});.
      +
      +
      void
      + +
      +
      Waits until all threads in the pool have finished.
      +
      +
      +
      +
      +
      +

      Methods inherited from class java.lang.Object

      +equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
      +
      +
    • +
    +
    +
    +
      + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        getInstance

        +
        public static ThreadPool getInstance()
        +
        Retrieves the singleton ThreadPool instance used by JGNN.
        +
        +
        Returns:
        +
        A ThreadPool.
        +
        +
        +
      • +
      • +
        +

        submit

        +
        public void submit(Runnable runnable)
        +
        Submits a runnable to be executed at some future point by a thread, + for example via ThreadPool.getInstance().submit(new Runnable(){public void run(){...}});.
        +
        +
        Parameters:
        +
        runnable - A Java Runnable.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        getCurrentThreadId

        +
        public static Integer getCurrentThreadId()
        +
        Retrieves a unique integer indicating the currently running thread.
        +
        +
        Returns:
        +
        An integer id.
        +
        +
        +
      • +
      • +
        +

        waitForConclusion

        +
        public void waitForConclusion()
        +
        Waits until all threads in the pool have finished. This concludes only + if all submitted runnables conclude.
        +
        +
        See Also:
        +
        + +
        +
        +
        +
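        A typical submit-then-wait pattern, assembled from the methods documented above (a sketch; the counter-based work is illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;
import mklab.JGNN.core.ThreadPool;

public class PoolSketch {
    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger();
        for (int i = 0; i < 4; i++)
            ThreadPool.getInstance().submit(new Runnable() {
                public void run() {
                    // getCurrentThreadId() yields a unique id per pool thread
                    Integer id = ThreadPool.getCurrentThreadId();
                    counter.incrementAndGet();
                }
            });
        // blocks until all submitted runnables conclude
        ThreadPool.getInstance().waitForConclusion();
        System.out.println(counter.get());
    }
}
```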
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/class-use/Distribution.html b/docs/javadoc/mklab/JGNN/core/class-use/Distribution.html new file mode 100644 index 00000000..d9eb2028 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/class-use/Distribution.html @@ -0,0 +1,131 @@ + + + + +Uses of Interface mklab.JGNN.core.Distribution + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Interface
    mklab.JGNN.core.Distribution

    +
    +
    Packages that use Distribution
    +
    +
    Package
    +
    Description
    + +
     
    + +
     
    +
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/class-use/Matrix.html b/docs/javadoc/mklab/JGNN/core/class-use/Matrix.html new file mode 100644 index 00000000..5ab2171b --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/class-use/Matrix.html @@ -0,0 +1,557 @@ + + + + +Uses of Class mklab.JGNN.core.Matrix + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.core.Matrix

    +
    +
    Packages that use Matrix
    + +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/class-use/Memory.Scope.html b/docs/javadoc/mklab/JGNN/core/class-use/Memory.Scope.html new file mode 100644 index 00000000..a1e7875d --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/class-use/Memory.Scope.html @@ -0,0 +1,84 @@ + + + + +Uses of Class mklab.JGNN.core.Memory.Scope + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.core.Memory.Scope

    +
    +
    Packages that use Memory.Scope
    +
    +
    Package
    +
    Description
    + +
     
    +
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/class-use/Memory.html b/docs/javadoc/mklab/JGNN/core/class-use/Memory.html new file mode 100644 index 00000000..015e7767 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/class-use/Memory.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.core.Memory + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.core.Memory

    +
    +No usage of mklab.JGNN.core.Memory
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/class-use/Slice.html b/docs/javadoc/mklab/JGNN/core/class-use/Slice.html new file mode 100644 index 00000000..b64353a0 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/class-use/Slice.html @@ -0,0 +1,157 @@ + + + + +Uses of Class mklab.JGNN.core.Slice + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.core.Slice

    +
    +
    Packages that use Slice
    +
    +
    Package
    +
    Description
    + +
     
    + +
     
    + +
     
    +
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/class-use/Tensor.html b/docs/javadoc/mklab/JGNN/core/class-use/Tensor.html new file mode 100644 index 00000000..a321a400 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/class-use/Tensor.html @@ -0,0 +1,1587 @@ + + + + +Uses of Class mklab.JGNN.core.Tensor + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.core.Tensor

    +
    +
    Packages that use Tensor
    + +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/class-use/ThreadPool.html b/docs/javadoc/mklab/JGNN/core/class-use/ThreadPool.html new file mode 100644 index 00000000..1f62ace0 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/class-use/ThreadPool.html @@ -0,0 +1,86 @@ + + + + +Uses of Class mklab.JGNN.core.ThreadPool + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.core.ThreadPool

    +
    +
    Packages that use ThreadPool
    +
    +
    Package
    +
    Description
    + +
     
    +
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/distribution/Normal.html b/docs/javadoc/mklab/JGNN/core/distribution/Normal.html new file mode 100644 index 00000000..00292d1c --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/distribution/Normal.html @@ -0,0 +1,302 @@ + + + + +Normal + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class Normal

    +
    +
    java.lang.Object +
    mklab.JGNN.core.distribution.Normal
    +
    +
    +
    +
    All Implemented Interfaces:
    +
    Distribution
    +
    +
    +
    public class Normal +extends Object +implements Distribution
    +
    Implements a Normal Distribution of given mean and standard deviation.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    +
      + +
    • +
      +

      Constructor Summary

      +
      Constructors
      +
      +
      Constructor
      +
      Description
      + +
      +
      Instantiates a normal distribution with zero mean and standard deviation equal to 1.
      +
      +
      Normal(double mean, + double std)
      +
      +
      Instantiates a normal distribution with a given mean and standard deviation.
      +
      +
      +
      +
    • + +
    • +
      +

      Method Summary

      +
      +
      +
      +
      +
      Modifier and Type
      +
      Method
      +
      Description
      +
      double
      + +
      +
      Retrieves the distribution's standard deviation.
      +
      +
      double
      + +
      +
      Retrieves the distribution's mean.
      +
      +
      double
      + +
      +
      Retrieves a new sample from the distribution.
      +
      + +
      setDeviation(double std)
      +
      +
      Sets the standard deviation of the distribution.
      +
      + +
      setMean(double mean)
      +
      +
      Sets the mean of the distribution.
      +
      + +
      setSeed(long seed)
      +
      +
      Sets the distribution's seed.
      +
      +
      +
      +
      +
      +

      Methods inherited from class java.lang.Object

      +equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
      +
      +
    • +
    +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        Normal

        +
        public Normal()
        +
        Instantiates a normal distribution with zero mean and standard deviation equal to 1.
        +
        +
      • +
      • +
        +

        Normal

        +
        public Normal(double mean, + double std)
        +
        Instantiates a normal distribution with a given mean and standard deviation.
        +
        +
        Parameters:
        +
        mean - The distibution's mean is the distribution's mean.
        +
        std - The distribution's standard deviation.
        +
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        setSeed

        +
        public Normal setSeed(long seed)
        +
        Description copied from interface: Distribution
        +
        Sets the distribution's seed. This should yield reproducible sampling.
        +
        +
        Specified by:
        +
        setSeed in interface Distribution
        +
        Parameters:
        +
        seed - The distribution's new seed.
        +
        Returns:
        +
        this Distribution.
        +
        +
        +
      • +
      • +
        +

        setMean

        +
        public Normal setMean(double mean)
        +
        Description copied from interface: Distribution
        +
        Sets the mean of the distribution.
        +
        +
        Specified by:
        +
        setMean in interface Distribution
        +
        Parameters:
        +
        mean - The new mean.
        +
        Returns:
        +
        this Distribution.
        +
        +
        +
      • +
      • +
        +

        setDeviation

        +
        public Normal setDeviation(double std)
        +
        Description copied from interface: Distribution
        +
        Sets the standard deviation of the distribution.
        +
        +
        Specified by:
        +
        setDeviation in interface Distribution
        +
        Parameters:
        +
        std - The new standard deviation.
        +
        Returns:
        +
        this Distribution.
        +
        +
        +
      • +
      • +
        +

        getMean

        +
        public double getMean()
        +
        Description copied from interface: Distribution
        +
        Retrieves the distribution's mean.
        +
        +
        Specified by:
        +
        getMean in interface Distribution
        +
        Returns:
        +
        The mean value.
        +
        +
        +
      • +
      • +
        +

        getDeviation

        +
        public double getDeviation()
        +
        Description copied from interface: Distribution
        +
        Retrieves the distribution's standard deviation.
        +
        +
        Specified by:
        +
        getDeviation in interface Distribution
        +
        Returns:
        +
        The standard deviation.
        +
        +
        +
      • +
      • +
        +

        sample

        +
        public double sample()
        +
        Description copied from interface: Distribution
        +
        Retrieves a new sample from the distribution.
        +
        +
        Specified by:
        +
        sample in interface Distribution
        +
        Returns:
        +
        A double value.
        +
        +
        +
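        Putting the builder-style setters together (all methods used here are documented above; the usage is a sketch):

```java
import mklab.JGNN.core.distribution.Normal;

public class NormalSketch {
    public static void main(String[] args) {
        Normal dist = new Normal(0, 1)  // zero mean, unit standard deviation
                .setSeed(42);           // seeding should yield reproducible sampling
        double first = dist.sample();
        double again = new Normal(0, 1).setSeed(42).sample();
        // the same seed should yield the same first sample
        System.out.println(first == again);
    }
}
```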
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/distribution/Uniform.html b/docs/javadoc/mklab/JGNN/core/distribution/Uniform.html new file mode 100644 index 00000000..7c1922b5 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/distribution/Uniform.html @@ -0,0 +1,323 @@ + + + + +Uniform + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class Uniform

    +
    +
    java.lang.Object +
    mklab.JGNN.core.distribution.Uniform
    +
    +
    +
    +
    All Implemented Interfaces:
    +
    Distribution
    +
    +
    +
    public class Uniform +extends Object +implements Distribution
    +
    Implements a uniform distribution with given bounds.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
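    The documented behavior of this class (range sampling, chainable setters returning this, reproducible seeding) can be illustrated with a plain-Java sketch. This is not JGNN code; the class name UniformSketch and its internals are hypothetical, written only to mirror the semantics described in this page.

    ```java
    import java.util.Random;

    // Hypothetical sketch mirroring the documented Uniform semantics.
    // Not JGNN code; names and internals are illustrative only.
    class UniformSketch {
        private double from = 0, to = 1;
        private Random rng = new Random();

        // Chainable setter, like the documented setters that return `this`.
        public UniformSketch setRange(double from, double to) {
            this.from = from;
            this.to = to;
            return this;
        }

        // Reseeding the generator yields reproducible sampling.
        public UniformSketch setSeed(long seed) {
            rng = new Random(seed);
            return this;
        }

        // Mean of a uniform distribution on [from, to].
        public double getMean() {
            return (from + to) / 2;
        }

        // Draws a value in the configured range.
        public double sample() {
            return from + rng.nextDouble() * (to - from);
        }
    }
    ```

    Usage follows the same chained style as the documented API, e.g. `new UniformSketch().setRange(2, 5).setSeed(42).sample()`.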
    +
      + +
    • +
      +

      Constructor Summary

      +
      Constructors
      +
      +
      Constructor
      +
      Description
      + +
      +
      Instantiates a uniform distribution that samples values from the range [0,1].
      +
      +
      Uniform(double from, + double to)
      +
      +
      Instantiates a uniform distribution that samples values from the given range [from, to].
      +
      +
      +
      +
    • + +
    • +
      +

      Method Summary

      +
      +
      +
      +
      +
      Modifier and Type
      +
      Method
      +
      Description
      +
      double
      + +
      +
      Retrieves the distribution's standard deviation.
      +
      +
      double
      + +
      +
      Retrieves the distribution's mean.
      +
      +
      double
      + +
      +
      Retrieves a new sample from the distribution.
      +
      + +
      setDeviation(double std)
      +
      +
      Sets the standard deviation of the distribution.
      +
      + +
      setMean(double mean)
      +
      +
      Sets the mean of the distribution.
      +
      + +
      setRange(double from, + double to)
      +
      +
      Sets the range of the uniform distribution.
      +
      + +
      setSeed(long seed)
      +
      +
      Sets the distribution's seed.
      +
      +
      +
      +
      +
      +

      Methods inherited from class java.lang.Object

      +equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
      +
      +
    • +
    +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        Uniform

        +
        public Uniform()
        +
        Instantiates a uniform distribution that samples values from the range [0,1].
        +
        +
      • +
      • +
        +

        Uniform

        +
        public Uniform(double from, + double to)
        +
        Instantiates a uniform distribution that samples values from the given range [from, to].
        +
        +
        Parameters:
        +
        from - The minimum value of the distribution.
        +
        to - The maximum value of the distribution.
        +
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        setRange

        +
        public Uniform setRange(double from, + double to)
        +
        Sets the range of the uniform distribution.
        +
        +
        Parameters:
        +
        from - The range's start.
        +
        to - The range's end.
        +
        Returns:
        +
        this Distribution.
        +
        +
        +
      • +
      • +
        +

        setSeed

        +
        public Uniform setSeed(long seed)
        +
        Description copied from interface: Distribution
        +
        Sets the distribution's seed. This should yield reproducible sampling.
        +
        +
        Specified by:
        +
        setSeed in interface Distribution
        +
        Parameters:
        +
        seed - The distribution's new seed.
        +
        Returns:
        +
        this Distribution.
        +
        +
        +
      • +
      • +
        +

        setMean

        +
        public Uniform setMean(double mean)
        +
        Description copied from interface: Distribution
        +
        Sets the mean of the distribution.
        +
        +
        Specified by:
        +
        setMean in interface Distribution
        +
        Parameters:
        +
        mean - The new mean.
        +
        Returns:
        +
        this Distribution.
        +
        +
        +
      • +
      • +
        +

        setDeviation

        +
        public Uniform setDeviation(double std)
        +
        Description copied from interface: Distribution
        +
        Sets the standard deviation of the distribution.
        +
        +
        Specified by:
        +
        setDeviation in interface Distribution
        +
        Parameters:
        +
        std - The new standard deviation.
        +
        Returns:
        +
        this Distribution.
        +
        +
        +
      • +
      • +
        +

        getMean

        +
        public double getMean()
        +
        Description copied from interface: Distribution
        +
        Retrieves the distribution's mean.
        +
        +
        Specified by:
        +
        getMean in interface Distribution
        +
        Returns:
        +
        The mean value.
        +
        +
        +
      • +
      • +
        +

        getDeviation

        +
        public double getDeviation()
        +
        Description copied from interface: Distribution
        +
        Retrieves the distribution's standard deviation.
        +
        +
        Specified by:
        +
        getDeviation in interface Distribution
        +
        Returns:
        +
        The standard deviation.
        +
        +
        +
      • +
      • +
        +

        sample

        +
        public double sample()
        +
        Description copied from interface: Distribution
        +
        Retrieves a new sample from the distribution.
        +
        +
        Specified by:
        +
        sample in interface Distribution
        +
        Returns:
        +
        A double value.
        +
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/distribution/class-use/Normal.html b/docs/javadoc/mklab/JGNN/core/distribution/class-use/Normal.html new file mode 100644 index 00000000..8feaee9d --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/distribution/class-use/Normal.html @@ -0,0 +1,90 @@ + + + + +Uses of Class mklab.JGNN.core.distribution.Normal + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.core.distribution.Normal

    +
    +
    Packages that use Normal
    +
    +
    Package
    +
    Description
    + +
     
    +
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/distribution/class-use/Uniform.html b/docs/javadoc/mklab/JGNN/core/distribution/class-use/Uniform.html new file mode 100644 index 00000000..1f703ad3 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/distribution/class-use/Uniform.html @@ -0,0 +1,96 @@ + + + + +Uses of Class mklab.JGNN.core.distribution.Uniform + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.core.distribution.Uniform

    +
    +
    Packages that use Uniform
    +
    +
    Package
    +
    Description
    + +
     
    +
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/distribution/package-summary.html b/docs/javadoc/mklab/JGNN/core/distribution/package-summary.html new file mode 100644 index 00000000..189f1c61 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/distribution/package-summary.html @@ -0,0 +1,109 @@ + + + + +mklab.JGNN.core.distribution + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Package mklab.JGNN.core.distribution

    +
    +
    +
    package mklab.JGNN.core.distribution
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/distribution/package-tree.html b/docs/javadoc/mklab/JGNN/core/distribution/package-tree.html new file mode 100644 index 00000000..165fe733 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/distribution/package-tree.html @@ -0,0 +1,74 @@ + + + + +mklab.JGNN.core.distribution Class Hierarchy + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Hierarchy For Package mklab.JGNN.core.distribution

    +Package Hierarchies: + +
    +
    +

    Class Hierarchy

    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/distribution/package-use.html b/docs/javadoc/mklab/JGNN/core/distribution/package-use.html new file mode 100644 index 00000000..0e4af0ce --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/distribution/package-use.html @@ -0,0 +1,87 @@ + + + + +Uses of Package mklab.JGNN.core.distribution + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Package
    mklab.JGNN.core.distribution

    +
    + +
    +
    Package
    +
    Description
    + +
     
    +
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/empy/EmptyMatrix.html b/docs/javadoc/mklab/JGNN/core/empy/EmptyMatrix.html new file mode 100644 index 00000000..03b916be --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/empy/EmptyMatrix.html @@ -0,0 +1,363 @@ + + + + +EmptyMatrix + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class EmptyMatrix

    +
    +
    java.lang.Object +
    mklab.JGNN.core.Tensor +
    mklab.JGNN.core.Matrix +
    mklab.JGNN.core.empy.EmptyMatrix
    +
    +
    +
    +
    +
    +
    All Implemented Interfaces:
    +
    Iterable<Long>
    +
    +
    +
    public class EmptyMatrix +extends Matrix
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        EmptyMatrix

        +
        public EmptyMatrix(long rows, + long cols)
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        getNonZeroEntries

        +
        public Iterable<Map.Entry<Long,Long>> getNonZeroEntries()
        +
        Description copied from class: Matrix
        +
        Retrieves an iterable that traverses (row, col) entry pairs + of non-zero entries.
        +
        +
        Specified by:
        +
        getNonZeroEntries in class Matrix
        +
        Returns:
        +
        An Entry iterable.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        zeroCopy

        +
        public Matrix zeroCopy(long rows, + long cols)
        +
        Description copied from class: Matrix
        +
        Creates a matrix of the same class with all elements set to zero, but with + a given number of rows and columns.
        +
        +
        Specified by:
        +
        zeroCopy in class Matrix
        +
        Parameters:
        +
        rows - The number of rows of the matrix.
        +
        cols - The number of columns of the matrix.
        +
        Returns:
        +
        A Matrix of the same class.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        release

        +
        public void release()
        +
        Description copied from class: Tensor
        +
        If the subclassed tensor allows it, releases all memory it takes up + so that the garbage collector will eventually clean it up. This + memory will be released anyway by Java once there are no more + references to the object.
        +
        +
        Specified by:
        +
        release in class Tensor
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        persist

        +
        public void persist()
        +
        Description copied from class: Tensor
        +
        If supported by the subclassed tensor, invalidates calls to + Tensor.release() so that memory is de-allocated only when + object references expire.
        +
        +
        Specified by:
        +
        persist in class Tensor
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        put

        +
        public Tensor put(long pos, + double value)
        +
        Description copied from class: Tensor
        +
        Assigns a value to a tensor element. All tensor operations use this function to wrap + element assignments.
        +
        +
        Specified by:
        +
        put in class Tensor
        +
        Parameters:
        +
        pos - The position of the tensor element
        +
        value - The value to assign
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
      • +
      • +
        +

        get

        +
        public double get(long pos)
        +
        Description copied from class: Tensor
        +
        Retrieves the value of a tensor element at a given position. All tensor operations use this function to wrap + element retrieval.
        +
        +
        Specified by:
        +
        get in class Tensor
        +
        Parameters:
        +
        pos - The position of the tensor element
        +
        Returns:
        +
        The value of the tensor element
        +
        +
        +
      • +
      • +
        +

        traverseNonZeroElements

        +
        public Iterator<Long> traverseNonZeroElements()
        +
        Description copied from class: Tensor
        +
        Retrieves positions within the tensor that may hold non-zero elements. + This guarantees that all non-zero element positions are traversed, + but some of the returned positions could hold zero elements. + For example, DenseTensor traverses all + of its elements this way, whereas SparseTensor + traverses only non-zero elements.
        +
        +
        Specified by:
        +
        traverseNonZeroElements in class Tensor
        +
        Returns:
        +
        An iterator that traverses positions within the tensor.
        +
        +
        +
      • +
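    The dense-versus-sparse traversal contract described above can be illustrated with a plain-Java sketch. This is not JGNN code; the class and method names are hypothetical, and it only demonstrates the documented guarantee that every non-zero position is visited, while a dense traversal may also visit zero-valued positions.

    ```java
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Hypothetical illustration of the documented traversal contract.
    class TraversalSketch {
        // Dense-style traversal: report every position 0..size-1,
        // even positions that happen to hold zeros.
        static List<Long> densePositions(long size) {
            List<Long> out = new ArrayList<>();
            for (long i = 0; i < size; i++)
                out.add(i);
            return out;
        }

        // Sparse-style traversal: report only positions that are
        // actually stored, i.e. the non-zero entries.
        static List<Long> sparsePositions(Map<Long, Double> stored) {
            return new ArrayList<>(stored.keySet());
        }
    }
    ```

    Both traversals satisfy the contract: any position holding a non-zero value appears in the result, but the dense variant may return extra zero-valued positions.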
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/empy/EmptyTensor.html b/docs/javadoc/mklab/JGNN/core/empy/EmptyTensor.html new file mode 100644 index 00000000..f07a0315 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/empy/EmptyTensor.html @@ -0,0 +1,332 @@ + + + + +EmptyTensor + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class EmptyTensor

    +
    +
    java.lang.Object +
    mklab.JGNN.core.Tensor +
    mklab.JGNN.core.empy.EmptyTensor
    +
    +
    +
    +
    +
    All Implemented Interfaces:
    +
    Iterable<Long>
    +
    +
    +
    public class EmptyTensor +extends Tensor
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        EmptyTensor

        +
        public EmptyTensor()
        +
        +
      • +
      • +
        +

        EmptyTensor

        +
        public EmptyTensor(long size)
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        release

        +
        public void release()
        +
        Description copied from class: Tensor
        +
        If the subclassed tensor allows it, releases all memory it takes up + so that the garbage collector will eventually clean it up. This + memory will be released anyway by Java once there are no more + references to the object.
        +
        +
        Specified by:
        +
        release in class Tensor
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        persist

        +
        public void persist()
        +
        Description copied from class: Tensor
        +
        If supported by the subclassed tensor, invalidates calls to + Tensor.release() so that memory is de-allocated only when + object references expire.
        +
        +
        Specified by:
        +
        persist in class Tensor
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        put

        +
        public Tensor put(long pos, + double value)
        +
        Description copied from class: Tensor
        +
        Assigns a value to a tensor element. All tensor operations use this function to wrap + element assignments.
        +
        +
        Specified by:
        +
        put in class Tensor
        +
        Parameters:
        +
        pos - The position of the tensor element
        +
        value - The value to assign
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
      • +
      • +
        +

        get

        +
        public double get(long pos)
        +
        Description copied from class: Tensor
        +
        Retrieves the value of a tensor element at a given position. All tensor operations use this function to wrap + element retrieval.
        +
        +
        Specified by:
        +
        get in class Tensor
        +
        Parameters:
        +
        pos - The position of the tensor element
        +
        Returns:
        +
        The value of the tensor element
        +
        +
        +
      • +
      • +
        +

        zeroCopy

        +
        public Tensor zeroCopy(long size)
        +
        Description copied from class: Tensor
        +
        Creates a tensor of the same class with a given size and all elements set to zero.
        +
        +
        Specified by:
        +
        zeroCopy in class Tensor
        +
        Parameters:
        +
        size - The size of the new tensor.
        +
        Returns:
        +
        A new tensor.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        traverseNonZeroElements

        +
        public Iterator<Long> traverseNonZeroElements()
        +
        Description copied from class: Tensor
        +
        Retrieves positions within the tensor that may hold non-zero elements. + This guarantees that all non-zero element positions are traversed, + but some of the returned positions could hold zero elements. + For example, DenseTensor traverses all + of its elements this way, whereas SparseTensor + traverses only non-zero elements.
        +
        +
        Specified by:
        +
        traverseNonZeroElements in class Tensor
        +
        Returns:
        +
        An iterator that traverses positions within the tensor.
        +
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/empy/class-use/EmptyMatrix.html b/docs/javadoc/mklab/JGNN/core/empy/class-use/EmptyMatrix.html new file mode 100644 index 00000000..f5abf933 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/empy/class-use/EmptyMatrix.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.core.empy.EmptyMatrix + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.core.empy.EmptyMatrix

    +
    +No usage of mklab.JGNN.core.empy.EmptyMatrix
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/empy/class-use/EmptyTensor.html b/docs/javadoc/mklab/JGNN/core/empy/class-use/EmptyTensor.html new file mode 100644 index 00000000..813b981d --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/empy/class-use/EmptyTensor.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.core.empy.EmptyTensor + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.core.empy.EmptyTensor

    +
    +No usage of mklab.JGNN.core.empy.EmptyTensor
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/empy/package-summary.html b/docs/javadoc/mklab/JGNN/core/empy/package-summary.html new file mode 100644 index 00000000..013f4f41 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/empy/package-summary.html @@ -0,0 +1,105 @@ + + + + +mklab.JGNN.core.empy + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Package mklab.JGNN.core.empy

    +
    +
    +
    package mklab.JGNN.core.empy
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/empy/package-tree.html b/docs/javadoc/mklab/JGNN/core/empy/package-tree.html new file mode 100644 index 00000000..955d7126 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/empy/package-tree.html @@ -0,0 +1,82 @@ + + + + +mklab.JGNN.core.empy Class Hierarchy + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Hierarchy For Package mklab.JGNN.core.empy

    +Package Hierarchies: + +
    +
    +

    Class Hierarchy

    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/empy/package-use.html b/docs/javadoc/mklab/JGNN/core/empy/package-use.html new file mode 100644 index 00000000..25c0b7e8 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/empy/package-use.html @@ -0,0 +1,59 @@ + + + + +Uses of Package mklab.JGNN.core.empy + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Package
    mklab.JGNN.core.empy

    +
    +No usage of mklab.JGNN.core.empy
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/matrix/AccessCol.html b/docs/javadoc/mklab/JGNN/core/matrix/AccessCol.html new file mode 100644 index 00000000..c3248977 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/matrix/AccessCol.html @@ -0,0 +1,378 @@ + + + + +AccessCol + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class AccessCol

    +
    +
    java.lang.Object +
    mklab.JGNN.core.Tensor +
    mklab.JGNN.core.matrix.AccessCol
    +
    +
    +
    +
    +
    All Implemented Interfaces:
    +
    Iterable<Long>
    +
    +
    +
    public class AccessCol +extends Tensor
    +
    Accesses a column of a Matrix as if it were a dense Tensor. + Prefer using Matrix.accessCol(long), which wraps usage + of this class. Instances of this class share elements with the matrix which + they access and do not allocate new memory.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    See Also:
    +
    + +
    +
    +
    +
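    The see-through semantics described above (the view shares elements with the base matrix and allocates no new memory) can be sketched in plain Java. This is not JGNN code; ColumnView over a row-major double[][] is a hypothetical stand-in for AccessCol over a Matrix.

    ```java
    // Hypothetical see-through column view over a row-major double[][],
    // mirroring the documented AccessCol semantics: reads and writes go
    // directly to the base matrix, and no element storage is allocated.
    class ColumnView {
        private final double[][] base;
        private final int col;

        ColumnView(double[][] base, int col) {
            this.base = base;
            this.col = col;
        }

        // Reads through to the base matrix.
        double get(int pos) {
            return base[pos][col];
        }

        // Writes through to the base matrix.
        void put(int pos, double value) {
            base[pos][col] = value;
        }

        int size() {
            return base.length;
        }
    }
    ```

    Because the view holds only a reference to the base matrix and a column index, mutating the view mutates the matrix, and vice versa.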
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        AccessCol

        +
        public AccessCol(Matrix matrix, + long col)
        +
        Instantiates a see-through access of a matrix column.
        +
        +
        Parameters:
        +
        matrix - The base matrix.
        +
        col - Which column to access.
        +
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        estimateNumNonZeroElements

        +
        public long estimateNumNonZeroElements()
        +
        Description copied from class: Tensor
        +
        Provides an estimate of the number of non-zero elements stored in the tensor, + where this number is equal to the size for dense tensors, but equal to the actual + number of non-zero elements for sparse tensors. + Basically, this quantity is proportional to the allocated memory.
        +
        +
        Overrides:
        +
        estimateNumNonZeroElements in class Tensor
        +
        Returns:
        +
        A long number equal to or less than the tensor size.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        put

        +
        public Tensor put(long pos, + double value)
        +
        Description copied from class: Tensor
        +
        Assigns a value to a tensor element. All tensor operations use this function to wrap + element assignments.
        +
        +
        Specified by:
        +
        put in class Tensor
        +
        Parameters:
        +
        pos - The position of the tensor element
        +
        value - The value to assign
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
      • +
      • +
        +

        get

        +
        public double get(long pos)
        +
        Description copied from class: Tensor
        +
        Retrieves the value of a tensor element at a given position. All tensor operations use this function to wrap + element retrieval.
        +
        +
        Specified by:
        +
        get in class Tensor
        +
        Parameters:
        +
        pos - The position of the tensor element
        +
        Returns:
        +
        The value of the tensor element
        +
        +
        +
      • +
      • +
        +

        zeroCopy

        +
        public Tensor zeroCopy(long size)
        +
        Description copied from class: Tensor
        +
        Creates a tensor of the same class with a given size and all elements set to zero.
        +
        +
        Specified by:
        +
        zeroCopy in class Tensor
        +
        Parameters:
        +
        size - The size of the new tensor.
        +
        Returns:
        +
        A new tensor.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        traverseNonZeroElements

        +
        public Iterator<Long> traverseNonZeroElements()
        +
        Description copied from class: Tensor
        +
        Retrieves positions within the tensor that may hold non-zero elements. + This guarantees that all non-zero element positions are traversed, + but some of the returned positions could hold zero elements. + For example, DenseTensor traverses all + of its elements this way, whereas SparseTensor + traverses only non-zero elements.
        +
        +
        Specified by:
        +
        traverseNonZeroElements in class Tensor
        +
        Returns:
        +
        An iterator that traverses positions within the tensor.
        +
        +
        +
      • +
      • +
        +

        release

        +
        public void release()
        +
        Description copied from class: Tensor
        +
        If the subclassed tensor allows it, releases all memory it takes up + so that the garbage collector will eventually clean it up. This + memory will be released anyway by Java once there are no more + references to the object.
        +
        +
        Specified by:
        +
        release in class Tensor
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        persist

        +
        public void persist()
        +
        Description copied from class: Tensor
        +
        If supported by the subclassed tensor, invalidates calls to + Tensor.release() so that memory is de-allocated only when + object references expire.
        +
        +
        Specified by:
        +
        persist in class Tensor
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/matrix/AccessRow.html b/docs/javadoc/mklab/JGNN/core/matrix/AccessRow.html new file mode 100644 index 00000000..6eeaedb2 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/matrix/AccessRow.html @@ -0,0 +1,378 @@ + + + + +AccessRow + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class AccessRow

    +
    +
    java.lang.Object +
    mklab.JGNN.core.Tensor +
    mklab.JGNN.core.matrix.AccessRow
    +
    +
    +
    +
    +
    All Implemented Interfaces:
    +
    Iterable<Long>
    +
    +
    +
    public class AccessRow +extends Tensor
    +
    Accesses a row of a Matrix as if it were a dense Tensor. + Prefer using Matrix.accessRow(long), which wraps usage + of this class. Instances of this class share elements with the matrix which + they access and do not allocate new memory.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    See Also:
    +
    + +
    +
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        AccessRow

        +
        public AccessRow(Matrix matrix, + long row)
        +
        Instantiates a see-through access of a matrix row.
        +
        +
        Parameters:
        +
        matrix - The base matrix.
        +
        row - Which row to access.
        +
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        estimateNumNonZeroElements

        +
        public long estimateNumNonZeroElements()
        +
        Description copied from class: Tensor
        +
        Provides an estimate of the number of non-zero elements stored in the tensor, + where this number is equal to the size for dense tensors, but equal to the actual + number of non-zero elements for sparse tensors. + Basically, this quantity is proportional to the allocated memory.
        +
        +
        Overrides:
        +
        estimateNumNonZeroElements in class Tensor
        +
        Returns:
        +
        A long number equal to or less than the tensor size.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        put

        +
        public Tensor put(long pos, + double value)
        +
        Description copied from class: Tensor
        +
        Assigns a value to a tensor element. All tensor operations use this function to wrap + element assignments.
        +
        +
        Specified by:
        +
        put in class Tensor
        +
        Parameters:
        +
        pos - The position of the tensor element
        +
        value - The value to assign
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
      • +
      • +
        +

        get

        +
        public double get(long pos)
        +
        Description copied from class: Tensor
        +
        Retrieves the value of a tensor element at a given position. All tensor operations use this function to wrap + element retrieval.
        +
        +
        Specified by:
        +
        get in class Tensor
        +
        Parameters:
        +
        pos - The position of the tensor element
        +
        Returns:
        +
        The value of the tensor element
        +
        +
        +
      • +
      • +
        +

        zeroCopy

        +
        public Tensor zeroCopy(long size)
        +
        Description copied from class: Tensor
        +
        Creates a tensor of the same class with a given size and all elements set to zero.
        +
        +
        Specified by:
        +
        zeroCopy in class Tensor
        +
        Parameters:
        +
        size - The size of the new tensor.
        +
        Returns:
        +
        A new tensor.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        traverseNonZeroElements

        +
        public Iterator<Long> traverseNonZeroElements()
        +
        Description copied from class: Tensor
        +
        Retrieves positions within the tensor that may hold non-zero elements. + This guarantees that all non-zero element positions are traversed, + but some of the returned positions could hold zero elements. + For example, DenseTensor traverses all + of its elements this way, whereas SparseTensor + traverses only non-zero elements.
        +
        +
        Specified by:
        +
        traverseNonZeroElements in class Tensor
        +
        Returns:
        +
        An iterator that traverses positions within the tensor.
        +
        +
        +
      • +
      • +
        +

        release

        +
        public void release()
        +
        Description copied from class: Tensor
        +
        If the subclassed tensor allows it, releases all memory it takes up + so that the garbage collector will eventually clean it up. This + memory will be released anyway by Java once there are no more + references to the object.
        +
        +
        Specified by:
        +
        release in class Tensor
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        persist

        +
        public void persist()
        +
        Description copied from class: Tensor
        +
If supported by the subclassed tensor, invalidates calls to + Tensor.release() so that memory is de-allocated only when + object references expire.
        +
        +
        Specified by:
        +
        persist in class Tensor
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/matrix/ColumnRepetition.html b/docs/javadoc/mklab/JGNN/core/matrix/ColumnRepetition.html new file mode 100644 index 00000000..50d8be60 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/matrix/ColumnRepetition.html @@ -0,0 +1,400 @@ + + + + +ColumnRepetition + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class ColumnRepetition

    +
    +
    java.lang.Object +
    mklab.JGNN.core.Tensor +
    mklab.JGNN.core.Matrix +
    mklab.JGNN.core.matrix.ColumnRepetition
    +
    +
    +
    +
    +
    +
    All Implemented Interfaces:
    +
    Iterable<Long>
    +
    +
    +
    public class ColumnRepetition +extends Matrix
    +
Defines a matrix whose columns are all copies of a single Tensor. + To avoid potential confusion, setting element values (and any operation that would modify them) throws + an exception.
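A minimal sketch of how this wrapper might be used. It relies only on the constructor and methods documented on this page, except for the DenseTensor(long) size constructor, which is assumed here from the core tensor package:

```java
import mklab.JGNN.core.Tensor;
import mklab.JGNN.core.matrix.ColumnRepetition;
import mklab.JGNN.core.tensor.DenseTensor;

public class ColumnRepetitionSketch {
    public static void main(String[] args) {
        // A column tensor with three elements (the DenseTensor(long) constructor
        // is an assumption; adjust to whichever tensor factory your setup exposes)
        Tensor column = new DenseTensor(3);
        column.put(0, 1.0).put(1, 2.0).put(2, 3.0); // put(...) returns this, so calls chain
        // View the column repeated 4 times; no element data is copied
        ColumnRepetition repeated = new ColumnRepetition(4, column);
        System.out.println(repeated.getColumn().get(2)); // reads through to the wrapped tensor
    }
}
```

Since element values are shared with the wrapped tensor, later changes to `column` are visible through the matrix view.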
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    See Also:
    +
    + +
    +
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        ColumnRepetition

        +
        public ColumnRepetition(long times, + Tensor column)
        +
        Instantiates a matrix repeating a tensor to be treated as a column.
        +
        +
        Parameters:
        +
        times - The number of times the column should be repeated.
        +
        column - The column Tensor.
        +
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        getColumn

        +
        public Tensor getColumn()
        +
        Retrieves the wrapped column tensor.
        +
        +
        Returns:
        +
        The wrapped Tensor.
        +
        +
        +
      • +
      • +
        +

        zeroCopy

        +
        public Matrix zeroCopy(long rows, + long cols)
        +
        Description copied from class: Matrix
        +
Creates a matrix of the same class with all elements set to zero, but with + a given number of rows and columns.
        +
        +
        Specified by:
        +
        zeroCopy in class Matrix
        +
        Parameters:
        +
        rows - The number of rows of the matrix.
        +
        cols - The number of columns of the matrix.
        +
        Returns:
        +
        A Matrix of the same class.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        put

        +
        public Tensor put(long pos, + double value)
        +
        Description copied from class: Tensor
        +
Assigns a value to a tensor element. All tensor operations use this function to wrap + element assignments.
        +
        +
        Specified by:
        +
        put in class Tensor
        +
        Parameters:
        +
        pos - The position of the tensor element
        +
        value - The value to assign
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
      • +
      • +
        +

        get

        +
        public double get(long pos)
        +
        Description copied from class: Tensor
        +
        Retrieves the value of a tensor element at a given position. All tensor operations use this function to wrap + element retrieval.
        +
        +
        Specified by:
        +
        get in class Tensor
        +
        Parameters:
        +
        pos - The position of the tensor element
        +
        Returns:
        +
        The value of the tensor element
        +
        +
        +
      • +
      • +
        +

        traverseNonZeroElements

        +
        public Iterator<Long> traverseNonZeroElements()
        +
        Description copied from class: Tensor
        +
Retrieves positions within the tensor that may hold non-zero elements. + This guarantees that all non-zero element positions are traversed, + but some of the returned positions could hold zero elements. + For example, DenseTensor traverses all + of its elements this way, whereas SparseTensor + traverses only its non-zero elements.
        +
        +
        Specified by:
        +
        traverseNonZeroElements in class Tensor
        +
        Returns:
        +
        An iterator that traverses positions within the tensor.
        +
        +
        +
      • +
      • +
        +

        getNonZeroEntries

        +
        public Iterable<Map.Entry<Long,Long>> getNonZeroEntries()
        +
        Description copied from class: Matrix
        +
Retrieves an iterable that traverses (row, col) entry pairs + of non-zero entries.
        +
        +
        Specified by:
        +
        getNonZeroEntries in class Matrix
        +
        Returns:
        +
        An Entry iterable.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        release

        +
        public void release()
        +
        Description copied from class: Tensor
        +
        If the subclassed tensor allows it, release all memory it takes up + so that the garbage collector will eventually clean it up. This + memory will be released anyway by Java once there are no more + references to the object.
        +
        +
        Specified by:
        +
        release in class Tensor
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        persist

        +
        public void persist()
        +
        Description copied from class: Tensor
        +
        If supported by the subclassed tensor, invalidates calls to + Tensor.release() so that memory is a de-allocated only when + object references expire.
        +
        +
        Specified by:
        +
        persist in class Tensor
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/matrix/DenseMatrix.html b/docs/javadoc/mklab/JGNN/core/matrix/DenseMatrix.html new file mode 100644 index 00000000..af2c773c --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/matrix/DenseMatrix.html @@ -0,0 +1,452 @@ + + + + +DenseMatrix + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class DenseMatrix

    +
    +
    java.lang.Object +
    mklab.JGNN.core.Tensor +
    mklab.JGNN.core.Matrix +
    mklab.JGNN.core.matrix.DenseMatrix
    +
    +
    +
    +
    +
    +
    All Implemented Interfaces:
    +
    Iterable<Long>
    +
    +
    +
    public class DenseMatrix +extends Matrix
    +
Implements a dense Matrix where all elements are stored in memory. + For matrices with more than MAXINT elements, or with many zero elements, use the SparseMatrix + structure.
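The accessors described below can be combined as in the following sketch, which uses only the linear put(pos, value)/get(pos) accessors and matmul documented on this page:

```java
import mklab.JGNN.core.Matrix;
import mklab.JGNN.core.matrix.DenseMatrix;

public class DenseMatrixSketch {
    public static void main(String[] args) {
        // Build a 2x2 identity via linear positions; positions 0 and 3 are the
        // diagonal of a 2x2 matrix under either row- or column-major ordering
        Matrix eye = new DenseMatrix(2, 2);
        eye.put(0, 1.0);
        eye.put(3, 1.0);
        Matrix other = new DenseMatrix(2, 2);
        for (long pos = 0; pos < 4; pos++)
            other.put(pos, pos + 1.0);
        // Multiplying by the identity yields a matrix equal to `other`
        Matrix product = eye.matmul(other);
        System.out.println(product.get(0)); // same element as other.get(0)
    }
}
```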
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        DenseMatrix

        +
        public DenseMatrix(long rows, + long cols)
        +
        Generates a dense matrix with the designated number of rows and columns.
        +
        +
        Parameters:
        +
        rows - The number of rows.
        +
        cols - The number of columns.
        +
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        zeroCopy

        +
        public Matrix zeroCopy(long rows, + long cols)
        +
        Description copied from class: Matrix
        +
Creates a matrix of the same class with all elements set to zero, but with + a given number of rows and columns.
        +
        +
        Specified by:
        +
        zeroCopy in class Matrix
        +
        Parameters:
        +
        rows - The number of rows of the matrix.
        +
        cols - The number of columns of the matrix.
        +
        Returns:
        +
        A Matrix of the same class.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        put

        +
        public Tensor put(long pos, + double value)
        +
        Description copied from class: Tensor
        +
Assigns a value to a tensor element. All tensor operations use this function to wrap + element assignments.
        +
        +
        Specified by:
        +
        put in class Tensor
        +
        Parameters:
        +
        pos - The position of the tensor element
        +
        value - The value to assign
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
      • +
      • +
        +

        get

        +
        public double get(long pos)
        +
        Description copied from class: Tensor
        +
        Retrieves the value of a tensor element at a given position. All tensor operations use this function to wrap + element retrieval.
        +
        +
        Specified by:
        +
        get in class Tensor
        +
        Parameters:
        +
        pos - The position of the tensor element
        +
        Returns:
        +
        The value of the tensor element
        +
        +
        +
      • +
      • +
        +

        traverseNonZeroElements

        +
        public Iterator<Long> traverseNonZeroElements()
        +
        Description copied from class: Tensor
        +
Retrieves positions within the tensor that may hold non-zero elements. + This guarantees that all non-zero element positions are traversed, + but some of the returned positions could hold zero elements. + For example, DenseTensor traverses all + of its elements this way, whereas SparseTensor + traverses only its non-zero elements.
        +
        +
        Specified by:
        +
        traverseNonZeroElements in class Tensor
        +
        Returns:
        +
        An iterator that traverses positions within the tensor.
        +
        +
        +
      • +
      • +
        +

        getNonZeroEntries

        +
        public Iterable<Map.Entry<Long,Long>> getNonZeroEntries()
        +
        Description copied from class: Matrix
        +
Retrieves an iterable that traverses (row, col) entry pairs + of non-zero entries.
        +
        +
        Specified by:
        +
        getNonZeroEntries in class Matrix
        +
        Returns:
        +
        An Entry iterable.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        release

        +
        public void release()
        +
        Description copied from class: Tensor
        +
        If the subclassed tensor allows it, release all memory it takes up + so that the garbage collector will eventually clean it up. This + memory will be released anyway by Java once there are no more + references to the object.
        +
        +
        Specified by:
        +
        release in class Tensor
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        persist

        +
        public void persist()
        +
        Description copied from class: Tensor
        +
If supported by the subclassed tensor, invalidates calls to + Tensor.release() so that memory is de-allocated only when + object references expire.
        +
        +
        Specified by:
        +
        persist in class Tensor
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        matmul

        +
        public Matrix matmul(Matrix with)
        +
        Description copied from class: Matrix
        +
Performs the matrix multiplication this*with and returns the outcome in a new matrix.
        +
        +
        Overrides:
        +
        matmul in class Matrix
        +
        Parameters:
        +
        with - The matrix to multiply with.
        +
        Returns:
        +
        A matrix that stores the outcome of the multiplication.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        matmul

        +
        public Matrix matmul(Matrix with, + boolean transposeThis, + boolean transposeWith)
        +
        Description copied from class: Matrix
        +
Can be used to perform fast computation of the matrix multiplications +
this*with, +
this.transposed()*with, +
this*with.transposed(), +
and this.transposed()*with.transposed() +
while avoiding the overhead of calling + Matrix.transposed(). In the first of those cases, this operation + becomes equivalent to Matrix.matmul(Matrix).
        +
        +
        Overrides:
        +
        matmul in class Matrix
        +
        Parameters:
        +
        with - The matrix to multiply with.
        +
        transposeThis - Whether this matrix should be transposed before multiplication.
        +
        transposeWith - Whether the multiplied with matrix should be transposed before multiplication.
        +
        Returns:
        +
        A matrix that stores the outcome of the multiplication.
        +
        See Also:
        +
        + +
        +
        +
        +
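For instance, computing this.transposed()*with without materializing the transpose could look like the following sketch (dimensions chosen only for illustration):

```java
import mklab.JGNN.core.Matrix;
import mklab.JGNN.core.matrix.DenseMatrix;

public class TransposedMatmulSketch {
    public static void main(String[] args) {
        // A is 3x2 and B is 3x4, so A^T * B is a 2x4 matrix; the boolean flags
        // replace explicit transposed() calls and avoid their overhead
        Matrix a = new DenseMatrix(3, 2);
        Matrix b = new DenseMatrix(3, 4);
        Matrix result = a.matmul(b, true, false); // same outcome as a.transposed().matmul(b)
    }
}
```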
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/matrix/Diagonal.html b/docs/javadoc/mklab/JGNN/core/matrix/Diagonal.html new file mode 100644 index 00000000..26c583fb --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/matrix/Diagonal.html @@ -0,0 +1,371 @@ + + + + +Diagonal + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class Diagonal

    +
    +
    java.lang.Object +
    mklab.JGNN.core.Tensor +
    mklab.JGNN.core.Matrix +
    mklab.JGNN.core.matrix.Diagonal
    +
    +
    +
    +
    +
    +
    All Implemented Interfaces:
    +
    Iterable<Long>
    +
    +
    +
    public class Diagonal +extends Matrix
    +
Implements a square matrix whose diagonal elements are determined by the corresponding values of + an underlying tensor and whose off-diagonal elements are zero. Elements are shared between the matrix + and its diagonal tensor. This structure is similar to a sparse matrix.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        estimateNumNonZeroElements

        +
        public long estimateNumNonZeroElements()
        +
        Description copied from class: Tensor
        +
Provides an estimate of the number of non-zero elements stored in the tensor; + this number equals the size for dense tensors, but the actual + number of non-zero elements for sparse tensors. + Basically, this quantity is proportional to the allocated memory.
        +
        +
        Overrides:
        +
        estimateNumNonZeroElements in class Tensor
        +
        Returns:
        +
A long number equal to or less than the tensor size.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        getNonZeroEntries

        +
        public Iterable<Map.Entry<Long,Long>> getNonZeroEntries()
        +
        Description copied from class: Matrix
        +
Retrieves an iterable that traverses (row, col) entry pairs + of non-zero entries.
        +
        +
        Specified by:
        +
        getNonZeroEntries in class Matrix
        +
        Returns:
        +
        An Entry iterable.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        zeroCopy

        +
        public Matrix zeroCopy(long rows, + long cols)
        +
        Description copied from class: Matrix
        +
Creates a matrix of the same class with all elements set to zero, but with + a given number of rows and columns.
        +
        +
        Specified by:
        +
        zeroCopy in class Matrix
        +
        Parameters:
        +
        rows - The number of rows of the matrix.
        +
        cols - The number of columns of the matrix.
        +
        Returns:
        +
        A Matrix of the same class.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        put

        +
        public Tensor put(long pos, + double value)
        +
        Description copied from class: Tensor
        +
Assigns a value to a tensor element. All tensor operations use this function to wrap + element assignments.
        +
        +
        Specified by:
        +
        put in class Tensor
        +
        Parameters:
        +
        pos - The position of the tensor element
        +
        value - The value to assign
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
      • +
      • +
        +

        get

        +
        public double get(long pos)
        +
        Description copied from class: Tensor
        +
        Retrieves the value of a tensor element at a given position. All tensor operations use this function to wrap + element retrieval.
        +
        +
        Specified by:
        +
        get in class Tensor
        +
        Parameters:
        +
        pos - The position of the tensor element
        +
        Returns:
        +
        The value of the tensor element
        +
        +
        +
      • +
      • +
        +

        traverseNonZeroElements

        +
        public Iterator<Long> traverseNonZeroElements()
        +
        Description copied from class: Tensor
        +
Retrieves positions within the tensor that may hold non-zero elements. + This guarantees that all non-zero element positions are traversed, + but some of the returned positions could hold zero elements. + For example, DenseTensor traverses all + of its elements this way, whereas SparseTensor + traverses only its non-zero elements.
        +
        +
        Specified by:
        +
        traverseNonZeroElements in class Tensor
        +
        Returns:
        +
        An iterator that traverses positions within the tensor.
        +
        +
        +
      • +
      • +
        +

        release

        +
        public void release()
        +
        Description copied from class: Tensor
        +
        If the subclassed tensor allows it, release all memory it takes up + so that the garbage collector will eventually clean it up. This + memory will be released anyway by Java once there are no more + references to the object.
        +
        +
        Specified by:
        +
        release in class Tensor
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        persist

        +
        public void persist()
        +
        Description copied from class: Tensor
        +
If supported by the subclassed tensor, invalidates calls to + Tensor.release() so that memory is de-allocated only when + object references expire.
        +
        +
        Specified by:
        +
        persist in class Tensor
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/matrix/RepeatMatrix.html b/docs/javadoc/mklab/JGNN/core/matrix/RepeatMatrix.html new file mode 100644 index 00000000..11bfefa6 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/matrix/RepeatMatrix.html @@ -0,0 +1,378 @@ + + + + +RepeatMatrix + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class RepeatMatrix

    +
    +
    java.lang.Object +
    mklab.JGNN.core.Tensor +
    mklab.JGNN.core.Matrix +
    mklab.JGNN.core.matrix.RepeatMatrix
    +
    +
    +
    +
    +
    +
    All Implemented Interfaces:
    +
    Iterable<Long>
    +
    +
    +
    public class RepeatMatrix +extends Matrix
    +
Implements a Matrix whose elements are all equal.
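A short sketch of using this class, based on the constructor documented below; an all-ones matrix is a typical use:

```java
import mklab.JGNN.core.Matrix;
import mklab.JGNN.core.matrix.RepeatMatrix;

public class RepeatMatrixSketch {
    public static void main(String[] args) {
        // A 2x3 matrix in which every element holds 1.0, without needing to
        // assign rows*cols individual values
        Matrix ones = new RepeatMatrix(1.0, 2, 3);
        System.out.println(ones.get(0)); // every position reads the same value
    }
}
```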
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        RepeatMatrix

        +
        public RepeatMatrix(double value, + long rows, + long cols)
        +
Generates a matrix filled with a given value, with the designated number of rows and columns.
        +
        +
        Parameters:
        +
value - The value assigned to all elements.
+
rows - The number of rows.
        +
        cols - The number of columns.
        +
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        zeroCopy

        +
        public Matrix zeroCopy(long rows, + long cols)
        +
        Description copied from class: Matrix
        +
Creates a matrix of the same class with all elements set to zero, but with + a given number of rows and columns.
        +
        +
        Specified by:
        +
        zeroCopy in class Matrix
        +
        Parameters:
        +
        rows - The number of rows of the matrix.
        +
        cols - The number of columns of the matrix.
        +
        Returns:
        +
        A Matrix of the same class.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        put

        +
        public Tensor put(long pos, + double value)
        +
        Description copied from class: Tensor
        +
Assigns a value to a tensor element. All tensor operations use this function to wrap + element assignments.
        +
        +
        Specified by:
        +
        put in class Tensor
        +
        Parameters:
        +
        pos - The position of the tensor element
        +
        value - The value to assign
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
      • +
      • +
        +

        get

        +
        public double get(long pos)
        +
        Description copied from class: Tensor
        +
        Retrieves the value of a tensor element at a given position. All tensor operations use this function to wrap + element retrieval.
        +
        +
        Specified by:
        +
        get in class Tensor
        +
        Parameters:
        +
        pos - The position of the tensor element
        +
        Returns:
        +
        The value of the tensor element
        +
        +
        +
      • +
      • +
        +

        traverseNonZeroElements

        +
        public Iterator<Long> traverseNonZeroElements()
        +
        Description copied from class: Tensor
        +
Retrieves positions within the tensor that may hold non-zero elements. + This guarantees that all non-zero element positions are traversed, + but some of the returned positions could hold zero elements. + For example, DenseTensor traverses all + of its elements this way, whereas SparseTensor + traverses only its non-zero elements.
        +
        +
        Specified by:
        +
        traverseNonZeroElements in class Tensor
        +
        Returns:
        +
        An iterator that traverses positions within the tensor.
        +
        +
        +
      • +
      • +
        +

        getNonZeroEntries

        +
        public Iterable<Map.Entry<Long,Long>> getNonZeroEntries()
        +
        Description copied from class: Matrix
        +
Retrieves an iterable that traverses (row, col) entry pairs + of non-zero entries.
        +
        +
        Specified by:
        +
        getNonZeroEntries in class Matrix
        +
        Returns:
        +
        An Entry iterable.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        release

        +
        public void release()
        +
        Description copied from class: Tensor
        +
        If the subclassed tensor allows it, release all memory it takes up + so that the garbage collector will eventually clean it up. This + memory will be released anyway by Java once there are no more + references to the object.
        +
        +
        Specified by:
        +
        release in class Tensor
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        persist

        +
        public void persist()
        +
        Description copied from class: Tensor
        +
        If supported by the subclassed tensor, invalidates calls to + Tensor.release() so that memory is a de-allocated only when + object references expire.
        +
        +
        Specified by:
        +
        persist in class Tensor
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/matrix/RowRepetition.html b/docs/javadoc/mklab/JGNN/core/matrix/RowRepetition.html new file mode 100644 index 00000000..53b96015 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/matrix/RowRepetition.html @@ -0,0 +1,384 @@ + + + + +RowRepetition + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class RowRepetition

    +
    +
    java.lang.Object +
    mklab.JGNN.core.Tensor +
    mklab.JGNN.core.Matrix +
    mklab.JGNN.core.matrix.RowRepetition
    +
    +
    +
    +
    +
    +
    All Implemented Interfaces:
    +
    Iterable<Long>
    +
    +
    +
    public class RowRepetition +extends Matrix
    +
Defines a matrix whose rows are all copies of a single Tensor. + To avoid potential confusion, setting element values (and any operation that would modify them) throws + an exception.
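A sketch analogous to the column case, using the constructor documented below; the DenseTensor(long) size constructor is again an assumption from the core tensor package:

```java
import mklab.JGNN.core.Tensor;
import mklab.JGNN.core.matrix.RowRepetition;
import mklab.JGNN.core.tensor.DenseTensor;

public class RowRepetitionSketch {
    public static void main(String[] args) {
        // A row tensor with three elements (DenseTensor(long) assumed, as elsewhere)
        Tensor row = new DenseTensor(3);
        row.put(0, 1.0).put(1, 2.0).put(2, 3.0);
        // View the row stacked 5 times; element data is shared, not copied
        RowRepetition repeated = new RowRepetition(row, 5);
        // Traverse the (row, col) pairs that may hold non-zero values
        repeated.getNonZeroEntries().forEach(System.out::println);
    }
}
```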
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    See Also:
    +
    + +
    +
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        RowRepetition

        +
        public RowRepetition(Tensor row, + long times)
        +
        Instantiates a matrix repeating a tensor to be treated as a row.
        +
        +
        Parameters:
        +
row - The row Tensor.
        +
        times - The number of times the row should be repeated.
        +
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        zeroCopy

        +
        public Matrix zeroCopy(long rows, + long cols)
        +
        Description copied from class: Matrix
        +
Creates a matrix of the same class with all elements set to zero, but with + a given number of rows and columns.
        +
        +
        Specified by:
        +
        zeroCopy in class Matrix
        +
        Parameters:
        +
        rows - The number of rows of the matrix.
        +
        cols - The number of columns of the matrix.
        +
        Returns:
        +
        A Matrix of the same class.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        put

        +
        public Tensor put(long pos, + double value)
        +
        Description copied from class: Tensor
        +
Assigns a value to a tensor element. All tensor operations use this function to wrap + element assignments.
        +
        +
        Specified by:
        +
        put in class Tensor
        +
        Parameters:
        +
        pos - The position of the tensor element
        +
        value - The value to assign
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
      • +
      • +
        +

        get

        +
        public double get(long pos)
        +
        Description copied from class: Tensor
        +
        Retrieves the value of a tensor element at a given position. All tensor operations use this function to wrap + element retrieval.
        +
        +
        Specified by:
        +
        get in class Tensor
        +
        Parameters:
        +
        pos - The position of the tensor element
        +
        Returns:
        +
        The value of the tensor element
        +
        +
        +
      • +
      • +
        +

        traverseNonZeroElements

        +
        public Iterator<Long> traverseNonZeroElements()
        +
        Description copied from class: Tensor
        +
Retrieves positions within the tensor that may hold non-zero elements. + This guarantees that all non-zero element positions are traversed, + but some of the returned positions could hold zero elements. + For example, DenseTensor traverses all + of its elements this way, whereas SparseTensor + traverses only its non-zero elements.
        +
        +
        Specified by:
        +
        traverseNonZeroElements in class Tensor
        +
        Returns:
        +
        An iterator that traverses positions within the tensor.
        +
        +
        +
      • +
      • +
        +

        getNonZeroEntries

        +
        public Iterable<Map.Entry<Long,Long>> getNonZeroEntries()
        +
        Description copied from class: Matrix
        +
Retrieves an iterable that traverses (row, col) entry pairs + of non-zero entries.
        +
        +
        Specified by:
        +
        getNonZeroEntries in class Matrix
        +
        Returns:
        +
        An Entry iterable.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        release

        +
        public void release()
        +
        Description copied from class: Tensor
        +
        If the subclassed tensor allows it, release all memory it takes up + so that the garbage collector will eventually clean it up. This + memory will be released anyway by Java once there are no more + references to the object.
        +
        +
        Specified by:
        +
        release in class Tensor
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        persist

        +
        public void persist()
        +
        Description copied from class: Tensor
        +
If supported by the subclassed tensor, invalidates calls to + Tensor.release() so that memory is de-allocated only when + object references expire.
        +
        +
        Specified by:
        +
        persist in class Tensor
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/matrix/SparseMatrix.html b/docs/javadoc/mklab/JGNN/core/matrix/SparseMatrix.html new file mode 100644 index 00000000..b488b2ec --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/matrix/SparseMatrix.html @@ -0,0 +1,426 @@ + + + + +SparseMatrix + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class SparseMatrix

    +
    +
    java.lang.Object +
    mklab.JGNN.core.Tensor +
    mklab.JGNN.core.Matrix +
    mklab.JGNN.core.matrix.SparseMatrix
    +
    +
    +
    +
    +
    +
    All Implemented Interfaces:
    +
    Iterable<Long>
    +
    +
    +
    public class SparseMatrix +extends Matrix
    +
    A sparse Matrix that allocates memory only for non-zero elements. Operations + that involve all matrix elements are slower compared to a DenseMatrix.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
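The sparse storage idea described above can be sketched as follows. This is a hypothetical, simplified model for illustration only, not JGNN's actual implementation: it keeps only non-zero values in a hash map keyed by a single long position, so memory grows with the number of non-zeros rather than with rows*cols.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of a sparse matrix: memory is allocated only for
// non-zero elements, stored in a map keyed by pos = row + col * rows.
class SparseSketch {
    final long rows, cols;
    final Map<Long, Double> values = new HashMap<>();

    SparseSketch(long rows, long cols) { this.rows = rows; this.cols = cols; }

    SparseSketch put(long row, long col, double value) {
        long pos = row + col * rows;
        if (value == 0) values.remove(pos); // keep only non-zeros
        else values.put(pos, value);
        return this;                        // chainable, like Tensor.put
    }

    double get(long row, long col) {
        return values.getOrDefault(row + col * rows, 0.0);
    }

    long estimateNumNonZeroElements() { return values.size(); }

    public static void main(String[] args) {
        SparseSketch m = new SparseSketch(1000, 1000); // 10^6 logical elements
        m.put(3, 7, 2.5).put(42, 42, -1.0);
        System.out.println(m.get(3, 7));                    // 2.5
        System.out.println(m.estimateNumNonZeroElements()); // 2, not 10^6
    }
}
```

This also shows why operations touching all elements are slower than on a dense matrix: every `get` on a zero position still pays a map lookup.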
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        SparseMatrix

        +
        public SparseMatrix(long rows, + long cols)
        +
        Generates a sparse matrix with the designated number of rows and columns.
        +
        +
        Parameters:
        +
        rows - The number of rows.
        +
        cols - The number of columns.
        +
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        zeroCopy

        +
        public Matrix zeroCopy(long rows, + long cols)
        +
        Description copied from class: Matrix
        +
Creates a matrix of the same class with all elements set to zero, but with + a given number of rows and columns.
        +
        +
        Specified by:
        +
        zeroCopy in class Matrix
        +
        Parameters:
        +
        rows - The number of rows of the matrix.
        +
        cols - The number of columns of the matrix.
        +
        Returns:
        +
        A Matrix of the same class.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        put

        +
        public Tensor put(long pos, + double value)
        +
        Description copied from class: Tensor
        +
Assigns a value to a tensor element. All tensor operations use this function to wrap + element assignments.
        +
        +
        Specified by:
        +
        put in class Tensor
        +
        Parameters:
        +
        pos - The position of the tensor element
        +
        value - The value to assign
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
      • +
      • +
        +

        get

        +
        public double get(long pos)
        +
        Description copied from class: Tensor
        +
        Retrieves the value of a tensor element at a given position. All tensor operations use this function to wrap + element retrieval.
        +
        +
        Specified by:
        +
        get in class Tensor
        +
        Parameters:
        +
        pos - The position of the tensor element
        +
        Returns:
        +
        The value of the tensor element
        +
        +
        +
      • +
      • +
        +

        traverseNonZeroElements

        +
        public Iterator<Long> traverseNonZeroElements()
        +
        Description copied from class: Tensor
        +
Retrieves positions within the tensor that may hold non-zero elements. + This guarantees that all non-zero element positions are traversed, + but some of the returned positions could hold zero elements. + For example, DenseTensor traverses all + of its elements this way, whereas SparseTensor + traverses only non-zero elements.
        +
        +
        Specified by:
        +
        traverseNonZeroElements in class Tensor
        +
        Returns:
        +
        An iterator that traverses positions within the tensor.
        +
        +
        +
      • +
      • +
        +

        describe

        +
        public String describe()
        +
        Description copied from class: Tensor
        +
        Describes the type, size and other characteristics of the tensor.
        +
        +
        Overrides:
        +
        describe in class Matrix
        +
        Returns:
        +
        A String description.
        +
        +
        +
      • +
      • +
        +

        estimateNumNonZeroElements

        +
        public long estimateNumNonZeroElements()
        +
        Description copied from class: Tensor
        +
Provides an estimation of the number of non-zero elements stored in the tensor, + where this number is equal to the size for dense tensors, but equal to the actual + number of non-zero elements for sparse tensors. + This quantity is proportional to the allocated memory.
        +
        +
        Overrides:
        +
        estimateNumNonZeroElements in class Tensor
        +
        Returns:
        +
A long number equal to or less than the tensor size.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        getNonZeroEntries

        +
        public Iterable<Map.Entry<Long,Long>> getNonZeroEntries()
        +
        Description copied from class: Matrix
        +
Retrieves an iterable that traverses (row, col) entry pairs + of non-zero entries.
        +
        +
        Specified by:
        +
        getNonZeroEntries in class Matrix
        +
        Returns:
        +
        An Entry iterable.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        release

        +
        public void release()
        +
        Description copied from class: Tensor
        +
If the subclassed tensor allows it, releases all memory it takes up + so that the garbage collector will eventually clean it up. This + memory will be released anyway by Java once there are no more + references to the object.
        +
        +
        Specified by:
        +
        release in class Tensor
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        persist

        +
        public void persist()
        +
        Description copied from class: Tensor
        +
If supported by the subclassed tensor, invalidates calls to + Tensor.release() so that memory is de-allocated only when + object references expire.
        +
        +
        Specified by:
        +
        persist in class Tensor
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/matrix/SparseSymmetric.html b/docs/javadoc/mklab/JGNN/core/matrix/SparseSymmetric.html new file mode 100644 index 00000000..8f4ff2a5 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/matrix/SparseSymmetric.html @@ -0,0 +1,417 @@ + + + + +SparseSymmetric + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class SparseSymmetric

    +
    +
    java.lang.Object +
    mklab.JGNN.core.Tensor +
    mklab.JGNN.core.Matrix +
    mklab.JGNN.core.matrix.SparseSymmetric
    +
    +
    +
    +
    +
    +
    All Implemented Interfaces:
    +
    Iterable<Long>
    +
    +
    +
    public class SparseSymmetric +extends Matrix
    +
    Deprecated. +
    Under development.
    +
    +
Defines a SparseMatrix that is constrained to be symmetric + in that it returns the sum of values put on elements (i,j) and (j,i).
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        SparseSymmetric

        +
        public SparseSymmetric(long rows, + long cols)
        +
        Deprecated.
        +
        Generates a symmetric matrix with the designated number of rows and columns.
        +
        +
        Parameters:
        +
        rows - The number of rows.
        +
        cols - The number of columns.
        +
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        zeroCopy

        +
        public Matrix zeroCopy(long rows, + long cols)
        +
        Deprecated.
        +
        Description copied from class: Matrix
        +
Creates a matrix of the same class with all elements set to zero, but with + a given number of rows and columns.
        +
        +
        Specified by:
        +
        zeroCopy in class Matrix
        +
        Parameters:
        +
        rows - The number of rows of the matrix.
        +
        cols - The number of columns of the matrix.
        +
        Returns:
        +
        A Matrix of the same class.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        put

        +
        public Tensor put(long pos, + double value)
        +
        Deprecated.
        +
        Description copied from class: Tensor
        +
Assigns a value to a tensor element. All tensor operations use this function to wrap + element assignments.
        +
        +
        Specified by:
        +
        put in class Tensor
        +
        Parameters:
        +
        pos - The position of the tensor element
        +
        value - The value to assign
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
      • +
      • +
        +

        get

        +
        public double get(long pos)
        +
        Deprecated.
        +
        Description copied from class: Tensor
        +
        Retrieves the value of a tensor element at a given position. All tensor operations use this function to wrap + element retrieval.
        +
        +
        Specified by:
        +
        get in class Tensor
        +
        Parameters:
        +
        pos - The position of the tensor element
        +
        Returns:
        +
        The value of the tensor element
        +
        +
        +
      • +
      • +
        +

        traverseNonZeroElements

        +
        public Iterator<Long> traverseNonZeroElements()
        +
        Deprecated.
        +
        Description copied from class: Tensor
        +
Retrieves positions within the tensor that may hold non-zero elements. + This guarantees that all non-zero element positions are traversed, + but some of the returned positions could hold zero elements. + For example, DenseTensor traverses all + of its elements this way, whereas SparseTensor + traverses only non-zero elements.
        +
        +
        Specified by:
        +
        traverseNonZeroElements in class Tensor
        +
        Returns:
        +
        An iterator that traverses positions within the tensor.
        +
        +
        +
      • +
      • +
        +

        describe

        +
        public String describe()
        +
        Deprecated.
        +
        Description copied from class: Tensor
        +
        Describes the type, size and other characteristics of the tensor.
        +
        +
        Overrides:
        +
        describe in class Matrix
        +
        Returns:
        +
        A String description.
        +
        +
        +
      • +
      • +
        +

        getNonZeroEntries

        +
        public Iterable<Map.Entry<Long,Long>> getNonZeroEntries()
        +
        Deprecated.
        +
        Description copied from class: Matrix
        +
Retrieves an iterable that traverses (row, col) entry pairs + of non-zero entries.
        +
        +
        Specified by:
        +
        getNonZeroEntries in class Matrix
        +
        Returns:
        +
        An Entry iterable.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        release

        +
        public void release()
        +
        Deprecated.
        +
        Description copied from class: Tensor
        +
If the subclassed tensor allows it, releases all memory it takes up + so that the garbage collector will eventually clean it up. This + memory will be released anyway by Java once there are no more + references to the object.
        +
        +
        Specified by:
        +
        release in class Tensor
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        persist

        +
        public void persist()
        +
        Deprecated.
        +
        Description copied from class: Tensor
        +
If supported by the subclassed tensor, invalidates calls to + Tensor.release() so that memory is de-allocated only when + object references expire.
        +
        +
        Specified by:
        +
        persist in class Tensor
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/matrix/TransposedMatrix.html b/docs/javadoc/mklab/JGNN/core/matrix/TransposedMatrix.html new file mode 100644 index 00000000..374f143a --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/matrix/TransposedMatrix.html @@ -0,0 +1,439 @@ + + + + +TransposedMatrix + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class TransposedMatrix

    +
    +
    java.lang.Object +
    mklab.JGNN.core.Tensor +
    mklab.JGNN.core.Matrix +
    mklab.JGNN.core.matrix.TransposedMatrix
    +
    +
    +
    +
    +
    +
    All Implemented Interfaces:
    +
    Iterable<Long>
    +
    +
    +
    public class TransposedMatrix +extends Matrix
    +
    Generates a transposed version of a base matrix, with which it shares elements. + This avoids excessive memory allocation and can be used to quickly perform + operations with a transposed version of a matrix. Prefer using + Matrix.asTransposed(), which wraps usage of this class.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
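The shared-element behavior described above can be sketched with a minimal transposed view. This is an illustrative model under stated assumptions, not JGNN's TransposedMatrix: it holds only a reference to the base storage and swaps (row, col) on every access, so no extra memory is allocated and edits go through to the base.

```java
// Minimal sketch of a transposed view: it stores no data of its own,
// only a reference to the base matrix, and swaps (row, col) on access.
// Editing the view therefore edits the base, and vice versa.
class TransposedView {
    final double[][] base;
    TransposedView(double[][] base) { this.base = base; }
    double get(int row, int col) { return base[col][row]; } // swapped indices
    void put(int row, int col, double v) { base[col][row] = v; }

    public static void main(String[] args) {
        double[][] base = {{1, 2}, {3, 4}};
        TransposedView t = new TransposedView(base);
        System.out.println(t.get(0, 1)); // 3.0: element (1, 0) of the base
        t.put(1, 0, 9);                  // writes through to base[0][1]
        System.out.println(base[0][1]);  // 9.0
    }
}
```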
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        TransposedMatrix

        +
        public TransposedMatrix(Matrix matrix)
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        estimateNumNonZeroElements

        +
        public long estimateNumNonZeroElements()
        +
        Description copied from class: Tensor
        +
Provides an estimation of the number of non-zero elements stored in the tensor, + where this number is equal to the size for dense tensors, but equal to the actual + number of non-zero elements for sparse tensors. + This quantity is proportional to the allocated memory.
        +
        +
        Overrides:
        +
        estimateNumNonZeroElements in class Tensor
        +
        Returns:
        +
A long number equal to or less than the tensor size.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        getNonZeroEntries

        +
        public Iterable<Map.Entry<Long,Long>> getNonZeroEntries()
        +
        Description copied from class: Matrix
        +
Retrieves an iterable that traverses (row, col) entry pairs + of non-zero entries.
        +
        +
        Specified by:
        +
        getNonZeroEntries in class Matrix
        +
        Returns:
        +
        An Entry iterable.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        zeroCopy

        +
        public Matrix zeroCopy(long rows, + long cols)
        +
        Description copied from class: Matrix
        +
Creates a matrix of the same class with all elements set to zero, but with + a given number of rows and columns.
        +
        +
        Specified by:
        +
        zeroCopy in class Matrix
        +
        Parameters:
        +
        rows - The number of rows of the matrix.
        +
        cols - The number of columns of the matrix.
        +
        Returns:
        +
        A Matrix of the same class.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        put

        +
        public Tensor put(long pos, + double value)
        +
        Description copied from class: Tensor
        +
Assigns a value to a tensor element. All tensor operations use this function to wrap + element assignments.
        +
        +
        Specified by:
        +
        put in class Tensor
        +
        Parameters:
        +
        pos - The position of the tensor element
        +
        value - The value to assign
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
      • +
      • +
        +

        get

        +
        public double get(long pos)
        +
        Description copied from class: Tensor
        +
        Retrieves the value of a tensor element at a given position. All tensor operations use this function to wrap + element retrieval.
        +
        +
        Specified by:
        +
        get in class Tensor
        +
        Parameters:
        +
        pos - The position of the tensor element
        +
        Returns:
        +
        The value of the tensor element
        +
        +
        +
      • +
      • +
        +

        traverseNonZeroElements

        +
        public Iterator<Long> traverseNonZeroElements()
        +
        Description copied from class: Tensor
        +
Retrieves positions within the tensor that may hold non-zero elements. + This guarantees that all non-zero element positions are traversed, + but some of the returned positions could hold zero elements. + For example, DenseTensor traverses all + of its elements this way, whereas SparseTensor + traverses only non-zero elements.
        +
        +
        Specified by:
        +
        traverseNonZeroElements in class Tensor
        +
        Returns:
        +
        An iterator that traverses positions within the tensor.
        +
        +
        +
      • +
      • +
        +

        asTransposed

        +
        public Matrix asTransposed()
        +
        Description copied from class: Matrix
        +
        Creates a transposed version of the matrix that accesses the same elements (thus, editing one + edits the other) without allocating additional memory.
        +
        +
        Overrides:
        +
        asTransposed in class Matrix
        +
        Returns:
        +
        A TransposedMatrix.
        +
        +
        +
      • +
      • +
        +

        describe

        +
        public String describe()
        +
        Description copied from class: Tensor
        +
        Describes the type, size and other characteristics of the tensor.
        +
        +
        Overrides:
        +
        describe in class Matrix
        +
        Returns:
        +
        A String description.
        +
        +
        +
      • +
      • +
        +

        release

        +
        public void release()
        +
        Description copied from class: Tensor
        +
If the subclassed tensor allows it, releases all memory it takes up + so that the garbage collector will eventually clean it up. This + memory will be released anyway by Java once there are no more + references to the object.
        +
        +
        Specified by:
        +
        release in class Tensor
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        persist

        +
        public void persist()
        +
        Description copied from class: Tensor
        +
If supported by the subclassed tensor, invalidates calls to + Tensor.release() so that memory is de-allocated only when + object references expire.
        +
        +
        Specified by:
        +
        persist in class Tensor
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/matrix/VectorizedMatrix.html b/docs/javadoc/mklab/JGNN/core/matrix/VectorizedMatrix.html new file mode 100644 index 00000000..7c7913e9 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/matrix/VectorizedMatrix.html @@ -0,0 +1,475 @@ + + + + +VectorizedMatrix + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class VectorizedMatrix

    +
    +
    java.lang.Object +
    mklab.JGNN.core.Tensor +
    mklab.JGNN.core.Matrix +
    mklab.JGNN.core.matrix.VectorizedMatrix
    +
    +
    +
    +
    +
    +
    All Implemented Interfaces:
    +
    Iterable<Long>
    +
    +
    +
    public class VectorizedMatrix +extends Matrix
    +
Implements a dense Matrix where all elements are stored in memory. + For matrices with more than MAXINT elements, or with many zeros, use the SparseMatrix + structure.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Field Details

      + +
      +
    • + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        VectorizedMatrix

        +
        public VectorizedMatrix(long rows, + long cols)
        +
        Generates a dense matrix with the designated number of rows and columns.
        +
        +
        Parameters:
        +
        rows - The number of rows.
        +
        cols - The number of columns.
        +
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        zeroCopy

        +
        public Matrix zeroCopy(long rows, + long cols)
        +
        Description copied from class: Matrix
        +
Creates a matrix of the same class with all elements set to zero, but with + a given number of rows and columns.
        +
        +
        Specified by:
        +
        zeroCopy in class Matrix
        +
        Parameters:
        +
        rows - The number of rows of the matrix.
        +
        cols - The number of columns of the matrix.
        +
        Returns:
        +
        A Matrix of the same class.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        put

        +
        public Tensor put(long pos, + double value)
        +
        Description copied from class: Tensor
        +
Assigns a value to a tensor element. All tensor operations use this function to wrap + element assignments.
        +
        +
        Specified by:
        +
        put in class Tensor
        +
        Parameters:
        +
        pos - The position of the tensor element
        +
        value - The value to assign
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
      • +
      • +
        +

        get

        +
        public double get(long pos)
        +
        Description copied from class: Tensor
        +
        Retrieves the value of a tensor element at a given position. All tensor operations use this function to wrap + element retrieval.
        +
        +
        Specified by:
        +
        get in class Tensor
        +
        Parameters:
        +
        pos - The position of the tensor element
        +
        Returns:
        +
        The value of the tensor element
        +
        +
        +
      • +
      • +
        +

        traverseNonZeroElements

        +
        public Iterator<Long> traverseNonZeroElements()
        +
        Description copied from class: Tensor
        +
Retrieves positions within the tensor that may hold non-zero elements. + This guarantees that all non-zero element positions are traversed, + but some of the returned positions could hold zero elements. + For example, DenseTensor traverses all + of its elements this way, whereas SparseTensor + traverses only non-zero elements.
        +
        +
        Specified by:
        +
        traverseNonZeroElements in class Tensor
        +
        Returns:
        +
        An iterator that traverses positions within the tensor.
        +
        +
        +
      • +
      • +
        +

        getNonZeroEntries

        +
        public Iterable<Map.Entry<Long,Long>> getNonZeroEntries()
        +
        Description copied from class: Matrix
        +
Retrieves an iterable that traverses (row, col) entry pairs + of non-zero entries.
        +
        +
        Specified by:
        +
        getNonZeroEntries in class Matrix
        +
        Returns:
        +
        An Entry iterable.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        release

        +
        public void release()
        +
        Description copied from class: Tensor
        +
If the subclassed tensor allows it, releases all memory it takes up + so that the garbage collector will eventually clean it up. This + memory will be released anyway by Java once there are no more + references to the object.
        +
        +
        Specified by:
        +
        release in class Tensor
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        persist

        +
        public void persist()
        +
        Description copied from class: Tensor
        +
If supported by the subclassed tensor, invalidates calls to + Tensor.release() so that memory is de-allocated only when + object references expire.
        +
        +
        Specified by:
        +
        persist in class Tensor
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        matmul

        +
        public Matrix matmul(Matrix with)
        +
        Description copied from class: Matrix
        +
Performs the matrix multiplication this*with and returns the outcome in a new matrix.
        +
        +
        Overrides:
        +
        matmul in class Matrix
        +
        Parameters:
        +
        with - The matrix to multiply with.
        +
        Returns:
        +
        A matrix that stores the outcome of the multiplication.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        matmul

        +
        public Matrix matmul(Matrix with, + boolean transposeThis, + boolean transposeWith)
        +
        Description copied from class: Matrix
        +
        Can be used to perform fast computation of the matrix multiplications +
        this*with, +
this.transposed()*with, +
        this*with.transposed(), +
        this.transposed()*with.transposed() +
while avoiding the overhead of calling + Matrix.transposed(). In the first of those cases, this operation + becomes equivalent to Matrix.matmul(Matrix).
        +
        +
        Overrides:
        +
        matmul in class Matrix
        +
        Parameters:
        +
        with - The matrix to multiply with.
        +
        transposeThis - Whether this matrix should be transposed before multiplication.
        +
        transposeWith - Whether the multiplied with matrix should be transposed before multiplication.
        +
        Returns:
        +
        A matrix that stores the outcome of the multiplication.
        +
        See Also:
        +
        + +
        +
        +
        +
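The transpose-flag optimization described above can be sketched in plain Java. This is an illustrative model, not JGNN's VectorizedMatrix implementation: instead of materializing a transposed copy, the flags swap which index addresses each operand inside the multiplication loop.

```java
// Sketch of matmul with transpose flags: rather than building a
// transposed copy, indices into each operand are swapped on the fly.
class MatmulSketch {
    static double[][] matmul(double[][] a, double[][] b,
                             boolean transposeA, boolean transposeB) {
        int n = transposeA ? a[0].length : a.length; // rows of the result
        int k = transposeA ? a.length : a[0].length; // shared dimension
        int m = transposeB ? b.length : b[0].length; // cols of the result
        double[][] out = new double[n][m];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < m; j++)
                for (int p = 0; p < k; p++)
                    out[i][j] += (transposeA ? a[p][i] : a[i][p])
                               * (transposeB ? b[j][p] : b[p][j]);
        return out;
    }

    public static void main(String[] args) {
        double[][] a = {{1, 2}, {3, 4}};
        double[][] b = {{5, 6}, {7, 8}};
        // a^T * b without ever allocating a^T
        System.out.println(matmul(a, b, true, false)[0][0]); // 1*5 + 3*7 = 26
    }
}
```

With both flags false the loop reduces to the plain `this*with` product, matching the equivalence to `matmul(Matrix)` noted above.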
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/matrix/WrapCols.html b/docs/javadoc/mklab/JGNN/core/matrix/WrapCols.html new file mode 100644 index 00000000..4d59bf91 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/matrix/WrapCols.html @@ -0,0 +1,458 @@ + + + + +WrapCols + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class WrapCols

    +
    +
    java.lang.Object +
    mklab.JGNN.core.Tensor +
    mklab.JGNN.core.Matrix +
    mklab.JGNN.core.matrix.WrapCols
    +
    +
    +
    +
    +
    +
    All Implemented Interfaces:
    +
    Iterable<Long>
    +
    +
    +
    public class WrapCols +extends Matrix
    +
Wraps a list of tensors into a matrix with the tensors as columns. + Does not allocate additional elements. Editing the matrix edits + the original tensors, and vice versa. +
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
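The column-wrapping behavior described above can be sketched as follows. This is a simplified, hypothetical model, not JGNN's WrapCols: the "matrix" keeps references to the wrapped column arrays, so no values are copied and edits write through to the originals.

```java
import java.util.List;

// Sketch of wrapping column vectors into a matrix view: only references
// to the original arrays are kept, so edits go through to the columns.
class WrapColsSketch {
    final List<double[]> cols;
    WrapColsSketch(List<double[]> cols) { this.cols = cols; }
    double get(int row, int col) { return cols.get(col)[row]; }
    void put(int row, int col, double v) { cols.get(col)[row] = v; }

    public static void main(String[] args) {
        double[] c0 = {1, 2}, c1 = {3, 4};
        WrapColsSketch m = new WrapColsSketch(List.of(c0, c1));
        System.out.println(m.get(1, 1)); // 4.0
        m.put(0, 0, 9);
        System.out.println(c0[0]);       // 9.0: the wrapped column changed
    }
}
```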
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        WrapCols

        +
        public WrapCols(Tensor... cols)
        +
        +
      • +
      • +
        +

        WrapCols

        +
        public WrapCols(List<Tensor> cols)
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        estimateNumNonZeroElements

        +
        public long estimateNumNonZeroElements()
        +
        Description copied from class: Tensor
        +
Provides an estimation of the number of non-zero elements stored in the tensor, + where this number is equal to the size for dense tensors, but equal to the actual + number of non-zero elements for sparse tensors. + This quantity is proportional to the allocated memory.
        +
        +
        Overrides:
        +
        estimateNumNonZeroElements in class Tensor
        +
        Returns:
        +
A long number equal to or less than the tensor size.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        setZeroCopyType

        +
        public WrapCols setZeroCopyType(Matrix zeroCopyType)
        +
        Sets a prototype matrix from which to borrow copying operations.
        +
        +
        Parameters:
        +
        zeroCopyType - A Matrix instance from which to borrow zeroCopy(long, long).
        +
        Returns:
        +
        this object
        +
        +
        +
      • +
      • +
        +

        zeroCopy

        +
        public Matrix zeroCopy(long rows, + long cols)
        +
        Description copied from class: Matrix
        +
Creates a matrix of the same class with all elements set to zero, but with + a given number of rows and columns.
        +
        +
        Specified by:
        +
        zeroCopy in class Matrix
        +
        Parameters:
        +
        rows - The number of rows of the matrix.
        +
        cols - The number of columns of the matrix.
        +
        Returns:
        +
        A Matrix of the same class.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        put

        +
        public Tensor put(long pos, + double value)
        +
        Description copied from class: Tensor
        +
Assigns a value to a tensor element. All tensor operations use this function to wrap + element assignments.
        +
        +
        Specified by:
        +
        put in class Tensor
        +
        Parameters:
        +
        pos - The position of the tensor element
        +
        value - The value to assign
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
      • +
      • +
        +

        get

        +
        public double get(long pos)
        +
        Description copied from class: Tensor
        +
        Retrieves the value of a tensor element at a given position. All tensor operations use this function to wrap + element retrieval.
        +
        +
        Specified by:
        +
        get in class Tensor
        +
        Parameters:
        +
        pos - The position of the tensor element
        +
        Returns:
        +
        The value of the tensor element
        +
        +
        +
      • +
      • +
        +

        traverseNonZeroElements

        +
        public Iterator<Long> traverseNonZeroElements()
        +
        Description copied from class: Tensor
        +
Retrieves positions within the tensor that may hold non-zero elements. This guarantees that all non-zero element positions are traversed, but some of the returned positions could hold zero elements. For example, DenseTensor traverses all of its elements this way, whereas SparseTensor indeed traverses only non-zero elements.
        +
        +
        Specified by:
        +
        traverseNonZeroElements in class Tensor
        +
        Returns:
        +
        An iterator that traverses positions within the tensor.
        +
        +
        +
      • +
      • +
        +

        getNonZeroEntries

        +
        public Iterable<Map.Entry<Long,Long>> getNonZeroEntries()
        +
        Description copied from class: Matrix
        +
Retrieves an iterable that traverses (row, col) entry pairs of non-zero entries.
        +
        +
        Specified by:
        +
        getNonZeroEntries in class Matrix
        +
        Returns:
        +
        An Entry iterable.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        accessCol

        +
        public Tensor accessCol(long col)
        +
        Description copied from class: Matrix
        +
        Retrieves the given column as a tensor. Editing the result + also edits the original matrix. + No new memory is allocated for matrix values.
        +
        +
        Overrides:
        +
        accessCol in class Matrix
        +
        Parameters:
        +
        col - The given column.
        +
        Returns:
        +
        An AccessCol of the corresponding column.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        release

        +
        public void release()
        +
        Description copied from class: Tensor
        +
        If the subclassed tensor allows it, release all memory it takes up + so that the garbage collector will eventually clean it up. This + memory will be released anyway by Java once there are no more + references to the object.
        +
        +
        Specified by:
        +
        release in class Tensor
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        persist

        +
        public void persist()
        +
        Description copied from class: Tensor
        +
If supported by the subclassed tensor, invalidates calls to Tensor.release() so that memory is de-allocated only when object references expire.
        +
        +
        Specified by:
        +
        persist in class Tensor
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/matrix/WrapRows.html b/docs/javadoc/mklab/JGNN/core/matrix/WrapRows.html new file mode 100644 index 00000000..8eaad533 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/matrix/WrapRows.html @@ -0,0 +1,458 @@ + + + + +WrapRows + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class WrapRows

    +
    +
    java.lang.Object +
    mklab.JGNN.core.Tensor +
    mklab.JGNN.core.Matrix +
    mklab.JGNN.core.matrix.WrapRows
    +
    +
    +
    +
    +
    +
    All Implemented Interfaces:
    +
    Iterable<Long>
    +
    +
    +
    public class WrapRows +extends Matrix
    +
Wraps a list of tensors into a matrix with the tensors as rows. Does not allocate additional elements. Editing the matrix edits the original tensors, and vice versa.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
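To illustrate the see-through semantics described above, here is a minimal sketch. It assumes the JGNN classes documented on this page (WrapRows, DenseTensor, and the Matrix/Tensor base classes) are on the classpath; only methods documented here are used.

```java
import mklab.JGNN.core.Matrix;
import mklab.JGNN.core.Tensor;
import mklab.JGNN.core.matrix.WrapRows;
import mklab.JGNN.core.tensor.DenseTensor;

public class WrapRowsExample {
    public static void main(String[] args) {
        // Two dense tensors that will serve as matrix rows; no values are copied.
        Tensor row0 = new DenseTensor(1, 2, 3);
        Tensor row1 = new DenseTensor(4, 5, 6);
        Matrix matrix = new WrapRows(row0, row1);
        // accessRow returns a see-through view: writing to it edits row1 in place.
        matrix.accessRow(1).put(2, 9);
        System.out.println(row1.get(2)); // row1 now holds 9 at position 2
    }
}
```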
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        WrapRows

        +
        public WrapRows(Tensor... rows)
        +
        +
      • +
      • +
        +

        WrapRows

        +
        public WrapRows(List<Tensor> rows)
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        estimateNumNonZeroElements

        +
        public long estimateNumNonZeroElements()
        +
        Description copied from class: Tensor
        +
Provides an estimate of the number of non-zero elements stored in the tensor. This number equals the size for dense tensors, but equals the actual number of non-zero elements for sparse tensors. Basically, this quantity is proportional to the allocated memory.
        +
        +
        Overrides:
        +
        estimateNumNonZeroElements in class Tensor
        +
        Returns:
        +
A long number less than or equal to the tensor size.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        setZeroCopyType

        +
        public WrapRows setZeroCopyType(Matrix zeroCopyType)
        +
        Sets a prototype matrix from which to borrow copying operations.
        +
        +
        Parameters:
        +
        zeroCopyType - A Matrix instance from which to borrow zeroCopy(long, long).
        +
        Returns:
        +
        this object
        +
        +
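Since a wrapper matrix has no natural backing type to allocate for zeroCopy, setZeroCopyType lets callers register a prototype whose allocation behavior is borrowed. A minimal sketch follows; it assumes a DenseMatrix(rows, cols) constructor, which is not shown on this page.

```java
import mklab.JGNN.core.Matrix;
import mklab.JGNN.core.matrix.DenseMatrix;
import mklab.JGNN.core.matrix.WrapRows;
import mklab.JGNN.core.tensor.DenseTensor;

public class ZeroCopyTypeExample {
    public static void main(String[] args) {
        // Register a DenseMatrix prototype so zeroCopy knows what to allocate.
        WrapRows wrapped = new WrapRows(new DenseTensor(1, 2), new DenseTensor(3, 4))
                .setZeroCopyType(new DenseMatrix(2, 2));
        // Yields a zero-filled matrix of the prototype's class.
        Matrix zeros = wrapped.zeroCopy(2, 2);
        System.out.println(zeros.get(0)); // elements start at zero
    }
}
```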
        +
      • +
      • +
        +

        zeroCopy

        +
        public Matrix zeroCopy(long rows, + long cols)
        +
        Description copied from class: Matrix
        +
        Creates a matrix of the same class and all element set to zero, but with + a given number of rows and columns.
        +
        +
        Specified by:
        +
        zeroCopy in class Matrix
        +
        Parameters:
        +
        rows - The number of rows of the matrix.
        +
        cols - The number of columns of the matrix.
        +
        Returns:
        +
        A Matrix of the same class.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        put

        +
        public Tensor put(long pos, + double value)
        +
        Description copied from class: Tensor
        +
        Assign a value to a tensor element. All tensor operations use this function to wrap + element assignments.
        +
        +
        Specified by:
        +
        put in class Tensor
        +
        Parameters:
        +
        pos - The position of the tensor element
        +
        value - The value to assign
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
      • +
      • +
        +

        get

        +
        public double get(long pos)
        +
        Description copied from class: Tensor
        +
        Retrieves the value of a tensor element at a given position. All tensor operations use this function to wrap + element retrieval.
        +
        +
        Specified by:
        +
        get in class Tensor
        +
        Parameters:
        +
        pos - The position of the tensor element
        +
        Returns:
        +
        The value of the tensor element
        +
        +
        +
      • +
      • +
        +

        traverseNonZeroElements

        +
        public Iterator<Long> traverseNonZeroElements()
        +
        Description copied from class: Tensor
        +
Retrieves positions within the tensor that may hold non-zero elements. This guarantees that all non-zero element positions are traversed, but some of the returned positions could hold zero elements. For example, DenseTensor traverses all of its elements this way, whereas SparseTensor indeed traverses only non-zero elements.
        +
        +
        Specified by:
        +
        traverseNonZeroElements in class Tensor
        +
        Returns:
        +
        An iterator that traverses positions within the tensor.
        +
        +
        +
      • +
      • +
        +

        getNonZeroEntries

        +
        public Iterable<Map.Entry<Long,Long>> getNonZeroEntries()
        +
        Description copied from class: Matrix
        +
Retrieves an iterable that traverses (row, col) entry pairs of non-zero entries.
        +
        +
        Specified by:
        +
        getNonZeroEntries in class Matrix
        +
        Returns:
        +
        An Entry iterable.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        accessRow

        +
        public Tensor accessRow(long row)
        +
        Description copied from class: Matrix
        +
        Retrieves the given row as a tensor. Editing the result + also edits the original matrix. + No new memory is allocated for matrix values.
        +
        +
        Overrides:
        +
        accessRow in class Matrix
        +
        Parameters:
        +
        row - The given row.
        +
        Returns:
        +
        An AccessRow instance of the corresponding row.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        release

        +
        public void release()
        +
        Description copied from class: Tensor
        +
        If the subclassed tensor allows it, release all memory it takes up + so that the garbage collector will eventually clean it up. This + memory will be released anyway by Java once there are no more + references to the object.
        +
        +
        Specified by:
        +
        release in class Tensor
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        persist

        +
        public void persist()
        +
        Description copied from class: Tensor
        +
If supported by the subclassed tensor, invalidates calls to Tensor.release() so that memory is de-allocated only when object references expire.
        +
        +
        Specified by:
        +
        persist in class Tensor
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/matrix/class-use/AccessCol.html b/docs/javadoc/mklab/JGNN/core/matrix/class-use/AccessCol.html new file mode 100644 index 00000000..9a90d33d --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/matrix/class-use/AccessCol.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.core.matrix.AccessCol + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.core.matrix.AccessCol

    +
    +No usage of mklab.JGNN.core.matrix.AccessCol
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/matrix/class-use/AccessRow.html b/docs/javadoc/mklab/JGNN/core/matrix/class-use/AccessRow.html new file mode 100644 index 00000000..c0713668 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/matrix/class-use/AccessRow.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.core.matrix.AccessRow + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.core.matrix.AccessRow

    +
    +No usage of mklab.JGNN.core.matrix.AccessRow
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/matrix/class-use/ColumnRepetition.html b/docs/javadoc/mklab/JGNN/core/matrix/class-use/ColumnRepetition.html new file mode 100644 index 00000000..ae15dfe9 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/matrix/class-use/ColumnRepetition.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.core.matrix.ColumnRepetition + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.core.matrix.ColumnRepetition

    +
    +No usage of mklab.JGNN.core.matrix.ColumnRepetition
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/matrix/class-use/DenseMatrix.html b/docs/javadoc/mklab/JGNN/core/matrix/class-use/DenseMatrix.html new file mode 100644 index 00000000..21ffef49 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/matrix/class-use/DenseMatrix.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.core.matrix.DenseMatrix + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.core.matrix.DenseMatrix

    +
    +No usage of mklab.JGNN.core.matrix.DenseMatrix
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/matrix/class-use/Diagonal.html b/docs/javadoc/mklab/JGNN/core/matrix/class-use/Diagonal.html new file mode 100644 index 00000000..f848dfd8 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/matrix/class-use/Diagonal.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.core.matrix.Diagonal + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.core.matrix.Diagonal

    +
    +No usage of mklab.JGNN.core.matrix.Diagonal
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/matrix/class-use/RepeatMatrix.html b/docs/javadoc/mklab/JGNN/core/matrix/class-use/RepeatMatrix.html new file mode 100644 index 00000000..4da2565c --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/matrix/class-use/RepeatMatrix.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.core.matrix.RepeatMatrix + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.core.matrix.RepeatMatrix

    +
    +No usage of mklab.JGNN.core.matrix.RepeatMatrix
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/matrix/class-use/RowRepetition.html b/docs/javadoc/mklab/JGNN/core/matrix/class-use/RowRepetition.html new file mode 100644 index 00000000..441bbda0 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/matrix/class-use/RowRepetition.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.core.matrix.RowRepetition + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.core.matrix.RowRepetition

    +
    +No usage of mklab.JGNN.core.matrix.RowRepetition
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/matrix/class-use/SparseMatrix.html b/docs/javadoc/mklab/JGNN/core/matrix/class-use/SparseMatrix.html new file mode 100644 index 00000000..3caf74f9 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/matrix/class-use/SparseMatrix.html @@ -0,0 +1,86 @@ + + + + +Uses of Class mklab.JGNN.core.matrix.SparseMatrix + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.core.matrix.SparseMatrix

    +
    +
    Packages that use SparseMatrix
    +
    +
    Package
    +
    Description
    + +
     
    +
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/matrix/class-use/SparseSymmetric.html b/docs/javadoc/mklab/JGNN/core/matrix/class-use/SparseSymmetric.html new file mode 100644 index 00000000..fab4171d --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/matrix/class-use/SparseSymmetric.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.core.matrix.SparseSymmetric + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.core.matrix.SparseSymmetric

    +
    +No usage of mklab.JGNN.core.matrix.SparseSymmetric
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/matrix/class-use/TransposedMatrix.html b/docs/javadoc/mklab/JGNN/core/matrix/class-use/TransposedMatrix.html new file mode 100644 index 00000000..70400140 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/matrix/class-use/TransposedMatrix.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.core.matrix.TransposedMatrix + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.core.matrix.TransposedMatrix

    +
    +No usage of mklab.JGNN.core.matrix.TransposedMatrix
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/matrix/class-use/VectorizedMatrix.html b/docs/javadoc/mklab/JGNN/core/matrix/class-use/VectorizedMatrix.html new file mode 100644 index 00000000..df7cf8cf --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/matrix/class-use/VectorizedMatrix.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.core.matrix.VectorizedMatrix + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.core.matrix.VectorizedMatrix

    +
    +No usage of mklab.JGNN.core.matrix.VectorizedMatrix
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/matrix/class-use/WrapCols.html b/docs/javadoc/mklab/JGNN/core/matrix/class-use/WrapCols.html new file mode 100644 index 00000000..43dc5477 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/matrix/class-use/WrapCols.html @@ -0,0 +1,104 @@ + + + + +Uses of Class mklab.JGNN.core.matrix.WrapCols + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.core.matrix.WrapCols

    +
    +
    Packages that use WrapCols
    +
    +
    Package
    +
    Description
    + +
     
    + +
     
    +
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/matrix/class-use/WrapRows.html b/docs/javadoc/mklab/JGNN/core/matrix/class-use/WrapRows.html new file mode 100644 index 00000000..dffa483f --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/matrix/class-use/WrapRows.html @@ -0,0 +1,104 @@ + + + + +Uses of Class mklab.JGNN.core.matrix.WrapRows + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.core.matrix.WrapRows

    +
    +
    Packages that use WrapRows
    +
    +
    Package
    +
    Description
    + +
     
    + +
     
    +
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/matrix/package-summary.html b/docs/javadoc/mklab/JGNN/core/matrix/package-summary.html new file mode 100644 index 00000000..d1906a46 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/matrix/package-summary.html @@ -0,0 +1,154 @@ + + + + +mklab.JGNN.core.matrix + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Package mklab.JGNN.core.matrix

    +
    +
    +
    package mklab.JGNN.core.matrix
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/matrix/package-tree.html b/docs/javadoc/mklab/JGNN/core/matrix/package-tree.html new file mode 100644 index 00000000..7ab9201b --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/matrix/package-tree.html @@ -0,0 +1,93 @@ + + + + +mklab.JGNN.core.matrix Class Hierarchy + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Hierarchy For Package mklab.JGNN.core.matrix

    +Package Hierarchies: + +
    +
    +

    Class Hierarchy

    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/matrix/package-use.html b/docs/javadoc/mklab/JGNN/core/matrix/package-use.html new file mode 100644 index 00000000..7b16a886 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/matrix/package-use.html @@ -0,0 +1,110 @@ + + + + +Uses of Package mklab.JGNN.core.matrix + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Package
    mklab.JGNN.core.matrix

    +
    +
    Packages that use mklab.JGNN.core.matrix
    +
    +
    Package
    +
    Description
    + +
     
    + +
     
    +
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/package-summary.html b/docs/javadoc/mklab/JGNN/core/package-summary.html new file mode 100644 index 00000000..08db3450 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/package-summary.html @@ -0,0 +1,137 @@ + + + + +mklab.JGNN.core + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Package mklab.JGNN.core

    +
    +
    +
    package mklab.JGNN.core
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/package-tree.html b/docs/javadoc/mklab/JGNN/core/package-tree.html new file mode 100644 index 00000000..54b214e7 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/package-tree.html @@ -0,0 +1,87 @@ + + + + +mklab.JGNN.core Class Hierarchy + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Hierarchy For Package mklab.JGNN.core

    +Package Hierarchies: + +
    +
    +

    Class Hierarchy

    + +
    +
    +

    Interface Hierarchy

    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/package-use.html b/docs/javadoc/mklab/JGNN/core/package-use.html new file mode 100644 index 00000000..caed193e --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/package-use.html @@ -0,0 +1,362 @@ + + + + +Uses of Package mklab.JGNN.core + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Package
    mklab.JGNN.core

    +
    +
    Packages that use mklab.JGNN.core
    + +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/tensor/AccessSubtensor.html b/docs/javadoc/mklab/JGNN/core/tensor/AccessSubtensor.html new file mode 100644 index 00000000..7e64b2c4 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/tensor/AccessSubtensor.html @@ -0,0 +1,364 @@ + + + + +AccessSubtensor + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class AccessSubtensor

    +
    +
    java.lang.Object +
    mklab.JGNN.core.Tensor +
    mklab.JGNN.core.tensor.AccessSubtensor
    +
    +
    +
    +
    +
    All Implemented Interfaces:
    +
    Iterable<Long>
    +
    +
    +
    public class AccessSubtensor +extends Tensor
    +
Wraps a base Tensor by traversing only its elements in a specified range (from begin, up to end-1). Although in principle it does not require a specific type of base tensor, it is created with optimized DenseTensor operations in mind; that is, it implements traverseNonZeroElements() as a Range. This class's Tensor.zeroCopy() is marked as unimplemented by throwing an exception, which also makes dependent operations fail. Members of this class are thus meant only to access (or modify) the subtensor.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
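The see-through range semantics can be sketched as follows, assuming the JGNN classes documented on this page are available; reads and writes on the subtensor go straight to the base tensor, offset by begin.

```java
import mklab.JGNN.core.Tensor;
import mklab.JGNN.core.tensor.AccessSubtensor;
import mklab.JGNN.core.tensor.DenseTensor;

public class AccessSubtensorExample {
    public static void main(String[] args) {
        Tensor base = new DenseTensor(10, 11, 12, 13, 14);
        // See-through view over base positions 1..3 (end is non-inclusive).
        Tensor sub = new AccessSubtensor(base, 1, 4);
        System.out.println(sub.get(0)); // reads base position 1, i.e. 11
        // Writing through the view edits the base tensor.
        sub.put(2, 99);
        System.out.println(base.get(3)); // base position 3 now holds 99
    }
}
```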
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        AccessSubtensor

        +
        public AccessSubtensor(Tensor baseTensor, + long begin)
        +
Instantiates a see-through access of a tensor's elements.
        +
        +
        Parameters:
        +
        baseTensor - The base tensor whose elements to access.
        +
        begin - The first element to access. (This is retrieved by get(0).)
        +
        +
        +
      • +
      • +
        +

        AccessSubtensor

        +
        public AccessSubtensor(Tensor baseTensor, + long begin, + long end)
        +
Instantiates a see-through access of a tensor's elements.
        +
        +
        Parameters:
        +
        baseTensor - The base tensor whose elements to access.
        +
        begin - The first element to access. (This is retrieved by get(0).)
        +
end - The non-inclusive last element. (The subtensor has size end-begin.)
        +
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        put

        +
        public Tensor put(long pos, + double value)
        +
        Description copied from class: Tensor
        +
        Assign a value to a tensor element. All tensor operations use this function to wrap + element assignments.
        +
        +
        Specified by:
        +
        put in class Tensor
        +
        Parameters:
        +
        pos - The position of the tensor element
        +
        value - The value to assign
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
      • +
      • +
        +

        get

        +
        public double get(long pos)
        +
        Description copied from class: Tensor
        +
        Retrieves the value of a tensor element at a given position. All tensor operations use this function to wrap + element retrieval.
        +
        +
        Specified by:
        +
        get in class Tensor
        +
        Parameters:
        +
        pos - The position of the tensor element
        +
        Returns:
        +
        The value of the tensor element
        +
        +
        +
      • +
      • +
        +

        zeroCopy

        +
        public Tensor zeroCopy(long size)
        +
        Description copied from class: Tensor
        +
Creates a tensor of the same class with a given size and all elements set to zero.
        +
        +
        Specified by:
        +
        zeroCopy in class Tensor
        +
        Parameters:
        +
        size - The size of the new tensor.
        +
        Returns:
        +
        A new tensor.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        traverseNonZeroElements

        +
        public Iterator<Long> traverseNonZeroElements()
        +
        Description copied from class: Tensor
        +
Retrieves positions within the tensor that may hold non-zero elements. This guarantees that all non-zero element positions are traversed, but some of the returned positions could hold zero elements. For example, DenseTensor traverses all of its elements this way, whereas SparseTensor indeed traverses only non-zero elements.
        +
        +
        Specified by:
        +
        traverseNonZeroElements in class Tensor
        +
        Returns:
        +
        An iterator that traverses positions within the tensor.
        +
        +
        +
      • +
      • +
        +

        release

        +
        public void release()
        +
        Description copied from class: Tensor
        +
        If the subclassed tensor allows it, release all memory it takes up + so that the garbage collector will eventually clean it up. This + memory will be released anyway by Java once there are no more + references to the object.
        +
        +
        Specified by:
        +
        release in class Tensor
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        persist

        +
        public void persist()
        +
        Description copied from class: Tensor
        +
If supported by the subclassed tensor, invalidates calls to Tensor.release() so that memory is de-allocated only when object references expire.
        +
        +
        Specified by:
        +
        persist in class Tensor
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/tensor/DenseTensor.html b/docs/javadoc/mklab/JGNN/core/tensor/DenseTensor.html new file mode 100644 index 00000000..af1c0b48 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/tensor/DenseTensor.html @@ -0,0 +1,855 @@ + + + + +DenseTensor + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class DenseTensor

    +
    +
    java.lang.Object +
    mklab.JGNN.core.Tensor +
    mklab.JGNN.core.tensor.DenseTensor
    +
    +
    +
    +
    +
    All Implemented Interfaces:
    +
    Iterable<Long>
    +
    +
    +
    public class DenseTensor +extends Tensor
    +
    This class provides a dense Tensor that wraps an array of doubles.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
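A minimal usage sketch, using only the constructors and methods documented on this page (the varargs constructor, put/get, and the String serialization constructor that reconstructs Tensor.toString() output):

```java
import mklab.JGNN.core.Tensor;
import mklab.JGNN.core.tensor.DenseTensor;

public class DenseTensorExample {
    public static void main(String[] args) {
        // A dense tensor wrapping the values {1, 2, 3}.
        Tensor tensor = new DenseTensor(1, 2, 3);
        tensor.put(0, 5);                  // element assignment
        System.out.println(tensor.get(0)); // element retrieval; now 5
        // Round-trip through the serialization constructor.
        Tensor copy = new DenseTensor(tensor.toString());
        System.out.println(copy.get(0));
    }
}
```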
    + +
    +
    +
      + +
    • +
      +

      Field Details

      +
        +
      • +
        +

        values

        +
        public double[] values
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        DenseTensor

        +
        public DenseTensor(Iterator<? extends Number> iterator)
        +
Constructs a dense tensor from an iterator that outputs its values. The tensor size equals the number of extracted values.
        +
        +
        Parameters:
        +
        iterator - The iterator to obtain values from.
        +
        +
        +
      • +
      • +
        +

        DenseTensor

        +
        public DenseTensor(double... values)
        +
        +
      • +
      • +
        +

        DenseTensor

        +
        public DenseTensor(long size)
        +
        Constructs a dense tensor holding zero values.
        +
        +
        Parameters:
        +
        size - The size of the tensor.
        +
        +
        +
      • +
      • +
        +

        DenseTensor

        +
        public DenseTensor(String expr)
        +
        Reconstructs a serialized Tensor (i.e. the outcome of Tensor.toString())
        +
        +
        Parameters:
        +
        expr - A serialized tensor
        +
        Throws:
        +
        IllegalArgumentException - If the serialization is null or empty.
        +
        +
        +
      • +
      • +
        +

        DenseTensor

        +
        public DenseTensor()
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        put

        +
        public final Tensor put(long pos, + double value)
        +
        Description copied from class: Tensor
        +
        Assign a value to a tensor element. All tensor operations use this function to wrap + element assignments.
        +
        +
        Specified by:
        +
        put in class Tensor
        +
        Parameters:
        +
        pos - The position of the tensor element
        +
        value - The value to assign
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
      • +
      • +
        +

        put

        +
        public final Tensor put(int pos, + double value)
        +
        +
      • +
      • +
        +

        putAdd

        +
        public final void putAdd(int pos, + double value)
        +
        +
      • +
      • +
        +

        get

        +
        public final double get(long pos)
        +
        Description copied from class: Tensor
        +
        Retrieves the value of a tensor element at a given position. All tensor operations use this function to wrap + element retrieval.
        +
        +
        Specified by:
        +
        get in class Tensor
        +
        Parameters:
        +
        pos - The position of the tensor element
        +
        Returns:
        +
        The value of the tensor element
        +
        +
        +
      • +
      • +
        +

        get

        +
        public final double get(int pos)
        +
        +
      • +
      • +
        +

        zeroCopy

        +
        public Tensor zeroCopy(long size)
        +
        Description copied from class: Tensor
        +
Creates a tensor of the same class with a given size and all elements set to zero.
        +
        +
        Specified by:
        +
        zeroCopy in class Tensor
        +
        Parameters:
        +
        size - The size of the new tensor.
        +
        Returns:
        +
        A new tensor.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        traverseNonZeroElements

        +
        public Iterator<Long> traverseNonZeroElements()
        +
        Description copied from class: Tensor
        +
Retrieves positions within the tensor that may hold non-zero elements. This guarantees that all non-zero element positions are traversed, but some of the returned positions could hold zero elements. For example, DenseTensor traverses all of its elements this way, whereas SparseTensor indeed traverses only non-zero elements.
        +
        +
        Specified by:
        +
        traverseNonZeroElements in class Tensor
        +
        Returns:
        +
        An iterator that traverses positions within the tensor.
        +
        +
        +
      • +
      • +
        +

        release

        +
        public void release()
        +
        Description copied from class: Tensor
        +
        If the subclassed tensor allows it, this method releases all memory the tensor takes up + so that the garbage collector will eventually clean it up. This + memory would be released anyway by Java once there are no more + references to the object.
        +
        +
        Specified by:
        +
        release in class Tensor
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        persist

        +
        public void persist()
        +
        Description copied from class: Tensor
        +
        If supported by the subclassed tensor, invalidates calls to + Tensor.release() so that memory is de-allocated only when + object references expire.
        +
        +
        Specified by:
        +
        persist in class Tensor
        +
        See Also:
        +
        + +
        +
        +
        +
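The release()/persist() pair documented above forms a small memory-management protocol. A hedged usage sketch (the method and variable names here are hypothetical; only persist() and release() come from the documentation above, and the code requires the JGNN jar on the classpath):

```java
// Hypothetical sketch of the release()/persist() pattern documented above.
import mklab.JGNN.core.Tensor;

public class MemoryPattern {
    static void process(Tensor t) {
        t.persist();   // invalidates release(): data survives until references expire
        // ... hand t to code that defensively calls release() ...
        t.release();   // now has no effect, per the persist() documentation
    }
}
```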
      • +
      • +
        +

        add

        +
        public Tensor add(Tensor tensor)
        +
        +
        Overrides:
        +
        add in class Tensor
        +
        Parameters:
        +
        tensor - The tensor to add with
        +
        Returns:
        +
        a new Tensor that stores the outcome of addition
        +
        +
        +
      • +
      • +
        +

        add

        +
        public Tensor add(double value)
        +
        +
        Overrides:
        +
        add in class Tensor
        +
        Parameters:
        +
        value - The value to add to each element
        +
        Returns:
        +
        a new Tensor that stores the outcome of addition
        +
        +
        +
      • +
      • +
        +

        selfAdd

        +
        public Tensor selfAdd(Tensor tensor)
        +
        Description copied from class: Tensor
        +
        Performs in-memory addition to the Tensor, storing the result in itself.
        +
        +
        Overrides:
        +
        selfAdd in class Tensor
        +
        Parameters:
        +
        tensor - The tensor to add (it's not affected).
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
      • +
      • +
        +

        selfAdd

        +
        public Tensor selfAdd(double value)
        +
        Description copied from class: Tensor
        +
        Performs in-memory addition to the Tensor, storing the result in itself.
        +
        +
        Overrides:
        +
        selfAdd in class Tensor
        +
        Parameters:
        +
        value - The value to add to each tensor element.
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
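The add/selfAdd pairs above illustrate the library's convention of offering both out-of-place operations (which return a new tensor) and in-place self* variants (which mutate and return the receiver). A hedged sketch, assuming a concrete tensor class such as DenseTensor with a varargs constructor like the VectorizedTensor one documented elsewhere in this Javadoc (this example needs the JGNN jar to compile):

```java
// Hypothetical usage sketch; the DenseTensor varargs constructor is assumed.
import mklab.JGNN.core.Tensor;
import mklab.JGNN.core.tensor.DenseTensor;

public class AddVsSelfAdd {
    public static void main(String[] args) {
        Tensor a = new DenseTensor(1.0, 2.0, 3.0);
        Tensor b = new DenseTensor(0.5, 0.5, 0.5);
        Tensor sum = a.add(b);   // allocates a new tensor; a is unchanged
        a.selfAdd(b);            // in place; a now holds the sum, no allocation
    }
}
```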
      • +
      • +
        +

        subtract

        +
        public Tensor subtract(Tensor tensor)
        +
        +
        Overrides:
        +
        subtract in class Tensor
        +
        Parameters:
        +
        tensor - The tensor to subtract
        +
        Returns:
        +
        a new Tensor that stores the outcome of subtraction
        +
        +
        +
      • +
      • +
        +

        selfSubtract

        +
        public Tensor selfSubtract(Tensor tensor)
        +
        Description copied from class: Tensor
        +
        Performs in-memory subtraction from the Tensor, storing the result in itself.
        +
        +
        Overrides:
        +
        selfSubtract in class Tensor
        +
        Parameters:
        +
        tensor - The tensor to subtract (it's not affected).
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
      • +
      • +
        +

        multiply

        +
        public Tensor multiply(Tensor tensor)
        +
        +
        Overrides:
        +
        multiply in class Tensor
        +
        Parameters:
        +
        tensor - The tensor to perform element-wise multiplication with.
        +
        Returns:
        +
        A new Tensor that stores the outcome of the multiplication.
        +
        +
        +
      • +
      • +
        +

        multiply

        +
        public Tensor multiply(double value)
        +
        +
        Overrides:
        +
        multiply in class Tensor
        +
        Parameters:
        +
        value - A number to multiply all tensor elements with.
        +
        Returns:
        +
        A new Tensor that stores the outcome of the multiplication.
        +
        +
        +
      • +
      • +
        +

        selfMultiply

        +
        public Tensor selfMultiply(Tensor tensor)
        +
        Description copied from class: Tensor
        +
        Performs in-memory multiplication on the Tensor, storing the result in itself.
        +
        +
        Overrides:
        +
        selfMultiply in class Tensor
        +
        Parameters:
        +
        tensor - The tensor to perform element-wise multiplication with (it's not affected).
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
      • +
      • +
        +

        selfMultiply

        +
        public Tensor selfMultiply(double value)
        +
        Description copied from class: Tensor
        +
        Performs in-memory multiplication on the Tensor, storing the result in itself.
        +
        +
        Overrides:
        +
        selfMultiply in class Tensor
        +
        Parameters:
        +
        value - A number to multiply all tensor elements with.
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
      • +
      • +
        +

        sqrt

        +
        public Tensor sqrt()
        +
        Description copied from class: Tensor
        +
        Computes the square root of tensor elements.
        +
        +
        Overrides:
        +
        sqrt in class Tensor
        +
        Returns:
        +
        A new Tensor that stores the outcome of finding the square root of the absolute value of each element.
        +
        +
        +
      • +
      • +
        +

        selfSqrt

        +
        public Tensor selfSqrt()
        +
        Description copied from class: Tensor
        +
        Sets each element, in place, to the square root of its absolute value.
        +
        +
        Overrides:
        +
        selfSqrt in class Tensor
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
      • +
      • +
        +

        expMinusOne

        +
        public Tensor expMinusOne()
        +
        Description copied from class: Tensor
        +
        Computes the exponential minus 1 of tensor elements.
        +
        +
        Overrides:
        +
        expMinusOne in class Tensor
        +
        Returns:
        +
        A new Tensor that stores the outcome of finding the operation on each element.
        +
        +
        +
      • +
      • +
        +

        selfExpMinusOne

        +
        public Tensor selfExpMinusOne()
        +
        Description copied from class: Tensor
        +
        Sets each element, in place, to its exponential minus 1.
        +
        +
        Overrides:
        +
        selfExpMinusOne in class Tensor
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
      • +
      • +
        +

        log

        +
        public Tensor log()
        +
        Description copied from class: Tensor
        +
        Computes the logarithm of tensor elements.
        +
        +
        Overrides:
        +
        log in class Tensor
        +
        Returns:
        +
        A new Tensor that stores the outcome of finding the logarithm of the absolute value of each element.
        +
        +
        +
      • +
      • +
        +

        selfLog

        +
        public Tensor selfLog()
        +
        Description copied from class: Tensor
        +
        Sets each element, in place, to the logarithm of its absolute value.
        +
        +
        Overrides:
        +
        selfLog in class Tensor
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
      • +
      • +
        +

        negative

        +
        public Tensor negative()
        +
        Description copied from class: Tensor
        +
        Computes the negative of tensor elements.
        +
        +
        Overrides:
        +
        negative in class Tensor
        +
        Returns:
        +
        A new Tensor that stores the outcome of finding the negative of each element.
        +
        +
        +
      • +
      • +
        +

        selfNegative

        +
        public Tensor selfNegative()
        +
        Description copied from class: Tensor
        +
        Sets each element, in place, to its negative.
        +
        +
        Overrides:
        +
        selfNegative in class Tensor
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
      • +
      • +
        +

        abs

        +
        public Tensor abs()
        +
        Description copied from class: Tensor
        +
        Computes the absolute value of tensor elements.
        +
        +
        Overrides:
        +
        abs in class Tensor
        +
        Returns:
        +
        A new Tensor that stores the outcome of finding the absolute value of each element.
        +
        +
        +
      • +
      • +
        +

        selfAbs

        +
        public Tensor selfAbs()
        +
        Description copied from class: Tensor
        +
        Sets each element, in place, to its absolute value.
        +
        +
        Overrides:
        +
        selfAbs in class Tensor
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
      • +
      • +
        +

        inverse

        +
        public Tensor inverse()
        +
        +
        Overrides:
        +
        inverse in class Tensor
        +
        Returns:
        +
        A new Tensor in which each non-zero element is inverted.
        +
        +
        +
      • +
      • +
        +

        selfInverse

        +
        public Tensor selfInverse()
        +
        Description copied from class: Tensor
        +
        Performs in-memory inversion of each non-zero element.
        +
        +
        Overrides:
        +
        selfInverse in class Tensor
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/tensor/RepeatTensor.html b/docs/javadoc/mklab/JGNN/core/tensor/RepeatTensor.html new file mode 100644 index 00000000..350303bd --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/tensor/RepeatTensor.html @@ -0,0 +1,333 @@ + + + + +RepeatTensor + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class RepeatTensor

    +
    +
    java.lang.Object +
    mklab.JGNN.core.Tensor +
    mklab.JGNN.core.tensor.RepeatTensor
    +
    +
    +
    +
    +
    All Implemented Interfaces:
    +
    Iterable<Long>
    +
    +
    +
    public class RepeatTensor +extends Tensor
    +
    This class provides a Tensor whose elements are all equal. + Since per-element assignment is not meaningful for such a tensor, its put(long, double) operation is unsupported and throws a corresponding exception. + Otherwise, instances of this class behave similarly to DenseTensor but permanently allocate only O(1) memory.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
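Per the description above, RepeatTensor is useful wherever a constant-valued tensor is needed without paying for its length in memory. A hedged sketch using only the constructor and methods documented on this page (requires the JGNN jar):

```java
// Usage sketch for RepeatTensor based on the constructor documented below.
import mklab.JGNN.core.Tensor;
import mklab.JGNN.core.tensor.RepeatTensor;

public class RepeatTensorExample {
    public static void main(String[] args) {
        Tensor ones = new RepeatTensor(1.0, 1_000_000); // O(1) memory, 1M elements
        double v = ones.get(123);                       // every position yields 1.0
        // ones.put(0, 2.0);  // would throw: put is unsupported for this class
    }
}
```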
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        RepeatTensor

        +
        public RepeatTensor(double value, + long length)
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        put

        +
        public final Tensor put(long pos, + double value)
        +
        Description copied from class: Tensor
        +
        Assign a value to a tensor element. All tensor operations use this function to wrap + element assignments.
        +
        +
        Specified by:
        +
        put in class Tensor
        +
        Parameters:
        +
        pos - The position of the tensor element
        +
        value - The value to assign
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
      • +
      • +
        +

        get

        +
        public final double get(long pos)
        +
        Description copied from class: Tensor
        +
        Retrieves the value of a tensor element at a given position. All tensor operations use this function to wrap + element retrieval.
        +
        +
        Specified by:
        +
        get in class Tensor
        +
        Parameters:
        +
        pos - The position of the tensor element
        +
        Returns:
        +
        The value of the tensor element
        +
        +
        +
      • +
      • +
        +

        zeroCopy

        +
        public Tensor zeroCopy(long size)
        +
        Description copied from class: Tensor
        +
        Creates a tensor of the same class with a given size and all elements set to zero.
        +
        +
        Specified by:
        +
        zeroCopy in class Tensor
        +
        Parameters:
        +
        size - The size of the new tensor.
        +
        Returns:
        +
        A new tensor.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        traverseNonZeroElements

        +
        public Iterator<Long> traverseNonZeroElements()
        +
        Description copied from class: Tensor
        +
        Retrieves positions within the tensor that may hold non-zero elements. + This guarantees that all non-zero element positions are traversed, + but some of the returned positions could hold zero elements. + For example, DenseTensor traverses all + of its elements this way, whereas SparseTensor + traverses only its non-zero elements.
        +
        +
        Specified by:
        +
        traverseNonZeroElements in class Tensor
        +
        Returns:
        +
        An iterator that traverses positions within the tensor.
        +
        +
        +
      • +
      • +
        +

        release

        +
        public void release()
        +
        Description copied from class: Tensor
        +
        If the subclassed tensor allows it, this method releases all memory the tensor takes up + so that the garbage collector will eventually clean it up. This + memory would be released anyway by Java once there are no more + references to the object.
        +
        +
        Specified by:
        +
        release in class Tensor
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        persist

        +
        public void persist()
        +
        Description copied from class: Tensor
        +
        If supported by the subclassed tensor, invalidates calls to + Tensor.release() so that memory is de-allocated only when + object references expire.
        +
        +
        Specified by:
        +
        persist in class Tensor
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/tensor/SparseTensor.html b/docs/javadoc/mklab/JGNN/core/tensor/SparseTensor.html new file mode 100644 index 00000000..1b97d43c --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/tensor/SparseTensor.html @@ -0,0 +1,373 @@ + + + + +SparseTensor + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class SparseTensor

    +
    +
    java.lang.Object +
    mklab.JGNN.core.Tensor +
    mklab.JGNN.core.tensor.SparseTensor
    +
    +
    +
    +
    +
    All Implemented Interfaces:
    +
    Iterable<Long>
    +
    +
    +
    public class SparseTensor +extends Tensor
    +
    This class provides a sparse Tensor with many zero elements. + Critically, it implements a traverseNonZeroElements() method + that provides the positions of only non-zero elements to speed up computations. + + Speed-ups are expected mostly for operations between sparse tensors, + when sparse tensors are added to or subtracted from dense ones, and when + sparse tensors are multiplied with dense ones.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
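Since Tensor implements Iterable&lt;Long&gt; and SparseTensor's iterator visits only stored positions, iterating a sparse tensor skips its zeros entirely. A hedged sketch using the constructor and methods documented on this page (requires the JGNN jar):

```java
// Usage sketch for SparseTensor based on the methods documented below.
import mklab.JGNN.core.tensor.SparseTensor;

public class SparseTensorExample {
    public static void main(String[] args) {
        SparseTensor t = new SparseTensor(1_000_000);
        t.put(42, 3.14).put(99_999, -1.0);   // put returns the tensor, so calls chain
        // Visits only the two stored positions, not all 1M entries.
        for (long pos : t)
            System.out.println(pos + " -> " + t.get(pos));
    }
}
```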
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        SparseTensor

        +
        public SparseTensor(long length)
        +
        +
      • +
      • +
        +

        SparseTensor

        +
        public SparseTensor()
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        put

        +
        public final Tensor put(long pos, + double value)
        +
        Description copied from class: Tensor
        +
        Assign a value to a tensor element. All tensor operations use this function to wrap + element assignments.
        +
        +
        Specified by:
        +
        put in class Tensor
        +
        Parameters:
        +
        pos - The position of the tensor element
        +
        value - The value to assign
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
      • +
      • +
        +

        get

        +
        public final double get(long pos)
        +
        Description copied from class: Tensor
        +
        Retrieves the value of a tensor element at a given position. All tensor operations use this function to wrap + element retrieval.
        +
        +
        Specified by:
        +
        get in class Tensor
        +
        Parameters:
        +
        pos - The position of the tensor element
        +
        Returns:
        +
        The value of the tensor element
        +
        +
        +
      • +
      • +
        +

        zeroCopy

        +
        public Tensor zeroCopy(long size)
        +
        Description copied from class: Tensor
        +
        Creates a tensor of the same class with a given size and all elements set to zero.
        +
        +
        Specified by:
        +
        zeroCopy in class Tensor
        +
        Parameters:
        +
        size - The size of the new tensor.
        +
        Returns:
        +
        A new tensor.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        traverseNonZeroElements

        +
        public Iterator<Long> traverseNonZeroElements()
        +
        Description copied from class: Tensor
        +
        Retrieves positions within the tensor that may hold non-zero elements. + This guarantees that all non-zero element positions are traversed, + but some of the returned positions could hold zero elements. + For example, DenseTensor traverses all + of its elements this way, whereas SparseTensor + traverses only its non-zero elements.
        +
        +
        Specified by:
        +
        traverseNonZeroElements in class Tensor
        +
        Returns:
        +
        An iterator that traverses positions within the tensor.
        +
        +
        +
      • +
      • +
        +

        estimateNumNonZeroElements

        +
        public long estimateNumNonZeroElements()
        +
        Description copied from class: Tensor
        +
        Provides an estimate of the number of non-zero elements stored in the tensor; + this number equals the tensor's size for dense tensors, but the actual + number of non-zero elements for sparse tensors. + Basically, this quantity is proportional to the allocated memory.
        +
        +
        Overrides:
        +
        estimateNumNonZeroElements in class Tensor
        +
        Returns:
        +
        A long number less than or equal to the tensor size.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        release

        +
        public void release()
        +
        Description copied from class: Tensor
        +
        If the subclassed tensor allows it, this method releases all memory the tensor takes up + so that the garbage collector will eventually clean it up. This + memory would be released anyway by Java once there are no more + references to the object.
        +
        +
        Specified by:
        +
        release in class Tensor
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        persist

        +
        public void persist()
        +
        Description copied from class: Tensor
        +
        If supported by the subclassed tensor, invalidates calls to + Tensor.release() so that memory is de-allocated only when + object references expire.
        +
        +
        Specified by:
        +
        persist in class Tensor
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/tensor/VectorizedTensor.html b/docs/javadoc/mklab/JGNN/core/tensor/VectorizedTensor.html new file mode 100644 index 00000000..a03b85a8 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/tensor/VectorizedTensor.html @@ -0,0 +1,600 @@ + + + + +VectorizedTensor + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class VectorizedTensor

    +
    +
    java.lang.Object +
    mklab.JGNN.core.Tensor +
    mklab.JGNN.core.tensor.VectorizedTensor
    +
    +
    +
    +
    +
    All Implemented Interfaces:
    +
    Iterable<Long>
    +
    +
    +
    public class VectorizedTensor +extends Tensor
    +
    This class provides a dense Tensor that wraps an array of doubles and accelerates arithmetic with Java's Vector API (SIMD).
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
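Given the VectorSpecies field below, this class appears to use the incubating Java Vector API, so running code that touches it may require adding the jdk.incubator.vector module. A hedged sketch using the constructors and methods documented on this page (requires the JGNN jar; the module flag is an assumption):

```java
// Usage sketch for VectorizedTensor using the constructors documented below.
// May need: java --add-modules jdk.incubator.vector ...
import mklab.JGNN.core.Tensor;
import mklab.JGNN.core.tensor.VectorizedTensor;

public class VectorizedExample {
    public static void main(String[] args) {
        Tensor a = new VectorizedTensor(1.0, 2.0, 3.0, 4.0);
        Tensor b = new VectorizedTensor(4);     // four zero elements
        Tensor c = a.multiply(2.0).selfAdd(b);  // arithmetic on the wrapped array
    }
}
```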
    +
    + +
    +
    +
      + +
    • +
      +

      Field Details

      +
        +
      • +
        +

        values

        +
        public double[] values
        +
        +
      • +
      • +
        +

        SPECIES

        +
        public static final VectorSpecies<Double> SPECIES
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        VectorizedTensor

        +
        public VectorizedTensor(Iterator<? extends Number> iterator)
        +
        Constructs a dense tensor from an iterator that outputs its values. + The tensor's size equals the number of extracted values.
        +
        +
        Parameters:
        +
        iterator - The iterator to obtain values from.
        +
        +
        +
      • +
      • +
        +

        VectorizedTensor

        +
        public VectorizedTensor(double... values)
        +
        +
      • +
      • +
        +

        VectorizedTensor

        +
        public VectorizedTensor(long size)
        +
        Constructs a dense tensor holding zero values.
        +
        +
        Parameters:
        +
        size - The size of the tensor.
        +
        +
        +
      • +
      • +
        +

        VectorizedTensor

        +
        public VectorizedTensor(String expr)
        +
        Reconstructs a serialized Tensor (i.e. the outcome of Tensor.toString())
        +
        +
        Parameters:
        +
        expr - A serialized tensor
        +
        Throws:
        +
        IllegalArgumentException - If the serialization is null or empty.
        +
        +
        +
      • +
      • +
        +

        VectorizedTensor

        +
        public VectorizedTensor()
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        put

        +
        public final Tensor put(long pos, + double value)
        +
        Description copied from class: Tensor
        +
        Assign a value to a tensor element. All tensor operations use this function to wrap + element assignments.
        +
        +
        Specified by:
        +
        put in class Tensor
        +
        Parameters:
        +
        pos - The position of the tensor element
        +
        value - The value to assign
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
      • +
      • +
        +

        put

        +
        public final Tensor put(int pos, + double value)
        +
        +
      • +
      • +
        +

        putAdd

        +
        public final void putAdd(int pos, + double value)
        +
        +
      • +
      • +
        +

        get

        +
        public final double get(long pos)
        +
        Description copied from class: Tensor
        +
        Retrieves the value of a tensor element at a given position. All tensor operations use this function to wrap + element retrieval.
        +
        +
        Specified by:
        +
        get in class Tensor
        +
        Parameters:
        +
        pos - The position of the tensor element
        +
        Returns:
        +
        The value of the tensor element
        +
        +
        +
      • +
      • +
        +

        get

        +
        public final double get(int pos)
        +
        +
      • +
      • +
        +

        zeroCopy

        +
        public Tensor zeroCopy(long size)
        +
        Description copied from class: Tensor
        +
        Creates a tensor of the same class with a given size and all elements set to zero.
        +
        +
        Specified by:
        +
        zeroCopy in class Tensor
        +
        Parameters:
        +
        size - The size of the new tensor.
        +
        Returns:
        +
        A new tensor.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        traverseNonZeroElements

        +
        public Iterator<Long> traverseNonZeroElements()
        +
        Description copied from class: Tensor
        +
        Retrieves positions within the tensor that may hold non-zero elements. + This guarantees that all non-zero element positions are traversed, + but some of the returned positions could hold zero elements. + For example, DenseTensor traverses all + of its elements this way, whereas SparseTensor + traverses only its non-zero elements.
        +
        +
        Specified by:
        +
        traverseNonZeroElements in class Tensor
        +
        Returns:
        +
        An iterator that traverses positions within the tensor.
        +
        +
        +
      • +
      • +
        +

        release

        +
        public void release()
        +
        Description copied from class: Tensor
        +
        If the subclassed tensor allows it, this method releases all memory the tensor takes up + so that the garbage collector will eventually clean it up. This + memory would be released anyway by Java once there are no more + references to the object.
        +
        +
        Specified by:
        +
        release in class Tensor
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        persist

        +
        public void persist()
        +
        Description copied from class: Tensor
        +
        If supported by the subclassed tensor, invalidates calls to + Tensor.release() so that memory is de-allocated only when + object references expire.
        +
        +
        Specified by:
        +
        persist in class Tensor
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        add

        +
        public Tensor add(Tensor tensor)
        +
        +
        Overrides:
        +
        add in class Tensor
        +
        Parameters:
        +
        tensor - The tensor to add with
        +
        Returns:
        +
        a new Tensor that stores the outcome of addition
        +
        +
        +
      • +
      • +
        +

        selfAdd

        +
        public Tensor selfAdd(Tensor tensor)
        +
        Description copied from class: Tensor
        +
        Performs in-memory addition to the Tensor, storing the result in itself.
        +
        +
        Overrides:
        +
        selfAdd in class Tensor
        +
        Parameters:
        +
        tensor - The tensor to add (it's not affected).
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
      • +
      • +
        +

        subtract

        +
        public Tensor subtract(Tensor tensor)
        +
        +
        Overrides:
        +
        subtract in class Tensor
        +
        Parameters:
        +
        tensor - The tensor to subtract
        +
        Returns:
        +
        a new Tensor that stores the outcome of subtraction
        +
        +
        +
      • +
      • +
        +

        selfSubtract

        +
        public Tensor selfSubtract(Tensor tensor)
        +
        Description copied from class: Tensor
        +
        Performs in-memory subtraction from the Tensor, storing the result in itself.
        +
        +
        Overrides:
        +
        selfSubtract in class Tensor
        +
        Parameters:
        +
        tensor - The tensor to subtract (it's not affected).
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
      • +
      • +
        +

        multiply

        +
        public Tensor multiply(Tensor tensor)
        +
        +
        Overrides:
        +
        multiply in class Tensor
        +
        Parameters:
        +
        tensor - The tensor to perform element-wise multiplication with.
        +
        Returns:
        +
        A new Tensor that stores the outcome of the multiplication.
        +
        +
        +
      • +
      • +
        +

        selfMultiply

        +
        public Tensor selfMultiply(Tensor tensor)
        +
        Description copied from class: Tensor
        +
        Performs in-memory multiplication on the Tensor, storing the result in itself.
        +
        +
        Overrides:
        +
        selfMultiply in class Tensor
        +
        Parameters:
        +
        tensor - The tensor to perform element-wise multiplication with (it's not affected).
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
      • +
      • +
        +

        multiply

        +
        public Tensor multiply(double value)
        +
        +
        Overrides:
        +
        multiply in class Tensor
        +
        Parameters:
        +
        value - A number to multiply all tensor elements with.
        +
        Returns:
        +
        A new Tensor that stores the outcome of the multiplication.
        +
        +
        +
      • +
      • +
        +

        selfMultiply

        +
        public Tensor selfMultiply(double value)
        +
        Description copied from class: Tensor
        +
        Performs in-memory multiplication on the Tensor, storing the result in itself.
        +
        +
        Overrides:
        +
        selfMultiply in class Tensor
        +
        Parameters:
        +
        value - A number to multiply all tensor elements with.
        +
        Returns:
        +
        this Tensor instance.
        +
        +
        +
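The distinction documented above recurs across the tensor API: multiply allocates and returns a new tensor, whereas selfMultiply mutates the receiver and returns it for chaining. A minimal plain-Java sketch of these two semantics on double arrays (the DoubleOps class and its methods are illustrative, not JGNN code):

```java
// Illustrative sketch of new-result vs in-place element-wise multiplication,
// mirroring the multiply / selfMultiply naming convention. Not JGNN source code.
class DoubleOps {
    // Returns a NEW array holding the element-wise product (like multiply).
    public static double[] multiply(double[] a, double[] b) {
        double[] out = new double[a.length];
        for (int i = 0; i < a.length; i++)
            out[i] = a[i] * b[i];
        return out;
    }

    // Multiplies in place and returns the receiver (like selfMultiply).
    public static double[] selfMultiply(double[] a, double[] b) {
        for (int i = 0; i < a.length; i++)
            a[i] *= b[i];
        return a; // same instance, enabling call chaining
    }
}
```

The in-place variant avoids an allocation per operation, which is why chained `self*` calls are preferred in tight training loops.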
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/tensor/class-use/AccessSubtensor.html b/docs/javadoc/mklab/JGNN/core/tensor/class-use/AccessSubtensor.html new file mode 100644 index 00000000..e86ba309 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/tensor/class-use/AccessSubtensor.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.core.tensor.AccessSubtensor + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.core.tensor.AccessSubtensor

    +
    +No usage of mklab.JGNN.core.tensor.AccessSubtensor
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/tensor/class-use/DenseTensor.html b/docs/javadoc/mklab/JGNN/core/tensor/class-use/DenseTensor.html new file mode 100644 index 00000000..727d47bb --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/tensor/class-use/DenseTensor.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.core.tensor.DenseTensor + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.core.tensor.DenseTensor

    +
    +No usage of mklab.JGNN.core.tensor.DenseTensor
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/tensor/class-use/RepeatTensor.html b/docs/javadoc/mklab/JGNN/core/tensor/class-use/RepeatTensor.html new file mode 100644 index 00000000..60bee9f2 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/tensor/class-use/RepeatTensor.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.core.tensor.RepeatTensor + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.core.tensor.RepeatTensor

    +
    +No usage of mklab.JGNN.core.tensor.RepeatTensor
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/tensor/class-use/SparseTensor.html b/docs/javadoc/mklab/JGNN/core/tensor/class-use/SparseTensor.html new file mode 100644 index 00000000..8b9aeda9 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/tensor/class-use/SparseTensor.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.core.tensor.SparseTensor + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.core.tensor.SparseTensor

    +
    +No usage of mklab.JGNN.core.tensor.SparseTensor
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/tensor/class-use/VectorizedTensor.html b/docs/javadoc/mklab/JGNN/core/tensor/class-use/VectorizedTensor.html new file mode 100644 index 00000000..3ddfca96 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/tensor/class-use/VectorizedTensor.html @@ -0,0 +1,84 @@ + + + + +Uses of Class mklab.JGNN.core.tensor.VectorizedTensor + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.core.tensor.VectorizedTensor

    +
    +
    Packages that use VectorizedTensor
    +
    +
    Package
    +
    Description
    + +
     
    +
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/tensor/package-summary.html b/docs/javadoc/mklab/JGNN/core/tensor/package-summary.html new file mode 100644 index 00000000..5daa3c18 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/tensor/package-summary.html @@ -0,0 +1,121 @@ + + + + +mklab.JGNN.core.tensor + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Package mklab.JGNN.core.tensor

    +
    +
    +
    package mklab.JGNN.core.tensor
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/tensor/package-tree.html b/docs/javadoc/mklab/JGNN/core/tensor/package-tree.html new file mode 100644 index 00000000..62bb13a9 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/tensor/package-tree.html @@ -0,0 +1,81 @@ + + + + +mklab.JGNN.core.tensor Class Hierarchy + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Hierarchy For Package mklab.JGNN.core.tensor

    +Package Hierarchies: + +
    +
    +

    Class Hierarchy

    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/tensor/package-use.html b/docs/javadoc/mklab/JGNN/core/tensor/package-use.html new file mode 100644 index 00000000..1eb3c150 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/tensor/package-use.html @@ -0,0 +1,83 @@ + + + + +Uses of Package mklab.JGNN.core.tensor + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Package
    mklab.JGNN.core.tensor

    +
    +
    Packages that use mklab.JGNN.core.tensor
    +
    +
    Package
    +
    Description
    + +
     
    +
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/util/FastEntry.html b/docs/javadoc/mklab/JGNN/core/util/FastEntry.html new file mode 100644 index 00000000..11eca642 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/util/FastEntry.html @@ -0,0 +1,230 @@ + + + + +FastEntry + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class FastEntry<K,V>

    +
    +
    java.lang.Object +
    mklab.JGNN.core.util.FastEntry<K,V>
    +
    +
    +
    +
    All Implemented Interfaces:
    +
    Map.Entry<K,V>
    +
    +
    +
    public class FastEntry<K,V> +extends Object +implements Map.Entry<K,V>
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        FastEntry

        +
        public FastEntry()
        +
        +
      • +
      • +
        +

        FastEntry

        +
        public FastEntry(K key, + V value)
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        getKey

        +
        public K getKey()
        +
        +
        Specified by:
        +
        getKey in interface Map.Entry<K,V>
        +
        +
        +
      • +
      • +
        +

        getValue

        +
        public V getValue()
        +
        +
        Specified by:
        +
        getValue in interface Map.Entry<K,V>
        +
        +
        +
      • +
      • +
        +

        setValue

        +
        public V setValue(V value)
        +
        +
        Specified by:
        +
        setValue in interface Map.Entry<K,V>
        +
        +
        +
      • +
      • +
        +

        setKey

        +
        public void setKey(K key)
        +
        +
      • +
      • +
        +

        toString

        +
        public String toString()
        +
        +
        Overrides:
        +
        toString in class Object
        +
        +
        +
      • +
      +
      +
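FastEntry is a mutable Map.Entry whose key can also be reassigned via setKey, which lets an iterator reuse a single entry object instead of allocating one per element. A plain-Java sketch of the same pattern (the PairEntry name is illustrative; the return value of setValue here follows the general Map.Entry contract and may differ from FastEntry's actual behavior):

```java
import java.util.Map;

// Minimal mutable Map.Entry in the spirit of FastEntry: one reusable
// key/value holder. Sketch only, not the JGNN implementation.
class PairEntry<K, V> implements Map.Entry<K, V> {
    private K key;
    private V value;

    public PairEntry() {}
    public PairEntry(K key, V value) { this.key = key; this.value = value; }

    public K getKey() { return key; }
    public V getValue() { return value; }

    // Replaces the value and returns the previous one (Map.Entry contract).
    public V setValue(V value) { V old = this.value; this.value = value; return old; }

    // Extra mutator beyond Map.Entry, as in FastEntry, to allow reuse.
    public void setKey(K key) { this.key = key; }

    public String toString() { return key + "=" + value; }
}
```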
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/util/Loss.html b/docs/javadoc/mklab/JGNN/core/util/Loss.html new file mode 100644 index 00000000..e1686fe3 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/util/Loss.html @@ -0,0 +1,490 @@ + + + + +Loss + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Interface Loss

    +
    +
    +
    +
    public interface Loss
    +
Provides computation and (partial) derivatives of popular activation functions + and cross-entropy loss functions.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        sigmoid

        +
        static double sigmoid(double x)
        +
        The sigmoid function 1/(1+exp(-x)).
        +
        +
        Parameters:
        +
        x - The activation of the sigmoid function.
        +
        Returns:
        +
        The sigmoid value.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        tanh

        +
        static double tanh(double x)
        +
The tanh activation (exp(x)-exp(-x))/(exp(x)+exp(-x)).
        +
        +
        Parameters:
        +
        x - The activation of the tanh function.
        +
        Returns:
        +
        The tanh value.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        relu

        +
        static double relu(double x)
        +
The relu activation: x if x > 0, 0 otherwise.
        +
        +
        Parameters:
        +
        x - The activation of the relu function.
        +
        Returns:
        +
        The relu value.
        +
        See Also:
        +
        + +
        +
        +
        +
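The three activations above follow directly from their stated formulas. A plain-Java sketch (the ActivationSketch class is illustrative, not the JGNN implementation):

```java
// Direct transcription of the documented activation formulas. Sketch only.
class ActivationSketch {
    // sigmoid(x) = 1 / (1 + exp(-x))
    public static double sigmoid(double x) {
        return 1.0 / (1.0 + Math.exp(-x));
    }

    // tanh(x) = (exp(x) - exp(-x)) / (exp(x) + exp(-x))
    public static double tanh(double x) {
        return (Math.exp(x) - Math.exp(-x)) / (Math.exp(x) + Math.exp(-x));
    }

    // relu(x) = x if x > 0, 0 otherwise
    public static double relu(double x) {
        return x > 0 ? x : 0;
    }
}
```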
      • +
      • +
        +

        sigmoidDerivative

        +
        static double sigmoidDerivative(double x)
        +
        The derivative of the sigmoid(double) function.
        +
        +
        Parameters:
        +
        x - The activation of the sigmoid function.
        +
        Returns:
        +
        The sigmoid derivative's value.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        tanhDerivative

        +
        static double tanhDerivative(double x)
        +
        The derivative of the tanh(double) function.
        +
        +
        Parameters:
        +
        x - The activation of the tanh function.
        +
        Returns:
        +
        The tanh derivative's value.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        reluDerivative

        +
        static double reluDerivative(double x)
        +
        The derivative of the relu(double) function.
        +
        +
        Parameters:
        +
        x - The activation of the relu function.
        +
        Returns:
        +
        The relu derivative's value.
        +
        See Also:
        +
        + +
        +
        +
        +
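The derivatives above have well-known closed forms: sigmoid'(x) = sigmoid(x)(1-sigmoid(x)), tanh'(x) = 1-tanh(x)^2, and relu'(x) = 1 for x > 0 and 0 otherwise. A plain-Java sketch (the DerivativeSketch class is illustrative; the choice of 0 for relu's derivative at exactly x = 0 is an assumption):

```java
// Standard closed-form derivatives of the activations. Sketch only.
class DerivativeSketch {
    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    // d(sigmoid)/dx = sigmoid(x) * (1 - sigmoid(x))
    public static double sigmoidDerivative(double x) {
        double s = sigmoid(x);
        return s * (1 - s);
    }

    // d(tanh)/dx = 1 - tanh(x)^2
    public static double tanhDerivative(double x) {
        double t = Math.tanh(x);
        return 1 - t * t;
    }

    // Subgradient 0 chosen at x == 0 (an assumption of this sketch).
    public static double reluDerivative(double x) {
        return x > 0 ? 1 : 0;
    }
}
```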
      • +
      • +
        +

        crossEntropy

        +
        static double crossEntropy(double output, + double label)
        +
A cross entropy loss for one sample is computed as -label*log(output) -(1-label)*log(1-output). To avoid producing invalid + values, an eps of 1.E-12 is used to constrain the cross entropy to the range [-12, 12].
        +
        +
        Parameters:
        +
        output - The output of a prediction task. Should lie in the range [0,1]
        +
        label - The desired label of the prediction task. Should assume binary values 0 or 1
        +
        Returns:
        +
        The cross entropy value.
        +
        Throws:
        +
        IllegalArgumentException - If outputs out of the range [0,1] or labels are non-binary.
        +
        +
        +
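One plausible way to realize the eps safeguard described above is to clip the output away from 0 and 1 before taking logarithms, so log never receives 0. A hedged plain-Java sketch (the CrossEntropySketch class is illustrative; JGNN's exact clamping scheme may differ):

```java
// Binary cross-entropy with the output clipped into [eps, 1-eps] before
// taking logs, so the result stays finite. Sketch only, not JGNN source.
class CrossEntropySketch {
    static final double EPS = 1.E-12;

    public static double crossEntropy(double output, double label) {
        if (output < 0 || output > 1 || (label != 0 && label != 1))
            throw new IllegalArgumentException("output must lie in [0,1] and label must be binary");
        double clipped = Math.min(Math.max(output, EPS), 1 - EPS);
        return -label * Math.log(clipped) - (1 - label) * Math.log(1 - clipped);
    }
}
```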
      • +
      • +
        +

        crossEntropyDerivative

        +
        static double crossEntropyDerivative(double output, + double label)
        +
The derivative of the crossEntropy(double, double) loss. To avoid producing invalid + values, an eps of 1.E-12 is used to constrain the cross entropy to the range [-12, 12], which results + in this derivative being constrained to the range [-1.E12, 1.E12].
        +
        +
        Parameters:
        +
        output - The output of a prediction task. Should lie in the range [0,1]
        +
        label - The desired label of the prediction task. Should assume binary values 0 or 1
        +
        Returns:
        +
        The cross entropy derivative's value.
        +
        Throws:
        +
        IllegalArgumentException - If outputs out of the range [0,1] or labels are non-binary.
        +
        +
        +
      • +
      • +
        +

        crossEntropyDerivativeCategorical

        +
        static double crossEntropyDerivativeCategorical(double output, + double label)
        +
The derivative of the crossEntropyCategorical(double, double) loss. To avoid producing invalid + values, an eps of 1.E-12 is used to constrain the cross entropy to the range [-12, 12], which results + in this derivative being constrained to the range [-1.E12, 1.E12].
        +
        +
        Parameters:
        +
        output - The output of a prediction task. Should lie in the range [0,1]
        +
        label - The desired label of the prediction task. Should assume binary values 0 or 1
        +
        Returns:
        +
        The cross entropy derivative's value.
        +
        Throws:
        +
        IllegalArgumentException - If outputs out of the range [0,1] or labels are non-binary.
        +
        +
        +
      • +
      • +
        +

        crossEntropySigmoidDerivative

        +
        static double crossEntropySigmoidDerivative(double x, + double label)
        +
        The derivative of crossEntropy(sigmoid(x), label) with respect to x. This function can avoid + using an eps and is hence more precise than the expression + crossEntropyDerivative(sigmoid(x), label)*sigmoidDerivative(x).
        +
        +
        Parameters:
        +
        x - The activation of the sigmoid function.
        +
        label - The desired label of the prediction task. Should assume binary values 0 or 1
        +
        Returns:
        +
        The cross entropy partial derivative with respect to the activation passed to an intermediate sigmoid transformation.
        +
        Throws:
        +
        IllegalArgumentException - If labels are non-binary.
        +
        +
        +
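The precision advantage described above comes from the standard identity d/dx crossEntropy(sigmoid(x), label) = sigmoid(x) - label, which needs no eps because the logarithms cancel analytically. A plain-Java sketch of this closed form (the StableDerivativeSketch class is illustrative, not the JGNN implementation):

```java
// Closed-form derivative of crossEntropy(sigmoid(x), label) with respect
// to x: sigmoid(x) - label. No eps clamping is needed. Sketch only.
class StableDerivativeSketch {
    public static double crossEntropySigmoidDerivative(double x, double label) {
        if (label != 0 && label != 1)
            throw new IllegalArgumentException("label must be binary");
        return 1.0 / (1.0 + Math.exp(-x)) - label;
    }
}
```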
      • +
      • +
        +

        crossEntropyTanhDerivative

        +
        static double crossEntropyTanhDerivative(double x, + double label)
        +
        The derivative of crossEntropy(tanh(x), label) with respect to x. This function calculates + crossEntropyDerivative(tanh(x), label)*tanhDerivative(x).
        +
        +
        Parameters:
        +
        x - The activation of the tanh function.
        +
        label - The desired label of the prediction task. Should assume binary values 0 or 1
        +
        Returns:
        +
        The cross entropy partial derivative with respect to the activation passed to an intermediate tanh transformation.
        +
        +
        +
      • +
      • +
        +

        sigmoid

        +
        static Tensor sigmoid(Tensor x)
        +
        Applies sigmoid(double) element-by-element.
        +
        +
        Parameters:
        +
        x - The activation tensor of the sigmoid function.
        +
        Returns:
        +
        The tensor of sigmoid values.
        +
        +
        +
      • +
      • +
        +

        tanh

        +
        static Tensor tanh(Tensor x)
        +
        Applies tanh(double) element-by-element.
        +
        +
        Parameters:
        +
        x - The activation tensor of the tanh function.
        +
        Returns:
        +
        The tensor of tanh values.
        +
        +
        +
      • +
      • +
        +

        relu

        +
        static Tensor relu(Tensor x)
        +
        Applies relu(double) element-by-element.
        +
        +
        Parameters:
        +
        x - The activation tensor of the relu function.
        +
        Returns:
        +
        The tensor of relu values.
        +
        +
        +
      • +
      • +
        +

        sigmoidDerivative

        +
        static Tensor sigmoidDerivative(Tensor x)
        +
        Applies sigmoidDerivative(double) function.
        +
        +
        Parameters:
        +
        x - The activation tensor of the sigmoid function.
        +
        Returns:
        +
        The tensor of sigmoid derivative values.
        +
        +
        +
      • +
      • +
        +

        tanhDerivative

        +
        static Tensor tanhDerivative(Tensor x)
        +
        Applies tanhDerivative(double) function.
        +
        +
        Parameters:
        +
        x - The activation tensor of the tanh function.
        +
        Returns:
        +
        The tensor of tanh derivative values.
        +
        +
        +
      • +
      • +
        +

        reluDerivative

        +
        static Tensor reluDerivative(Tensor x)
        +
        Applies reluDerivative(double) function.
        +
        +
        Parameters:
        +
        x - The activation tensor of the relu function.
        +
        Returns:
        +
        The tensor of relu derivative values.
        +
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/util/Range.html b/docs/javadoc/mklab/JGNN/core/util/Range.html new file mode 100644 index 00000000..738b5af8 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/util/Range.html @@ -0,0 +1,210 @@ + + + + +Range + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class Range

    +
    +
    java.lang.Object +
    mklab.JGNN.core.util.Range
    +
    +
    +
    +
    All Implemented Interfaces:
    +
    Iterable<Long>, Iterator<Long>
    +
    +
    +
    public class Range +extends Object +implements Iterator<Long>, Iterable<Long>
    +
Implements an iterator that traverses a range (similar to Python's range(min, max) function). + It is often used by Tensor derived classes to traverse through all + element positions in sequential order.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        Range

        +
        public Range(long min, + long max)
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      + +
      +
    • +
    +
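The Range class described above behaves like Python's range(min, max): it yields min, min+1, ..., max-1 as Long values. A plain-Java sketch of the same pattern (the LongRange name is illustrative, not JGNN code):

```java
import java.util.Iterator;

// Python-style half-open range iterator over long positions, analogous
// in spirit to mklab.JGNN.core.util.Range. Sketch only.
class LongRange implements Iterator<Long>, Iterable<Long> {
    private long current;
    private final long max;

    public LongRange(long min, long max) {
        this.current = min;
        this.max = max;
    }

    public boolean hasNext() { return current < max; }
    public Long next() { return current++; }

    // Returns itself, so the range can be used once in a for-each loop.
    public Iterator<Long> iterator() { return this; }
}
```

Implementing both Iterator and Iterable (as Range does) allows `for (long pos : range)` loops at the cost of the object being single-use.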
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/util/Range2D.html b/docs/javadoc/mklab/JGNN/core/util/Range2D.html new file mode 100644 index 00000000..b815bcfc --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/util/Range2D.html @@ -0,0 +1,213 @@ + + + + +Range2D + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class Range2D

    +
    +
    java.lang.Object +
    mklab.JGNN.core.util.Range2D
    +
    +
    +
    +
    All Implemented Interfaces:
    +
    Iterable<Map.Entry<Long,Long>>, Iterator<Map.Entry<Long,Long>>
    +
    +
    +
    public class Range2D +extends Object +implements Iterator<Map.Entry<Long,Long>>, Iterable<Map.Entry<Long,Long>>
    +
    Implements an iterator that traverses a two-dimensional range (min, max) x (min2, max2). + It is often used by Matrix instances to traverse through all element positions.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    + +
    +
    + +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/util/Sort.html b/docs/javadoc/mklab/JGNN/core/util/Sort.html new file mode 100644 index 00000000..30a930d9 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/util/Sort.html @@ -0,0 +1,169 @@ + + + + +Sort + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class Sort

    +
    +
    java.lang.Object +
    mklab.JGNN.core.util.Sort
    +
    +
    +
    +
    public class Sort +extends Object
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        Sort

        +
        public Sort()
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        sortedIndexes

        +
        public static int[] sortedIndexes(double[] A)
        +
        +
      • +
      • +
        +

        sortedIndexes

        +
        public static int[] sortedIndexes(ArrayList<Double> A)
        +
        +
      • +
      +
      +
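Judging by its name, sortedIndexes performs an index sort (argsort): it returns the positions of the input's elements in value order rather than the sorted values themselves. A hedged plain-Java sketch (the ArgSort name and the ascending ordering are assumptions; the actual JGNN implementation may differ):

```java
import java.util.Comparator;

// Index sort (argsort) sketch: returns positions of A's elements in
// ascending value order. Illustrative only, not the JGNN implementation.
class ArgSort {
    public static int[] sortedIndexes(double[] A) {
        // Box the indexes so they can be sorted with a value-based comparator.
        Integer[] idx = new Integer[A.length];
        for (int i = 0; i < idx.length; i++)
            idx[i] = i;
        java.util.Arrays.sort(idx, Comparator.comparingDouble(i -> A[i]));
        int[] out = new int[idx.length];
        for (int i = 0; i < out.length; i++)
            out[i] = idx[i];
        return out;
    }
}
```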
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/util/class-use/FastEntry.html b/docs/javadoc/mklab/JGNN/core/util/class-use/FastEntry.html new file mode 100644 index 00000000..0133d091 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/util/class-use/FastEntry.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.core.util.FastEntry + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.core.util.FastEntry

    +
    +No usage of mklab.JGNN.core.util.FastEntry
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/util/class-use/Loss.html b/docs/javadoc/mklab/JGNN/core/util/class-use/Loss.html new file mode 100644 index 00000000..6ff08973 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/util/class-use/Loss.html @@ -0,0 +1,59 @@ + + + + +Uses of Interface mklab.JGNN.core.util.Loss + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Interface
    mklab.JGNN.core.util.Loss

    +
    +No usage of mklab.JGNN.core.util.Loss
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/util/class-use/Range.html b/docs/javadoc/mklab/JGNN/core/util/class-use/Range.html new file mode 100644 index 00000000..4c091f69 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/util/class-use/Range.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.core.util.Range + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.core.util.Range

    +
    +No usage of mklab.JGNN.core.util.Range
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/util/class-use/Range2D.html b/docs/javadoc/mklab/JGNN/core/util/class-use/Range2D.html new file mode 100644 index 00000000..0ca30086 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/util/class-use/Range2D.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.core.util.Range2D + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.core.util.Range2D

    +
    +No usage of mklab.JGNN.core.util.Range2D
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/util/class-use/Sort.html b/docs/javadoc/mklab/JGNN/core/util/class-use/Sort.html new file mode 100644 index 00000000..f9e3afc8 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/util/class-use/Sort.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.core.util.Sort + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.core.util.Sort

    +
    +No usage of mklab.JGNN.core.util.Sort
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/util/package-summary.html b/docs/javadoc/mklab/JGNN/core/util/package-summary.html new file mode 100644 index 00000000..37806559 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/util/package-summary.html @@ -0,0 +1,124 @@ + + + + +mklab.JGNN.core.util + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Package mklab.JGNN.core.util

    +
    +
    +
    package mklab.JGNN.core.util
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/util/package-tree.html b/docs/javadoc/mklab/JGNN/core/util/package-tree.html new file mode 100644 index 00000000..2b51fd76 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/util/package-tree.html @@ -0,0 +1,82 @@ + + + + +mklab.JGNN.core.util Class Hierarchy + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Hierarchy For Package mklab.JGNN.core.util

    +Package Hierarchies: + +
    +
    +

    Class Hierarchy

    + +
    +
    +

    Interface Hierarchy

    +
      +
    • mklab.JGNN.core.util.Loss
    • +
    +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/core/util/package-use.html b/docs/javadoc/mklab/JGNN/core/util/package-use.html new file mode 100644 index 00000000..aa4c8b8b --- /dev/null +++ b/docs/javadoc/mklab/JGNN/core/util/package-use.html @@ -0,0 +1,59 @@ + + + + +Uses of Package mklab.JGNN.core.util + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Package
    mklab.JGNN.core.util

    +
    +No usage of mklab.JGNN.core.util
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/Initializer.html b/docs/javadoc/mklab/JGNN/nn/Initializer.html new file mode 100644 index 00000000..5f9aeebb --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/Initializer.html @@ -0,0 +1,178 @@ + + + + +Initializer + + + + + + + + + + + + + + + +
    + +
    +
    + +
    +
    Package mklab.JGNN.nn
    +

    Class Initializer

    +
    +
    java.lang.Object +
    mklab.JGNN.nn.Initializer
    +
    +
    +
    +
    Direct Known Subclasses:
    +
    VariancePreservingInitializer
    +
    +
    +
    public abstract class Initializer +extends Object
    +
    This class defines an abstract interface for applying initializers to models.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        Initializer

        +
        public Initializer()
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        apply

        +
        public abstract Model apply(Model model)
        +
        Applies the initializer to a given model's parameters.
        +
        +
        Parameters:
        +
        model - The given model.
        +
        Returns:
        +
        The given model after parameters are initialized.
        +
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/Loss.html b/docs/javadoc/mklab/JGNN/nn/Loss.html new file mode 100644 index 00000000..52b53ad3 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/Loss.html @@ -0,0 +1,219 @@ + + + + +Loss + + + + + + + + + + + + + + + +
    + +
    +
    + +
    +
    Package mklab.JGNN.nn
    +

    Class Loss

    +
    +
    java.lang.Object +
    mklab.JGNN.nn.Loss
    +
    +
    +
    +
    Direct Known Subclasses:
    +
    Accuracy, BinaryCrossEntropy, CategoricalCrossEntropy, VerboseLoss
    +
    +
    +
    public abstract class Loss +extends Object
    +
This class provides an abstract implementation of loss functions + to be used during Model training. The preferred use is to + pass loss instances to a ModelTraining.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    +
      + +
    • +
      +

      Constructor Summary

      +
      Constructors
      +
      +
      Constructor
      +
      Description
      + +
       
      +
      +
      +
    • + +
    • +
      +

      Method Summary

      +
      +
      +
      +
      +
      Modifier and Type
      +
      Method
      +
      Description
      +
      abstract Tensor
      +
      derivative(Tensor output, + Tensor desired)
      +
      +
      Provides the derivative of a loss function at its evaluation point.
      +
      +
      abstract double
      +
      evaluate(Tensor output, + Tensor desired)
      +
      +
      Provides a numerical evaluation of a loss function, so that + lower values correspond to better predictions.
      +
      +
      +
      +
      +
      +

      Methods inherited from class java.lang.Object

      +equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
      +
      +
    • +
    +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        Loss

        +
        public Loss()
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        evaluate

        +
        public abstract double evaluate(Tensor output, + Tensor desired)
        +
        Provides a numerical evaluation of a loss function, so that + lower values correspond to better predictions.
        +
        +
        Parameters:
        +
        output - A model's estimation of true outputs.
        +
        desired - The expected outputs.
        +
        Returns:
        +
A double value; lower + values correspond to better predictions.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        derivative

        +
        public abstract Tensor derivative(Tensor output, + Tensor desired)
        +
        Provides the derivative of a loss function at its evaluation point.
        +
        +
        Parameters:
        +
        output - A model's estimation of true outputs.
        +
        desired - The expected outputs.
        +
        Returns:
        +
        A Tensor compliant to the model's estimation.
        +
        See Also:
        +
        + +
        +
        +
        +
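The evaluate/derivative pair above is the whole contract a loss subclass must satisfy: a scalar score (lower is better) plus its gradient with respect to the model's outputs. A hedged plain-Java sketch of the pattern on arrays, using squared error for concreteness (the ArrayLoss and SquaredError names are illustrative; JGNN's own subclasses such as BinaryCrossEntropy implement different formulas on Tensor):

```java
// Sketch of the evaluate/derivative loss contract on plain arrays.
// Squared error is used only to make the pattern concrete.
abstract class ArrayLoss {
    public abstract double evaluate(double[] output, double[] desired);
    public abstract double[] derivative(double[] output, double[] desired);

    static class SquaredError extends ArrayLoss {
        public double evaluate(double[] output, double[] desired) {
            double sum = 0;
            for (int i = 0; i < output.length; i++) {
                double d = output[i] - desired[i];
                sum += d * d; // sum of squared residuals; lower is better
            }
            return sum;
        }

        public double[] derivative(double[] output, double[] desired) {
            double[] grad = new double[output.length];
            for (int i = 0; i < output.length; i++)
                grad[i] = 2 * (output[i] - desired[i]); // d(sum of squares)/d output_i
            return grad;
        }
    }
}
```

During training, the derivative tensor returned here is what gets propagated backwards through the model's operation graph.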
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/Model.html b/docs/javadoc/mklab/JGNN/nn/Model.html new file mode 100644 index 00000000..bc1cb3b5 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/Model.html @@ -0,0 +1,527 @@ + + + + +Model + + + + + + + + + + + + + + + +
    + +
    +
    + +
    +
    Package mklab.JGNN.nn
    +

    Class Model

    +
    +
    java.lang.Object +
    mklab.JGNN.nn.Model
    +
    +
    +
    +
    public class Model +extends Object
    +
This class organizes NNOperation trees into trainable machine + learning models. Critically, only model inputs and outputs need to be defined. It also + provides methods that perform training by calling forward and backward passes. + Models can have multiple inputs and outputs.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        Model

        +
        public Model()
        +
        Deprecated. +
        Prefer using model builders for symbolic model definitions.
        +
        +
        Instantiates an empty model.
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        init

        +
        public Model init(Initializer initializer)
        +
Applies the provided initializer to the model to set initial values for its + parameters.
        +
        +
        Parameters:
        +
        initializer - An Initializer.
        +
        Returns:
        +
        The model's instance.
        +
        +
        +
      • +
      • +
        +

        train

        +
        public Model train(ModelTraining trainer, + Matrix features, + Matrix labels, + Slice trainingSamples, + Slice validationSamples)
        +
        Trains the model by appropriately calling + ModelTraining.train(Model, Matrix, Matrix, Slice, Slice) + with the provided parameters.
        +
        +
        Parameters:
        +
        trainer - The ModelTraining instance in charge of the training.
        +
        features - A training feature Matrix, where each sample resides in one row.
        +
        labels - A training label Matrix corresponding to features.
        +
        trainingSamples - A slice of samples to use for training.
        +
        validationSamples - A slice of samples to use for validation.
        +
        Returns:
        +
        The model's instance.
        +
        +
        +
      • +
      • +
        +

        getDepthLastOperations

        +
        public ArrayList<NNOperation> getDepthLastOperations()
        +
Retrieves a list of operations by traversing the model's execution + graph depth-first in inverse edge + order (starting from the outputs). This can be used by Initializer + classes to push non-linearities to earlier layers.
        +
        +
        Returns:
        +
        A list of NNOperation.
        +
        +
        +
      • +
      • +
        +

        getParameters

        +
        public ArrayList<Parameter> getParameters()
        +
        Retrieves a list of all parameters eventually leading to the model's outputs.
        +
        +
        Returns:
        +
        A list of Parameter.
        +
        +
        +
      • +
      • +
        +

        addInput

        +
        public Model addInput(Variable input)
        +
        Adds to the model's inputs the provided Variable.
        +
        +
        Parameters:
        +
        input - A variable to set as an input.
        +
        Returns:
        +
        this Model instance.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        addOutput

        +
        public Model addOutput(NNOperation output)
        +
        Adds to the model's outputs the output of the provided operation.
        +
        +
        Parameters:
        +
        output - An operation to set as an output.
        +
        Returns:
        +
        this Model instance.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        getInputs

        +
        public ArrayList<Variable> getInputs()
        +
        Retrieves a list of model inputs. Editing this list affects + the model and is not recommended. Input order is based on + the chronological addition of inputs through addInput(Variable).
        +
        +
        Returns:
        +
        A list of Variable instances.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        getOutputs

        +
        public ArrayList<NNOperation> getOutputs()
        +
        Retrieves a list of model outputs. Editing this list affects + the model and is not recommended. Output order is based on + the chronological addition of outputs through addOutput(NNOperation).
        +
        +
        Returns:
        +
        A list of NNOperation instances.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        predict

        +
        public ArrayList<Tensor> predict(Tensor... inputs)
        +
        Forward run of the model given an array of input tensors. + Wraps predict(List).
        +
        +
        Parameters:
        +
        inputs - Input tensors to be assigned to input variables.
        +
        Returns:
        +
        A list of tensors output by the model after a forward pass.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        predict

        +
        public ArrayList<Tensor> predict(List<Tensor> inputs)
        +
        Forward run of the model given a list of input tensors. Their order should match the order + of variables in getInputs().
        +
        +
        Parameters:
        +
        inputs - A list of tensors to be assigned to input variables. These should match the order of getInputs().
        +
        Returns:
        +
        A list of tensors output by the model after a forward pass.
        +
        Throws:
        +
        IllegalArgumentException - if the number of input tensors does not match the number of input variables.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        train

        +
        public List<Tensor> train(Loss loss, + Optimizer optimizer, + List<Tensor> inputs, + List<Tensor> desiredOutputs)
        +
        Performs the training of train(Loss, Optimizer, List, List, List) with unit weights.
        +
        +
        Parameters:
        +
        loss - The loss to minimize.
        +
        optimizer - The provided optimizer with which to adjust values.
        +
        inputs - A list of input tensors that would be passed to a corresponding predict(List) call. + Element values should be either 1 or 0.
        +
        desiredOutputs - A list of output tensors that would ideally be returned by the prediction. + Element values should lie in the range [0,1].
        +
        Returns:
        +
        A list of prediction outputs (the ones computed before parameter adjustment)
        +
        Throws:
        +
        IllegalArgumentException - If the number of inputs and desired outputs do not match the sizes of getInputs() + and getOutputs() respectively, or if the number of weight tensors does not match the number of desired outputs.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        train

        +
        public List<Tensor> train(Loss loss, + Optimizer optimizer, + List<Tensor> inputs, + List<Tensor> desiredOutputs, + List<Tensor> weights)
        +
        Performs one parameter adjustment step (e.g. epoch) using Optimizer for a cross entropy loss function + that compares desired outputs and the ones predict(List) yields for the given inputs.
        +
        +
        Parameters:
        +
        loss - The loss to minimize.
        +
        optimizer - The provided optimizer with which to adjust values.
        +
        inputs - A list of input tensors that would be passed to a corresponding predict(List) call. + Element values should be either 1 or 0.
        +
        desiredOutputs - A list of output tensors that would ideally be returned by the prediction. + Element values should lie in the range [0,1].
        +
        weights - A list of weight tensors to be applied element-by-element on the outcome of + Loss#crossEntropyDerivative(double, double).
        +
        Returns:
        +
        A list of prediction outputs (the ones computed before parameter adjustment)
        +
        Throws:
        +
        IllegalArgumentException - If the number of inputs and desired outputs do not match the sizes of getInputs() + and getOutputs() respectively, or if the number of weight tensors does not match the number of desired outputs.
        +
        See Also:
        +
        + +
        +
        +
        +
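To make the role of the weights argument concrete, here is a hedged sketch in plain Java (not JGNN code; the class and method names are illustrative): the cross entropy derivative -y/p + (1-y)/(1-p) is computed element-by-element and scaled by a per-element weight, as the description above suggests.

```java
// Illustrative sketch only: element-wise weighted cross entropy derivative.
// EPS guards against division by zero at the interval endpoints.
class WeightedCrossEntropySketch {
    static final double EPS = 1e-12;

    // label in {0,1}, output in [0,1], weight scales the resulting gradient
    static double weightedDerivative(double label, double output, double weight) {
        return weight * (-label / (output + EPS) + (1 - label) / (1 - output + EPS));
    }
}
```

For example, a correct confident prediction (label 1, output near 1) yields a gradient near -1 before weighting, while an uncertain prediction (output 0.5) yields a larger-magnitude gradient.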
      • +
      • +
        +

        trainTowardsZero

        +
        public double trainTowardsZero(Optimizer optimizer, + List<Tensor> inputs)
        +
        Equivalent to calling train(Loss, Optimizer, List, List) + with a new Zero() loss. + Use this to train towards an explicitly defined loss function.
        +
        +
        Parameters:
        +
        optimizer - The provided optimizer with which to adjust values.
        +
        inputs - A list of input tensors that would be passed to a corresponding predict(List) call.
        +
        Returns:
        +
        The L2 loss (computed before parameter adjustment)
        +
        Throws:
        +
        IllegalArgumentException - If the number of inputs and outputs do not match the sizes of getInputs() + and getOutputs() respectively.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/ModelTraining.html b/docs/javadoc/mklab/JGNN/nn/ModelTraining.html new file mode 100644 index 00000000..9009b221 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/ModelTraining.html @@ -0,0 +1,377 @@ + + + + +ModelTraining + + + + + + + + + + + + + + + +
    + +
    +
    + +
    +
    Package mklab.JGNN.nn
    +

    Class ModelTraining

    +
    +
    java.lang.Object +
    mklab.JGNN.nn.ModelTraining
    +
    +
    +
    +
    public class ModelTraining +extends Object
    +
    This is a helper class that automates the definition of training processes of Model instances + by defining the number of epochs, loss functions, number of batches and the ability to use ThreadPool + for parallelized batch computations.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        ModelTraining

        +
        public ModelTraining()
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        setVerbose

        +
        public ModelTraining setVerbose(boolean verbose)
        +
        Deprecated. +
        This method was available in earlier JGNN versions but will be gradually phased out. + Instead, wrap the validation loss within VerboseLoss to replicate + the same behavior.
        +
        +
        +
        Parameters:
        +
        verbose - Whether progress messages will be printed during training.
        +
        +
        +
      • +
      • +
        +

        setLoss

        +
        public ModelTraining setLoss(Loss loss)
        +
        Sets the loss function used to train the model.
        +
        +
        Parameters:
        +
        loss - The loss function to minimize.
        +
        Returns:
        +
        this model training instance.
        +
        +
      • +
      • +
        +

        setValidationLoss

        +
        public ModelTraining setValidationLoss(Loss loss)
        +
        Sets the loss function used to assess the model on validation samples, for example by the early stopping strategy of setPatience(int).
        +
        +
      • +
      • +
        +

        setOptimizer

        +
        public ModelTraining setOptimizer(Optimizer optimizer)
        +
        Sets an Optimizer instance that controls parameter updates during training. + If the provided optimizer is not an instance of BatchOptimizer, + it is forcefully wrapped by the latter. Training calls the batch optimizer's + update method after every batch.
        +
        +
        Parameters:
        +
        optimizer - The desired optimizer.
        +
        Returns:
        +
        this model training instance.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        setNumBatches

        +
        public ModelTraining setNumBatches(int numBatches)
        +
        Sets the number of batches training data slices should be split into.
        +
        +
        Parameters:
        +
        numBatches - The desired number of batches. Default is 1.
        +
        Returns:
        +
        this model training instance.
        +
        See Also:
        +
        + +
        +
        +
        +
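The batch-splitting behavior this setting implies can be sketched with plain Java (this is an illustration of the idea, not JGNN's internal code): sample indices are divided into roughly equal contiguous batches.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: split numSamples indices into numBatches
// roughly equal batches, as a numBatches setting implies.
class BatchSplitSketch {
    static List<int[]> split(int numSamples, int numBatches) {
        List<int[]> batches = new ArrayList<>();
        int start = 0;
        for (int b = 0; b < numBatches; b++) {
            // distribute the remaining samples over the remaining batches
            int end = start + (numSamples - start) / (numBatches - b);
            int[] batch = new int[end - start];
            for (int i = start; i < end; i++)
                batch[i - start] = i;
            batches.add(batch);
            start = end;
        }
        return batches;
    }
}
```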
      • +
      • +
        +

        setParallelizedStochasticGradientDescent

        +
        public ModelTraining setParallelizedStochasticGradientDescent(boolean paralellization)
        +
        Sets whether the training strategy should reflect stochastic + gradient descent by randomly sampling from the training dataset to obtain data samples. + If true, both this feature and thread-based parallelization + are enabled. Parallelization makes use of JGNN's ThreadPool.
        +
        +
        Parameters:
        +
        paralellization - A boolean value indicating whether this feature is enabled.
        +
        Returns:
        +
        this model training instance.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        setEpochs

        +
        public ModelTraining setEpochs(int epochs)
        +
        Sets the maximum number of epochs for which training runs. + If no patience has been set, training runs for exactly this + number of epochs.
        +
        +
        Parameters:
        +
        epochs - The maximum number of epochs.
        +
        Returns:
        +
        this model training instance.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        setPatience

        +
        public ModelTraining setPatience(int patience)
        +
        Sets the patience of the training strategy that performs early stopping. + If training does not encounter a smaller validation loss for this number of + epochs, it stops.
        +
        +
        Parameters:
        +
        patience - The number of patience epochs. Default is Integer.MAX_VALUE to effectively disable this + feature and let training always reach the maximum number of set epochs.
        +
        Returns:
        +
        this model training instance.
        +
        See Also:
        +
        + +
        +
        +
        +
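The early stopping rule described above can be sketched independently of JGNN (hedged illustration; names are not library API): training stops once the validation loss has not improved for `patience` consecutive epochs.

```java
// Illustrative sketch of patience-based early stopping: given a sequence of
// per-epoch validation losses, return how many epochs training would run.
class PatienceSketch {
    static int epochsRun(double[] valLoss, int patience, int maxEpochs) {
        double best = Double.POSITIVE_INFINITY;
        int sinceBest = 0;
        int epoch = 0;
        for (; epoch < Math.min(maxEpochs, valLoss.length); epoch++) {
            if (valLoss[epoch] < best) {
                best = valLoss[epoch];   // new best validation loss
                sinceBest = 0;
            } else if (++sinceBest >= patience) {
                epoch++;                  // count the epoch that triggered the stop
                break;
            }
        }
        return epoch;
    }
}
```

With a very large patience (the documented default of Integer.MAX_VALUE), the loop always reaches the maximum number of epochs, matching the description above.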
      • +
      • +
        +

        train

        +
        public Model train(Model model, + Matrix features, + Matrix labels, + Slice trainingSamples, + Slice validationSamples)
        +
        Trains a Model instance based on current settings.
        +
        +
        Parameters:
        +
        model - The model instance to train.
        +
        features - A matrix whose columns correspond to sample features.
        +
        labels - A matrix whose columns correspond to sample (one hot) labels.
        +
        trainingSamples - Which columns to select for training.
        +
        validationSamples - Which columns to select for validation.
        +
        Returns:
        +
        The trained model (the same instance as the first argument).
        +
        +
        +
      • +
      • +
        +

        configFrom

        +
        public ModelTraining configFrom(ModelBuilder modelBuilder)
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/NNOperation.html b/docs/javadoc/mklab/JGNN/nn/NNOperation.html new file mode 100644 index 00000000..a19926b8 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/NNOperation.html @@ -0,0 +1,458 @@ + + + + +NNOperation + + + + + + + + + + + + + + + +
    + +
    +
    + +
    +
    Package mklab.JGNN.nn
    +

    Class NNOperation

    +
    +
    java.lang.Object +
    mklab.JGNN.nn.NNOperation
    +
    +
    +
    +
    Direct Known Subclasses:
    +
    Add, Attention, Complement, Concat, Dropout, Exp, From, Gather, Identity, L1, Log, LRelu, MatMul, Max, Mean, Multiply, NExp, Parameter, PRelu, Reduce, Relu, Repeat, Reshape, Sigmoid, SoftMax, Sort, Sum, Tanh, To, Transpose
    +
    +
    +
    public abstract class NNOperation +extends Object
    +
    This class defines an abstract neural network operation with forward and + backpropagation capabilities. Defined operations create execution trees based + on input dependencies, which can then be run by Model instances to + make predictions. Creating the execution tree can be done by using the + addInput(NNOperation) method. The correct number of inputs should be + added to each operation. Compliance to this rule needs to be checked by + individual operations during forward passes.
    + Operations are thread-safe in the sense that they store gradients for + backward passes on different objects across different threads. This way, + models can perform learning passes that are all synchronized when eventually + backpropagation feeds Parameter updates to an + Optimizer.
    + The internal state of operations can be obtained with + getPrediction() to obtain their last Tensor output (this + output depends on the thread calling the operation) and + getLastTapeError() to obtain the last gradient obtained through + backpropagation.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
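The execution-tree idea described above can be illustrated with a toy sketch in plain Java (this is not JGNN's implementation; class and method names are invented for illustration): operations hold input operations registered through an addInput-style method, and a forward pass recursively runs dependencies before combining their outputs.

```java
import java.util.ArrayList;
import java.util.List;

// Toy sketch of an execution tree: each operation registers its input
// operations, and run() evaluates dependencies before the operation itself.
abstract class OpSketch {
    private final List<OpSketch> inputs = new ArrayList<>();

    OpSketch addInput(OpSketch input) {
        inputs.add(input);
        return this;
    }

    // Combine already-computed input values into this operation's output.
    protected abstract double forward(List<Double> inputValues);

    double run() {
        List<Double> values = new ArrayList<>();
        for (OpSketch input : inputs)
            values.add(input.run());  // recursively evaluate dependencies
        return forward(values);
    }

    static OpSketch constant(double v) {
        return new OpSketch() {
            protected double forward(List<Double> in) { return v; }
        };
    }

    static OpSketch add() {
        return new OpSketch() {
            protected double forward(List<Double> in) { return in.get(0) + in.get(1); }
        };
    }
}
```

For instance, an addition node with two constant inputs evaluates both constants and sums them during a forward pass, mirroring how adding the correct number of inputs to each operation builds a runnable graph.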
    + +
    +
    +
      + +
    • +
      +

      Field Details

      +
        +
      • +
        +

        debugging

        +
        public boolean debugging
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        setDescription

        +
        public void setDescription(String description)
        +
        +
      • +
      • +
        +

        getDescription

        +
        public String getDescription()
        +
        +
      • +
      • +
        +

        describe

        +
        public String describe()
        +
        Retrieves a concise description of the operation that shows metadata and + potential data descriptions processed by the current thread.
        +
        +
        Returns:
        +
        A String description.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        view

        +
        public String view()
        +
        Retrieves a string view of the internal data being processed by the current + thread, including gradients.
        +
        +
        Returns:
        +
        A String view.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        getInputs

        +
        public ArrayList<NNOperation> getInputs()
        +
        Retrieves a list of input operations within a model's execution graph.
        +
        +
        Returns:
        +
        A list of NNOperations.
        +
        +
        +
      • +
      • +
        +

        getOutputs

        +
        public ArrayList<NNOperation> getOutputs()
        +
        Retrieves a list of output operations within a model's execution graph.
        +
        +
        Returns:
        +
        A list of NNOperations.
        +
        +
        +
      • +
      • +
        +

        isConstant

        +
        public boolean isConstant()
        +
        Checks whether the operation yields a constant output, so that propagation + does not try to compute partial derivatives for it.
        +
        +
        Returns:
        +
        A boolean value.
        +
        +
        +
      • +
      • +
        +

        isCachable

        +
        public boolean isCachable()
        +
        Checks whether the operation's output should be cached given that it is a + constant. This returns false only for randomized components that + yield different outputs from the same inputs, such as dropouts.
        +
        +
        Returns:
        +
        A boolean value.
        +
        +
        +
      • +
      • +
        +

        getNonLinearity

        +
        public double getNonLinearity(int inputId, + double inputMass, + double outputNonLinearity)
        +
        Retrieves the degree of non-linearity of the operation to be used by + VariancePreservingInitializer. Default is + one for operations like addition, multiplication, and matrix multiplication, + and is different only for activation functions.
        +
        +
        Parameters:
        +
        inputId - The input for which the non-linearity is + calculated.
        +
        inputMass - The fraction of (matrix) parameters affecting the + calculation coming from the respective input.
        +
        outputNonLinearity - The output's non-linearity gain.
        +
        Returns:
        +
        double describing the non-linearity.
        +
        +
        +
      • +
      • +
        +

        clearPrediction

        +
        public final void clearPrediction()
        +
        +
      • +
      • +
        +

        addInput

        +
        public NNOperation addInput(NNOperation inputComponent)
        +
        +
      • +
      • +
        +

        getLastTapeError

        +
        public final Tensor getLastTapeError()
        +
        +
      • +
      • +
        +

        getPrediction

        +
        public final Tensor getPrediction()
        +
        +
      • +
      • +
        +

        runPrediction

        +
        public final Tensor runPrediction()
        +
        +
      • +
      • +
        +

        run

        +
        public final Tensor run(List<Tensor> inputs)
        +
        Performs a forward pass in the operation without inducing any kind of + learning or storing the outcome. This is just a way to replicate the + operation at the tensor level and neither affects nor is affected by any + dependent inputs added via addInput(mklab.JGNN.nn.NNOperation).
        +
        +
        Parameters:
        +
        inputs - A list of input tensors needed by the operation.
        +
        Returns:
        +
        A Tensor with the operation's outcome.
        +
        +
        +
      • +
      • +
        +

        run

        +
        public final Tensor run(Tensor... inputs)
        +
        Performs a forward pass in the operation without inducing any kind of + learning or storing the outcome. This is just a way to replicate the + operation at the tensor level and neither affects nor is affected by any + dependent inputs added via addInput(mklab.JGNN.nn.NNOperation).
        +
        +
        Parameters:
        +
        inputs - A list of input tensors needed by the operation.
        +
        Returns:
        +
        A Tensor with the operation's outcome.
        +
        +
        +
      • +
      • +
        +

        getSimpleDescription

        +
        public String getSimpleDescription()
        +
        Provides a simple description to show when drawing .dot format diagrams.
        +
        +
        Returns:
        +
        A string description, usually the component's class name.
        +
        +
        +
      • +
      • +
        +

        runPredictionAndAutosize

        +
        public Tensor runPredictionAndAutosize()
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/Optimizer.html b/docs/javadoc/mklab/JGNN/nn/Optimizer.html new file mode 100644 index 00000000..c4b484da --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/Optimizer.html @@ -0,0 +1,163 @@ + + + + +Optimizer + + + + + + + + + + + + + + + +
    + +
    +
    + +
    +
    Package mklab.JGNN.nn
    +

    Interface Optimizer

    +
    +
    +
    +
    All Known Implementing Classes:
    +
    Adam, BatchOptimizer, GradientDescent, Regularization
    +
    +
    +
    public interface Optimizer
    +
    Provides an interface for training tensors. Has a reset() method that clears any training memory. + Has an update(Tensor, Tensor) method that, given a current Tensor + and a gradient, operates on the former and adjusts its value.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    +
      + +
    • +
      +

      Method Summary

      +
      +
      +
      +
      +
      Modifier and Type
      +
      Method
      +
      Description
      +
      default void
      + +
      +
      Resets (and lets the garbage collector free) optimizer memory.
      +
      +
      void
      +
      update(Tensor value, + Tensor gradient)
      +
      +
      In-place updates the value of a tensor given its gradient.
      +
      +
      +
      +
      +
      +
    • +
    +
    +
    +
      + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        update

        +
        void update(Tensor value, + Tensor gradient)
        +
        In-place updates the value of a tensor given its gradient. + Some optimizers (e.g. Adam) require the exact same tensor instance to be provided + so as to keep track of its optimization progress. The library makes sure to keep + this constraint.
        +
        +
        Parameters:
        +
        value - The tensor to update.
        +
        gradient - The tensor's gradient.
        +
        +
        +
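The in-place contract of update can be sketched with plain gradient descent over double arrays (a hedged illustration, not JGNN code; JGNN's Optimizer operates on its Tensor type instead):

```java
// Illustrative sketch: a gradient-descent "optimizer" that updates
// a value array in place given its gradient, mirroring the in-place
// contract described for Optimizer.update(Tensor, Tensor).
class GradientDescentSketch {
    private final double learningRate;

    GradientDescentSketch(double learningRate) {
        this.learningRate = learningRate;
    }

    // In-place update: value[i] -= learningRate * gradient[i]
    void update(double[] value, double[] gradient) {
        for (int i = 0; i < value.length; i++)
            value[i] -= learningRate * gradient[i];
    }
}
```

Because the same array instance is mutated, stateful optimizers (such as Adam, which tracks per-tensor momentum) can associate their memory with that instance, which is why the documentation stresses providing the exact same tensor across updates.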
      • +
      • +
        +

        reset

        +
        default void reset()
        +
        Resets (and lets the garbage collector free) optimizer memory. + Should be called at the beginning of training (not after each epoch).
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/activations/Exp.html b/docs/javadoc/mklab/JGNN/nn/activations/Exp.html new file mode 100644 index 00000000..19068fe5 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/activations/Exp.html @@ -0,0 +1,203 @@ + + + + +Exp + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class Exp

    +
    +
    java.lang.Object +
    mklab.JGNN.nn.NNOperation +
    mklab.JGNN.nn.activations.Exp
    +
    +
    +
    +
    +
    public class Exp +extends NNOperation
    +
    Implements a NNOperation that performs an exponential transformation of its single input.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        Exp

        +
        public Exp()
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        getNonLinearity

        +
        public double getNonLinearity(int inputId, + double inputMass, + double outputNonLinearity)
        +
        Description copied from class: NNOperation
        +
        Retrieves the degree of non-linearity of the operation to be used by + VariancePreservingInitializer. Default is + one for operations like addition, multiplication, and matrix multiplication, + and is different only for activation functions.
        +
        +
        Overrides:
        +
        getNonLinearity in class NNOperation
        +
        Parameters:
        +
        inputId - The input for which the non-linearity is + calculated.
        +
        inputMass - The fraction of (matrix) parameters affecting the + calculation coming from the respective input.
        +
        outputNonLinearity - The output's non-linearity gain.
        +
        Returns:
        +
        double describing the non-linearity.
        +
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/activations/L1.html b/docs/javadoc/mklab/JGNN/nn/activations/L1.html new file mode 100644 index 00000000..7b29a15d --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/activations/L1.html @@ -0,0 +1,188 @@ + + + + +L1 + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class L1

    +
    +
    java.lang.Object +
    mklab.JGNN.nn.NNOperation +
    mklab.JGNN.nn.activations.L1
    +
    +
    +
    +
    +
    public class L1 +extends NNOperation
    +
    Implements a NNOperation that performs an L1 transformation of its single input + by row or column.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        L1

        +
        public L1()
        +
        +
      • +
      • +
        +

        L1

        +
        public L1(boolean colMode)
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      + +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/activations/LRelu.html b/docs/javadoc/mklab/JGNN/nn/activations/LRelu.html new file mode 100644 index 00000000..5ebbbcdb --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/activations/LRelu.html @@ -0,0 +1,206 @@ + + + + +LRelu + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class LRelu

    +
    +
    java.lang.Object +
    mklab.JGNN.nn.NNOperation +
    mklab.JGNN.nn.activations.LRelu
    +
    +
    +
    +
    +
    public class LRelu +extends NNOperation
    +
    Implements a NNOperation that performs a leaky relu operation, where the first argument is a tensor on which + it is applied and the second one should be a tensor wrapping a double value (consider initializing this with a + Constant holding a tensor generated with Tensor.fromDouble(double)) where + the wrapped value indicates the negative region's slope. If the negative slope is zero, leaky relu reduces to Relu.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
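The math the class wraps is small enough to state directly (a minimal sketch of the standard leaky relu formula, not JGNN's tensor-level implementation):

```java
// Minimal sketch of leaky relu on a scalar: the negative-region slope
// corresponds to the wrapped double value described above; a slope of
// zero reduces the function to plain relu.
class LReluSketch {
    static double lrelu(double x, double negativeSlope) {
        return x >= 0 ? x : negativeSlope * x;
    }
}
```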
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        LRelu

        +
        public LRelu()
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        getNonLinearity

        +
        public double getNonLinearity(int inputId, + double inputMass, + double outputNonLinearity)
        +
        Description copied from class: NNOperation
        +
        Retrieves the degree of non-linearity of the operation to be used by + VariancePreservingInitializer. Default is + one for operations like addition, multiplication, and matrix multiplication, + and is different only for activation functions.
        +
        +
        Overrides:
        +
        getNonLinearity in class NNOperation
        +
        Parameters:
        +
        inputId - The input for which the non-linearity is + calculated.
        +
        inputMass - The fraction of (matrix) parameters affecting the + calculation coming from the respective input.
        +
        outputNonLinearity - The output's non-linearity gain.
        +
        Returns:
        +
        double describing the non-linearity.
        +
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/activations/NExp.html b/docs/javadoc/mklab/JGNN/nn/activations/NExp.html new file mode 100644 index 00000000..825a9236 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/activations/NExp.html @@ -0,0 +1,204 @@ + + + + +NExp + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class NExp

    +
    +
    java.lang.Object +
    mklab.JGNN.nn.NNOperation +
    mklab.JGNN.nn.activations.NExp
    +
    +
    +
    +
    +
    public class NExp +extends NNOperation
    +
    Implements a NNOperation that performs an exponential transformation of + its single input, but only on the non-zero elements.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        NExp

        +
        public NExp()
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        getNonLinearity

        +
        public double getNonLinearity(int inputId, + double inputMass, + double outputNonLinearity)
        +
        Description copied from class: NNOperation
        +
        Retrieves the degree of non-linearity of the operation to be used by + VariancePreservingInitializer. Default is + one for operations like addition, multiplication, and matrix multiplication, + and is different only for activation functions.
        +
        +
        Overrides:
        +
        getNonLinearity in class NNOperation
        +
        Parameters:
        +
        inputId - The input for which the non-linearity is + calculated.
        +
        inputMass - The fraction of (matrix) parameters affecting the + calculation coming from the respective input.
        +
        outputNonLinearity - The output's non-linearity gain.
        +
        Returns:
        +
        double describing the non-linearity.
        +
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/activations/PRelu.html b/docs/javadoc/mklab/JGNN/nn/activations/PRelu.html new file mode 100644 index 00000000..fa50dbd5 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/activations/PRelu.html @@ -0,0 +1,198 @@ + + + + +PRelu + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class PRelu

    +
    +
    java.lang.Object +
    mklab.JGNN.nn.NNOperation +
    mklab.JGNN.nn.activations.PRelu
    +
    +
    +
    +
    +
    public class PRelu +extends NNOperation
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        PRelu

        +
        public PRelu()
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        getNonLinearity

        +
        public double getNonLinearity(int inputId, + double inputMass, + double outputNonLinearity)
        +
        Description copied from class: NNOperation
        +
        Retrieves the degree of non-linearity of the operation to be used by + VariancePreservingInitializer. Default is + one for operations like addition, multiplication, and matrix multiplication, + and is different only for activation functions.
        +
        +
        Overrides:
        +
        getNonLinearity in class NNOperation
        +
        Parameters:
        +
        inputId - The input for which the non-linearity is + calculated.
        +
        inputMass - The fraction of (matrix) parameters affecting the + calculation coming from the respective input.
        +
        outputNonLinearity - The output's non-linearity gain.
        +
        Returns:
        +
        double describing the non-linearity.
        +
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/activations/Relu.html b/docs/javadoc/mklab/JGNN/nn/activations/Relu.html new file mode 100644 index 00000000..95df27df --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/activations/Relu.html @@ -0,0 +1,207 @@ + + + + +Relu + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class Relu

    +
    +
    java.lang.Object +
    mklab.JGNN.nn.NNOperation +
    mklab.JGNN.nn.activations.Relu
    +
    +
    +
    +
    +
    public class Relu +extends NNOperation
    +
Implements a NNOperation that performs a relu transformation of its single input, as first introduced by + Hahnloser, Richard HR, Rahul Sarpeshkar, Misha A. Mahowald, Rodney J. Douglas, and H. Sebastian Seung. + "Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit." + Nature 405, no. 6789 (2000): 947-951. +
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
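The relu transformation documented above can be illustrated independently of JGNN. The sketch below is a minimal plain-Java rendition of the element-wise rule relu(x) = max(0, x); it is illustrative only and does not reflect the library's actual Tensor-based implementation:

```java
// Minimal sketch of the element-wise relu rule: relu(x) = max(0, x).
// Illustrative only; JGNN's Relu operates on Tensor inputs, not doubles.
public class ReluSketch {
    static double relu(double x) {
        return Math.max(0.0, x);
    }

    public static void main(String[] args) {
        System.out.println(relu(-1.5)); // prints 0.0
        System.out.println(relu(2.0));  // prints 2.0
    }
}
```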
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        Relu

        +
        public Relu()
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        getNonLinearity

        +
        public double getNonLinearity(int inputId, + double inputMass, + double outputNonLinearity)
        +
        Description copied from class: NNOperation
        +
        Retrieves the degree of non-linearity of the operation to be used by + VariancePreservingInitializer. Default is + one for operations like addition, multiplication, and matrix multiplication, + and is different only for activation functions.
        +
        +
        Overrides:
        +
        getNonLinearity in class NNOperation
        +
        Parameters:
        +
        inputId - The input for which the non-linearity is + calculated.
        +
        inputMass - The fraction of (matrix) parameters affecting the + calculation coming from the respective input.
        +
        outputNonLinearity - The output's non-linearity gain.
        +
        Returns:
        +
        double describing the non-linearity.
        +
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/activations/Sigmoid.html b/docs/javadoc/mklab/JGNN/nn/activations/Sigmoid.html new file mode 100644 index 00000000..de824361 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/activations/Sigmoid.html @@ -0,0 +1,203 @@ + + + + +Sigmoid + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class Sigmoid

    +
    +
    java.lang.Object +
    mklab.JGNN.nn.NNOperation +
    mklab.JGNN.nn.activations.Sigmoid
    +
    +
    +
    +
    +
    public class Sigmoid +extends NNOperation
    +
    Implements a NNOperation that performs a sigmoid transformation of its single input.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
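The sigmoid transformation documented above follows the standard element-wise rule sigmoid(x) = 1 / (1 + e^(-x)). The plain-Java sketch below illustrates that rule only; it is not JGNN's implementation, which applies the function over Tensor inputs:

```java
// Minimal sketch of the element-wise sigmoid rule: sigmoid(x) = 1 / (1 + e^(-x)).
// Illustrative only; JGNN's Sigmoid operates on Tensor inputs, not doubles.
public class SigmoidSketch {
    static double sigmoid(double x) {
        return 1.0 / (1.0 + Math.exp(-x));
    }

    public static void main(String[] args) {
        System.out.println(sigmoid(0.0));         // prints 0.5
        System.out.println(sigmoid(10.0) > 0.99); // prints true
    }
}
```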
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        Sigmoid

        +
        public Sigmoid()
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        getNonLinearity

        +
        public double getNonLinearity(int inputId, + double inputMass, + double outputNonLinearity)
        +
        Description copied from class: NNOperation
        +
        Retrieves the degree of non-linearity of the operation to be used by + VariancePreservingInitializer. Default is + one for operations like addition, multiplication, and matrix multiplication, + and is different only for activation functions.
        +
        +
        Overrides:
        +
        getNonLinearity in class NNOperation
        +
        Parameters:
        +
        inputId - The input for which the non-linearity is + calculated.
        +
        inputMass - The fraction of (matrix) parameters affecting the + calculation coming from the respective input.
        +
        outputNonLinearity - The output's non-linearity gain.
        +
        Returns:
        +
        double describing the non-linearity.
        +
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/activations/Tanh.html b/docs/javadoc/mklab/JGNN/nn/activations/Tanh.html new file mode 100644 index 00000000..67c24940 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/activations/Tanh.html @@ -0,0 +1,203 @@ + + + + +Tanh + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class Tanh

    +
    +
    java.lang.Object +
    mklab.JGNN.nn.NNOperation +
    mklab.JGNN.nn.activations.Tanh
    +
    +
    +
    +
    +
    public class Tanh +extends NNOperation
    +
    Implements a NNOperation that performs a tanh transformation of its single input.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        Tanh

        +
        public Tanh()
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        getNonLinearity

        +
        public double getNonLinearity(int inputId, + double inputMass, + double outputNonLinearity)
        +
        Description copied from class: NNOperation
        +
        Retrieves the degree of non-linearity of the operation to be used by + VariancePreservingInitializer. Default is + one for operations like addition, multiplication, and matrix multiplication, + and is different only for activation functions.
        +
        +
        Overrides:
        +
        getNonLinearity in class NNOperation
        +
        Parameters:
        +
        inputId - The input for which the non-linearity is + calculated.
        +
        inputMass - The fraction of (matrix) parameters affecting the + calculation coming from the respective input.
        +
        outputNonLinearity - The output's non-linearity gain.
        +
        Returns:
        +
        double describing the non-linearity.
        +
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/activations/class-use/Exp.html b/docs/javadoc/mklab/JGNN/nn/activations/class-use/Exp.html new file mode 100644 index 00000000..df3d5c8c --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/activations/class-use/Exp.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.nn.activations.Exp + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.nn.activations.Exp

    +
    +No usage of mklab.JGNN.nn.activations.Exp
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/activations/class-use/L1.html b/docs/javadoc/mklab/JGNN/nn/activations/class-use/L1.html new file mode 100644 index 00000000..1ccc551b --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/activations/class-use/L1.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.nn.activations.L1 + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.nn.activations.L1

    +
    +No usage of mklab.JGNN.nn.activations.L1
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/activations/class-use/LRelu.html b/docs/javadoc/mklab/JGNN/nn/activations/class-use/LRelu.html new file mode 100644 index 00000000..3ba3d5ed --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/activations/class-use/LRelu.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.nn.activations.LRelu + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.nn.activations.LRelu

    +
    +No usage of mklab.JGNN.nn.activations.LRelu
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/activations/class-use/NExp.html b/docs/javadoc/mklab/JGNN/nn/activations/class-use/NExp.html new file mode 100644 index 00000000..d1e33bce --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/activations/class-use/NExp.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.nn.activations.NExp + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.nn.activations.NExp

    +
    +No usage of mklab.JGNN.nn.activations.NExp
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/activations/class-use/PRelu.html b/docs/javadoc/mklab/JGNN/nn/activations/class-use/PRelu.html new file mode 100644 index 00000000..865d2f29 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/activations/class-use/PRelu.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.nn.activations.PRelu + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.nn.activations.PRelu

    +
    +No usage of mklab.JGNN.nn.activations.PRelu
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/activations/class-use/Relu.html b/docs/javadoc/mklab/JGNN/nn/activations/class-use/Relu.html new file mode 100644 index 00000000..5e4aa76c --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/activations/class-use/Relu.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.nn.activations.Relu + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.nn.activations.Relu

    +
    +No usage of mklab.JGNN.nn.activations.Relu
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/activations/class-use/Sigmoid.html b/docs/javadoc/mklab/JGNN/nn/activations/class-use/Sigmoid.html new file mode 100644 index 00000000..9a5ef820 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/activations/class-use/Sigmoid.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.nn.activations.Sigmoid + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.nn.activations.Sigmoid

    +
    +No usage of mklab.JGNN.nn.activations.Sigmoid
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/activations/class-use/Tanh.html b/docs/javadoc/mklab/JGNN/nn/activations/class-use/Tanh.html new file mode 100644 index 00000000..3d980cd0 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/activations/class-use/Tanh.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.nn.activations.Tanh + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.nn.activations.Tanh

    +
    +No usage of mklab.JGNN.nn.activations.Tanh
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/activations/package-summary.html b/docs/javadoc/mklab/JGNN/nn/activations/package-summary.html new file mode 100644 index 00000000..ff064d88 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/activations/package-summary.html @@ -0,0 +1,129 @@ + + + + +mklab.JGNN.nn.activations + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Package mklab.JGNN.nn.activations

    +
    +
    +
    package mklab.JGNN.nn.activations
    +
    +
      +
    • + +
    • +
    • +
      +
      Classes
      +
      +
      Class
      +
      Description
      + +
      +
      Implements a NNOperation that performs an exponential transformation of its single input.
      +
      + +
      +
Implements a NNOperation that performs an L1 transformation of its single input + by row or column.
      +
      + +
      +
Implements a NNOperation that performs a leaky relu operation, where the first argument is a tensor on which + it is applied and the second one should be a tensor wrapping a double value (consider initializing this as a + Constant holding a tensor generated with Tensor.fromDouble(double)) where + the wrapped value indicates the negative region's slope.
      +
      + +
      +
      Implements a NNOperation that performs an exponential transformation of + its single input, but only on the non-zero elements.
      +
      + +
       
      + +
      +
      Implements a NNOperation that performs a relu transformation of its single input first introduced by + Hahnloser, Richard HR, Rahul Sarpeshkar, Misha A.
      +
      + +
      +
      Implements a NNOperation that performs a sigmoid transformation of its single input.
      +
      + +
      +
      Implements a NNOperation that performs a tanh transformation of its single input.
      +
      +
      +
      +
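As a concrete illustration of the leaky relu behavior listed in the table above, where a second argument controls the negative region's slope, here is a hedged plain-Java sketch. The slope value 0.01 is an arbitrary demonstration value (not a JGNN default), and this is not the library's Tensor-based implementation:

```java
// Sketch of the leaky relu rule: x for x >= 0, slope * x otherwise.
// The slope 0.01 below is an assumed demonstration value, not a JGNN default.
public class LeakyReluSketch {
    static double leakyRelu(double x, double slope) {
        return x >= 0 ? x : slope * x;
    }

    public static void main(String[] args) {
        System.out.println(leakyRelu(3.0, 0.01));      // prints 3.0
        System.out.println(leakyRelu(-3.0, 0.01) < 0); // prints true
    }
}
```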
    • +
    +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/activations/package-tree.html b/docs/javadoc/mklab/JGNN/nn/activations/package-tree.html new file mode 100644 index 00000000..9e4927bf --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/activations/package-tree.html @@ -0,0 +1,84 @@ + + + + +mklab.JGNN.nn.activations Class Hierarchy + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Hierarchy For Package mklab.JGNN.nn.activations

    +Package Hierarchies: + +
    +
    +

    Class Hierarchy

    +
      +
    • java.lang.Object +
        +
      • mklab.JGNN.nn.NNOperation +
          +
        • mklab.JGNN.nn.activations.Exp
        • +
        • mklab.JGNN.nn.activations.L1
        • +
        • mklab.JGNN.nn.activations.LRelu
        • +
        • mklab.JGNN.nn.activations.NExp
        • +
        • mklab.JGNN.nn.activations.PRelu
        • +
        • mklab.JGNN.nn.activations.Relu
        • +
        • mklab.JGNN.nn.activations.Sigmoid
        • +
        • mklab.JGNN.nn.activations.Tanh
        • +
        +
      • +
      +
    • +
    +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/activations/package-use.html b/docs/javadoc/mklab/JGNN/nn/activations/package-use.html new file mode 100644 index 00000000..0653d0f2 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/activations/package-use.html @@ -0,0 +1,59 @@ + + + + +Uses of Package mklab.JGNN.nn.activations + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Package
    mklab.JGNN.nn.activations

    +
    +No usage of mklab.JGNN.nn.activations
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/class-use/Initializer.html b/docs/javadoc/mklab/JGNN/nn/class-use/Initializer.html new file mode 100644 index 00000000..034868df --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/class-use/Initializer.html @@ -0,0 +1,128 @@ + + + + +Uses of Class mklab.JGNN.nn.Initializer + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.nn.Initializer

    +
    +
    Packages that use Initializer
    +
    +
    Package
    +
    Description
    + +
     
    + +
     
    +
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/class-use/Loss.html b/docs/javadoc/mklab/JGNN/nn/class-use/Loss.html new file mode 100644 index 00000000..f87488f9 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/class-use/Loss.html @@ -0,0 +1,166 @@ + + + + +Uses of Class mklab.JGNN.nn.Loss + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.nn.Loss

    +
    +
    Packages that use Loss
    +
    +
    Package
    +
    Description
    + +
     
    + +
     
    + +
     
    +
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/class-use/Model.html b/docs/javadoc/mklab/JGNN/nn/class-use/Model.html new file mode 100644 index 00000000..ae146244 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/class-use/Model.html @@ -0,0 +1,194 @@ + + + + +Uses of Class mklab.JGNN.nn.Model + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.nn.Model

    +
    +
    Packages that use Model
    +
    +
    Package
    +
    Description
    + +
     
    + +
     
    + +
     
    +
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/class-use/ModelTraining.html b/docs/javadoc/mklab/JGNN/nn/class-use/ModelTraining.html new file mode 100644 index 00000000..4c6fb458 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/class-use/ModelTraining.html @@ -0,0 +1,142 @@ + + + + +Uses of Class mklab.JGNN.nn.ModelTraining + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.nn.ModelTraining

    +
    +
    Packages that use ModelTraining
    +
    +
    Package
    +
    Description
    + +
     
    +
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/class-use/NNOperation.html b/docs/javadoc/mklab/JGNN/nn/class-use/NNOperation.html new file mode 100644 index 00000000..adb65c48 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/class-use/NNOperation.html @@ -0,0 +1,382 @@ + + + + +Uses of Class mklab.JGNN.nn.NNOperation + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.nn.NNOperation

    +
    +
    Packages that use NNOperation
    + +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/class-use/Optimizer.html b/docs/javadoc/mklab/JGNN/nn/class-use/Optimizer.html new file mode 100644 index 00000000..32f70c2b --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/class-use/Optimizer.html @@ -0,0 +1,199 @@ + + + + +Uses of Interface mklab.JGNN.nn.Optimizer + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Interface
    mklab.JGNN.nn.Optimizer

    +
    +
    Packages that use Optimizer
    +
    +
    Package
    +
    Description
    + +
     
    + +
     
    + +
     
    +
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/initializers/KaimingNormal.html b/docs/javadoc/mklab/JGNN/nn/initializers/KaimingNormal.html new file mode 100644 index 00000000..5124dcf0 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/initializers/KaimingNormal.html @@ -0,0 +1,141 @@ + + + + +KaimingNormal + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class KaimingNormal

    +
    + +
    +
    +
    public class KaimingNormal +extends VariancePreservingInitializer
    + +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        KaimingNormal

        +
        public KaimingNormal()
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/initializers/KaimingUniform.html b/docs/javadoc/mklab/JGNN/nn/initializers/KaimingUniform.html new file mode 100644 index 00000000..4df6accb --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/initializers/KaimingUniform.html @@ -0,0 +1,141 @@ + + + + +KaimingUniform + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class KaimingUniform

    +
    + +
    +
    +
    public class KaimingUniform +extends VariancePreservingInitializer
    + +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        KaimingUniform

        +
        public KaimingUniform()
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/initializers/VariancePreservingInitializer.html b/docs/javadoc/mklab/JGNN/nn/initializers/VariancePreservingInitializer.html new file mode 100644 index 00000000..66961891 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/initializers/VariancePreservingInitializer.html @@ -0,0 +1,191 @@ + + + + +VariancePreservingInitializer + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class VariancePreservingInitializer

    +
    +
    java.lang.Object +
    mklab.JGNN.nn.Initializer +
    mklab.JGNN.nn.initializers.VariancePreservingInitializer
    +
    +
    +
    +
    +
    Direct Known Subclasses:
    +
    KaimingNormal, KaimingUniform, XavierNormal, XavierUniform
    +
    +
    +
    public abstract class VariancePreservingInitializer +extends Initializer
    +
    This class describes a broad class of Initializer strategies, in which + dense neural layer initialization is controlled so that variance is mostly preserved from + inputs to outputs to avoid vanishing or exploding gradients in the first training + runs. +
This initializer traverses the execution tree to discover the impact of matrix parameters + on output variances, as eventually determined by backtracking + NNOperation.getNonLinearity(int, double, double) up to non-linear components, + where the latter are identified by the condition getNonLinearity(0, 1, 1)!=1.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
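The variance-preservation strategy described above is conventionally realized through the textbook Xavier (Glorot) and Kaiming (He) formulas. The sketch below computes those standard bounds in plain Java as an assumption about what the KaimingNormal, KaimingUniform, XavierNormal, and XavierUniform subclasses implement; it is not a reproduction of JGNN's internals:

```java
import java.util.Random;

// Textbook variance-preserving bounds (assumed formulas, not JGNN's exact code):
// Xavier uniform samples from U(-a, a) with a = sqrt(6 / (fanIn + fanOut));
// Kaiming normal uses standard deviation sqrt(2 / fanIn), suited to relu.
public class InitSketch {
    static double xavierUniformBound(int fanIn, int fanOut) {
        return Math.sqrt(6.0 / (fanIn + fanOut));
    }

    static double kaimingNormalStd(int fanIn) {
        return Math.sqrt(2.0 / fanIn);
    }

    public static void main(String[] args) {
        double a = xavierUniformBound(64, 32); // sqrt(6/96) = 0.25
        System.out.println(a); // prints 0.25
        Random rng = new Random(42);
        double weight = -a + 2 * a * rng.nextDouble(); // one Xavier-uniform sample
        System.out.println(weight >= -a && weight <= a); // prints true
    }
}
```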
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        VariancePreservingInitializer

        +
        public VariancePreservingInitializer()
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        apply

        +
        public Model apply(Model model)
        +
        Description copied from class: Initializer
        +
        Applies the initializer to a given model's parameters.
        +
        +
        Specified by:
        +
        apply in class Initializer
        +
        Parameters:
        +
        model - The given model.
        +
        Returns:
        +
        The given model after parameters are initialized.
        +
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/initializers/XavierNormal.html b/docs/javadoc/mklab/JGNN/nn/initializers/XavierNormal.html new file mode 100644 index 00000000..f2ad54c4 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/initializers/XavierNormal.html @@ -0,0 +1,141 @@ + + + + +XavierNormal + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class XavierNormal

    +
    + +
    +
    +
    public class XavierNormal +extends VariancePreservingInitializer
    + +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        XavierNormal

        +
        public XavierNormal()
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/initializers/XavierUniform.html b/docs/javadoc/mklab/JGNN/nn/initializers/XavierUniform.html new file mode 100644 index 00000000..d397f393 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/initializers/XavierUniform.html @@ -0,0 +1,141 @@ + + + + +XavierUniform + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class XavierUniform

    +
    + +
    +
    +
    public class XavierUniform +extends VariancePreservingInitializer
    + +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        XavierUniform

        +
        public XavierUniform()
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/initializers/class-use/KaimingNormal.html b/docs/javadoc/mklab/JGNN/nn/initializers/class-use/KaimingNormal.html new file mode 100644 index 00000000..120292ac --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/initializers/class-use/KaimingNormal.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.nn.initializers.KaimingNormal + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.nn.initializers.KaimingNormal

    +
    +No usage of mklab.JGNN.nn.initializers.KaimingNormal
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/initializers/class-use/KaimingUniform.html b/docs/javadoc/mklab/JGNN/nn/initializers/class-use/KaimingUniform.html new file mode 100644 index 00000000..d4e467ee --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/initializers/class-use/KaimingUniform.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.nn.initializers.KaimingUniform + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.nn.initializers.KaimingUniform

    +
    +No usage of mklab.JGNN.nn.initializers.KaimingUniform
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/initializers/class-use/VariancePreservingInitializer.html b/docs/javadoc/mklab/JGNN/nn/initializers/class-use/VariancePreservingInitializer.html new file mode 100644 index 00000000..59caca3b --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/initializers/class-use/VariancePreservingInitializer.html @@ -0,0 +1,101 @@ + + + + +Uses of Class mklab.JGNN.nn.initializers.VariancePreservingInitializer + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.nn.initializers.VariancePreservingInitializer

    +
    + +
    +
    Package
    +
    Description
    + +
     
    +
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/initializers/class-use/XavierNormal.html b/docs/javadoc/mklab/JGNN/nn/initializers/class-use/XavierNormal.html new file mode 100644 index 00000000..8875dfa6 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/initializers/class-use/XavierNormal.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.nn.initializers.XavierNormal + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.nn.initializers.XavierNormal

    +
    +No usage of mklab.JGNN.nn.initializers.XavierNormal
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/initializers/class-use/XavierUniform.html b/docs/javadoc/mklab/JGNN/nn/initializers/class-use/XavierUniform.html new file mode 100644 index 00000000..436c8f4a --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/initializers/class-use/XavierUniform.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.nn.initializers.XavierUniform + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.nn.initializers.XavierUniform

    +
    +No usage of mklab.JGNN.nn.initializers.XavierUniform
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/initializers/package-summary.html b/docs/javadoc/mklab/JGNN/nn/initializers/package-summary.html new file mode 100644 index 00000000..c3e1c221 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/initializers/package-summary.html @@ -0,0 +1,116 @@ + + + + +mklab.JGNN.nn.initializers + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Package mklab.JGNN.nn.initializers

    +
    +
    +
    package mklab.JGNN.nn.initializers
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/initializers/package-tree.html b/docs/javadoc/mklab/JGNN/nn/initializers/package-tree.html new file mode 100644 index 00000000..67d952b2 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/initializers/package-tree.html @@ -0,0 +1,84 @@ + + + + +mklab.JGNN.nn.initializers Class Hierarchy + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Hierarchy For Package mklab.JGNN.nn.initializers

    +Package Hierarchies: + +
    +
    +

    Class Hierarchy

    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/initializers/package-use.html b/docs/javadoc/mklab/JGNN/nn/initializers/package-use.html new file mode 100644 index 00000000..c2e92358 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/initializers/package-use.html @@ -0,0 +1,86 @@ + + + + +Uses of Package mklab.JGNN.nn.initializers + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Package
    mklab.JGNN.nn.initializers

    +
    +
    Packages that use mklab.JGNN.nn.initializers
    +
    +
    Package
    +
    Description
    + +
     
    +
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/inputs/Constant.html b/docs/javadoc/mklab/JGNN/nn/inputs/Constant.html new file mode 100644 index 00000000..f7455208 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/inputs/Constant.html @@ -0,0 +1,226 @@ + + + + +Constant + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class Constant

    +
    + +
    +
    +
    public class Constant +extends Parameter
    +
    Implements a NNOperation that holds a constant tensor. + This value *is not affected by learning* but can be manually updated with the #setTo(Tensor) method.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        Constant

        +
        public Constant(Tensor tensor)
        +
        Creates a constant holding a tensor.
        +
        +
        Parameters:
        +
        tensor - The held tensor.
        +
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        isConstant

        +
        public boolean isConstant()
        +
        Description copied from class: NNOperation
        +
        Checks whether the operation yields a constant output, so that propagation + does not try to compute partial derivatives for it.
        +
        +
        Overrides:
        +
        isConstant in class Parameter
        +
        Returns:
        +
        A boolean value.
        +
        +
        +
      • +
      • +
        +

        isCachable

        +
        public boolean isCachable()
        +
        Description copied from class: NNOperation
        +
Checks whether the operation's output should be cached given that it is a + constant. This returns false only for randomized components that + yield different outputs for the same inputs, such as dropouts.
        +
        +
        Overrides:
        +
        isCachable in class Parameter
        +
        Returns:
        +
A boolean value.
        +
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/inputs/Parameter.html b/docs/javadoc/mklab/JGNN/nn/inputs/Parameter.html new file mode 100644 index 00000000..096f5743 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/inputs/Parameter.html @@ -0,0 +1,275 @@ + + + + +Parameter + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class Parameter

    +
    +
    java.lang.Object +
    mklab.JGNN.nn.NNOperation +
    mklab.JGNN.nn.inputs.Parameter
    +
    +
    +
    +
    +
    Direct Known Subclasses:
    +
    Constant, Variable
    +
    +
    +
    public class Parameter +extends NNOperation
    +
    Implements a NNOperation that holds and returns a parameter tensor. + The held value is tuned by learning.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        Parameter

        +
        public Parameter(Tensor tensor)
        +
        +
      • +
      • +
        +

        Parameter

        +
        public Parameter(Tensor tensor, + double regularization)
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        set

        +
        public Parameter set(Tensor tensor)
        +
        Forcefully sets the parameter's value tensor to the desired value.
        +
        +
        Parameters:
        +
        tensor - The new parameter value.
        +
        Returns:
        +
        this parameter.
        +
        +
        +
      • +
      • +
        +

        get

        +
        public Tensor get()
        +
Gets the parameter's value tensor.
        +
        +
        Returns:
        +
        The current value Tensor.
        +
        +
        +
      • +
      • +
        +

        addInput

        +
        public NNOperation addInput(NNOperation inputComponent)
        +
        +
        Overrides:
        +
        addInput in class NNOperation
        +
        +
        +
      • +
      • +
        +

        isConstant

        +
        public boolean isConstant()
        +
        Description copied from class: NNOperation
        +
        Checks whether the operation yields a constant output, so that propagation + does not try to compute partial derivatives for it.
        +
        +
        Overrides:
        +
        isConstant in class NNOperation
        +
        Returns:
        +
        A boolean value.
        +
        +
        +
      • +
      • +
        +

        isCachable

        +
        public boolean isCachable()
        +
        Description copied from class: NNOperation
        +
        Checks whether the operation's output should be cached given that it is a + constant. This returns false only for randomized components that + yield different outputs from the same inputs, such as dropouts.
        +
        +
        Overrides:
        +
        isCachable in class NNOperation
        +
        Returns:
        +
        A boolean value.
        +
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/inputs/Variable.html b/docs/javadoc/mklab/JGNN/nn/inputs/Variable.html new file mode 100644 index 00000000..3c82fab2 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/inputs/Variable.html @@ -0,0 +1,228 @@ + + + + +Variable + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class Variable

    +
    + +
    +
    +
    public class Variable +extends Parameter
    +
    Implements a NNOperation that represents Model inputs. + Its values can be set using the setTo(Tensor) method.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
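    A Variable models a runtime model input; per the setTo(Tensor) method documented below, feeding it a value could look like this sketch (DenseTensor and its size-based constructor are assumptions about the JGNN core Tensor API):

    ```java
    // Hedged sketch: a Variable is a model input whose value is supplied at run time.
    Variable input = new Variable(); // declared without an initial value
    input.setTo(new mklab.JGNN.core.tensor.DenseTensor(3)); // set before running the model
    ```
    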
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        Variable

        +
        public Variable()
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        isConstant

        +
        public boolean isConstant()
        +
        Description copied from class: NNOperation
        +
        Checks whether the operation yields a constant output, so that propagation + does not try to compute partial derivatives for it.
        +
        +
        Overrides:
        +
        isConstant in class Parameter
        +
        Returns:
        +
        A boolean value.
        +
        +
        +
      • +
      • +
        +

        isCachable

        +
        public boolean isCachable()
        +
        Description copied from class: NNOperation
        +
        Checks whether the operation's output should be cached given that it is a + constant. This returns false only for randomized components that + yield different outputs from the same inputs, such as dropouts.
        +
        +
        Overrides:
        +
        isCachable in class Parameter
        +
        Returns:
        +
        A boolean value.
        +
        +
        +
      • +
      • +
        +

        setTo

        +
        public void setTo(Tensor value)
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/inputs/class-use/Constant.html b/docs/javadoc/mklab/JGNN/nn/inputs/class-use/Constant.html new file mode 100644 index 00000000..ec3ec3e0 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/inputs/class-use/Constant.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.nn.inputs.Constant + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.nn.inputs.Constant

    +
    +No usage of mklab.JGNN.nn.inputs.Constant
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/inputs/class-use/Parameter.html b/docs/javadoc/mklab/JGNN/nn/inputs/class-use/Parameter.html new file mode 100644 index 00000000..2744349e --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/inputs/class-use/Parameter.html @@ -0,0 +1,120 @@ + + + + +Uses of Class mklab.JGNN.nn.inputs.Parameter + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.nn.inputs.Parameter

    +
    +
    Packages that use Parameter
    +
    +
    Package
    +
    Description
    + +
     
    + +
     
    +
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/inputs/class-use/Variable.html b/docs/javadoc/mklab/JGNN/nn/inputs/class-use/Variable.html new file mode 100644 index 00000000..65826024 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/inputs/class-use/Variable.html @@ -0,0 +1,97 @@ + + + + +Uses of Class mklab.JGNN.nn.inputs.Variable + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.nn.inputs.Variable

    +
    +
    Packages that use Variable
    +
    +
    Package
    +
    Description
    + +
     
    +
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/inputs/package-summary.html b/docs/javadoc/mklab/JGNN/nn/inputs/package-summary.html new file mode 100644 index 00000000..e3b7a696 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/inputs/package-summary.html @@ -0,0 +1,105 @@ + + + + +mklab.JGNN.nn.inputs + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Package mklab.JGNN.nn.inputs

    +
    +
    +
    package mklab.JGNN.nn.inputs
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/inputs/package-tree.html b/docs/javadoc/mklab/JGNN/nn/inputs/package-tree.html new file mode 100644 index 00000000..45709e14 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/inputs/package-tree.html @@ -0,0 +1,82 @@ + + + + +mklab.JGNN.nn.inputs Class Hierarchy + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Hierarchy For Package mklab.JGNN.nn.inputs

    +Package Hierarchies: + +
    +
    +

    Class Hierarchy

    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/inputs/package-use.html b/docs/javadoc/mklab/JGNN/nn/inputs/package-use.html new file mode 100644 index 00000000..f11e9120 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/inputs/package-use.html @@ -0,0 +1,102 @@ + + + + +Uses of Package mklab.JGNN.nn.inputs + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Package
    mklab.JGNN.nn.inputs

    +
    +
    Packages that use mklab.JGNN.nn.inputs
    +
    +
    Package
    +
    Description
    + +
     
    + +
     
    +
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/loss/Accuracy.html b/docs/javadoc/mklab/JGNN/nn/loss/Accuracy.html new file mode 100644 index 00000000..194ac8c3 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/loss/Accuracy.html @@ -0,0 +1,230 @@ + + + + +Accuracy + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class Accuracy

    +
    +
    java.lang.Object +
    mklab.JGNN.nn.Loss +
    mklab.JGNN.nn.loss.Accuracy
    +
    +
    +
    +
    +
    public class Accuracy +extends Loss
    +
    Implements an accuracy Loss based on row-by-row comparisons. + Each row of the output should have the same Tensor.argmax() + value as the corresponding row of desired values. + This comparison has no derivative.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    +
      + +
    • +
      +

      Constructor Summary

      +
      Constructors
      +
      +
      Constructor
      +
      Description
      + +
      +
      Instantiates a row-by-row Accuracy loss.
      +
      +
      +
      +
    • + +
    • +
      +

      Method Summary

      +
      +
      +
      +
      +
      Modifier and Type
      +
      Method
      +
      Description
      + +
      derivative(Tensor output, + Tensor desired)
      +
      +
      Provides the derivative of a loss function at its evaluation point.
      +
      +
      double
      +
      evaluate(Tensor output, + Tensor desired)
      +
      +
      Provides a numerical evaluation of a loss function, so that + lower values correspond to better predictions.
      +
      +
      +
      +
      +
      +

      Methods inherited from class java.lang.Object

      +equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
      +
      +
    • +
    +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        Accuracy

        +
        public Accuracy()
        +
        Instantiates a row-by-row Accuracy loss. + For this loss, each row of the output should have the same + value as the corresponding row of desired values. + This comparison has no derivative.
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        evaluate

        +
        public double evaluate(Tensor output, + Tensor desired)
        +
        Description copied from class: Loss
        +
        Provides a numerical evaluation of a loss function, so that + lower values correspond to better predictions.
        +
        +
        Specified by:
        +
        evaluate in class Loss
        +
        Parameters:
        +
        output - A model's estimation of true outputs.
        +
        desired - The expected outputs.
        +
        Returns:
        +
        A double value (it is negative if smaller + values are better).
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        derivative

        +
        public Tensor derivative(Tensor output, + Tensor desired)
        +
        Description copied from class: Loss
        +
        Provides the derivative of a loss function at its evaluation point.
        +
        +
        Specified by:
        +
        derivative in class Loss
        +
        Parameters:
        +
        output - A model's estimation of true outputs.
        +
        desired - The expected outputs.
        +
        Returns:
        +
        A Tensor compliant to the model's estimation.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/loss/BinaryCrossEntropy.html b/docs/javadoc/mklab/JGNN/nn/loss/BinaryCrossEntropy.html new file mode 100644 index 00000000..dd9d4200 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/loss/BinaryCrossEntropy.html @@ -0,0 +1,258 @@ + + + + +BinaryCrossEntropy + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class BinaryCrossEntropy

    +
    +
    java.lang.Object +
    mklab.JGNN.nn.Loss +
    mklab.JGNN.nn.loss.BinaryCrossEntropy
    +
    +
    +
    +
    +
    public class BinaryCrossEntropy +extends Loss
    +
    Implements a binary cross-entropy Loss.
    For more than one output dimension, use CategoricalCrossEntropy
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    +
      + +
    • +
      +

      Constructor Summary

      +
      Constructors
      +
      +
      Constructor
      +
      Description
      + +
      +
      Initializes binary cross entropy with a 1.E-12 epsilon value.
      +
      +
      BinaryCrossEntropy(double epsilon)
      +
      +
      Initializes binary cross entropy with an epsilon value + to bound its outputs in the range [log(epsilon), -log(epsilon)] instead of (-inf, inf).
      +
      +
      +
      +
    • + +
    • +
      +

      Method Summary

      +
      +
      +
      +
      +
      Modifier and Type
      +
      Method
      +
      Description
      + +
      derivative(Tensor output, + Tensor desired)
      +
      +
      Provides the derivative of a loss function at its evaluation point.
      +
      +
      double
      +
      evaluate(Tensor output, + Tensor desired)
      +
      +
      Provides a numerical evaluation of a loss function, so that + lower values correspond to better predictions.
      +
      +
      +
      +
      +
      +

      Methods inherited from class java.lang.Object

      +equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
      +
      +
    • +
    +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        BinaryCrossEntropy

        +
        public BinaryCrossEntropy()
        +
        Initializes binary cross entropy with a 1.E-12 epsilon value. + For more than one output dimension use CategoricalCrossEntropy()
        +
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        BinaryCrossEntropy

        +
        public BinaryCrossEntropy(double epsilon)
        +
        Initializes binary cross entropy with an epsilon value + to bound its outputs in the range [log(epsilon), -log(epsilon)] instead of (-inf, inf). + For more than one output dimension use CategoricalCrossEntropy(double)
        +
        +
        Parameters:
        +
        epsilon - A very small positive double.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        evaluate

        +
        public double evaluate(Tensor output, + Tensor desired)
        +
        Description copied from class: Loss
        +
        Provides a numerical evaluation of a loss function, so that + lower values correspond to better predictions.
        +
        +
        Specified by:
        +
        evaluate in class Loss
        +
        Parameters:
        +
        output - A model's estimation of true outputs.
        +
        desired - The expected outputs.
        +
        Returns:
        +
        A double value (it is negative if smaller + values are better).
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        derivative

        +
        public Tensor derivative(Tensor output, + Tensor desired)
        +
        Description copied from class: Loss
        +
        Provides the derivative of a loss function at its evaluation point.
        +
        +
        Specified by:
        +
        derivative in class Loss
        +
        Parameters:
        +
        output - A model's estimation of true outputs.
        +
        desired - The expected outputs.
        +
        Returns:
        +
        A Tensor compliant to the model's estimation.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/loss/CategoricalCrossEntropy.html b/docs/javadoc/mklab/JGNN/nn/loss/CategoricalCrossEntropy.html new file mode 100644 index 00000000..5328f355 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/loss/CategoricalCrossEntropy.html @@ -0,0 +1,277 @@ + + + + +CategoricalCrossEntropy + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class CategoricalCrossEntropy

    +
    +
    java.lang.Object +
    mklab.JGNN.nn.Loss +
    mklab.JGNN.nn.loss.CategoricalCrossEntropy
    +
    +
    +
    +
    +
    public class CategoricalCrossEntropy +extends Loss
    +
    Implements a categorical cross-entropy Loss.
    + For binary classification of one output use BinaryCrossEntropy.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
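    Combining the constructor and the setMeanReduction(boolean) method documented on this page, configuration can be chained as in the following sketch (the epsilon value shown is an arbitrary illustrative choice, not a recommended default):

    ```java
    // Hedged sketch: categorical cross-entropy with a custom epsilon and
    // mean (instead of the default sum) reduction across data samples.
    CategoricalCrossEntropy loss = new CategoricalCrossEntropy(1.E-8)
            .setMeanReduction(true); // default is sum reduction
    ```
    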
    +
      + +
    • +
      +

      Constructor Summary

      +
      Constructors
      +
      +
      Constructor
      +
      Description
      + +
      +
      Initializes categorical cross entropy with a 1.E-12 epsilon value.
      +
      +
      CategoricalCrossEntropy(double epsilon)
      +
      +
      Initializes categorical cross entropy with an epsilon value + to bound its outputs in the range [log(epsilon), -log(epsilon)] instead of (-inf, inf).
      +
      +
      +
      +
    • + +
    • +
      +

      Method Summary

      +
      +
      +
      +
      +
      Modifier and Type
      +
      Method
      +
      Description
      + +
      derivative(Tensor output, + Tensor desired)
      +
      +
      Provides the derivative of a loss function at its evaluation point.
      +
      +
      double
      +
      evaluate(Tensor output, + Tensor desired)
      +
      +
      Provides a numerical evaluation of a loss function, so that + lower values correspond to better predictions.
      +
      + +
      setMeanReduction(boolean meanReduction)
      +
      +
      Sets the reduction mechanism of categorical cross entropy.
      +
      +
      +
      +
      +
      +

      Methods inherited from class java.lang.Object

      +equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
      +
      +
    • +
    +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        CategoricalCrossEntropy

        +
        public CategoricalCrossEntropy()
        +
        Initializes categorical cross entropy with a 1.E-12 epsilon value. + For binary classification of one output use BinaryCrossEntropy().
        +
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        CategoricalCrossEntropy

        +
        public CategoricalCrossEntropy(double epsilon)
        +
        Initializes categorical cross entropy with an epsilon value + to bound its outputs in the range [log(epsilon), -log(epsilon)] instead of (-inf, inf). + For binary classification of one output use BinaryCrossEntropy(double).
        +
        +
        Parameters:
        +
        epsilon - A very small positive double.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        setMeanReduction

        +
        public CategoricalCrossEntropy setMeanReduction(boolean meanReduction)
        +
        Sets the reduction mechanism of categorical cross entropy. + This can be either a sum or a mean across the categorical cross entropy of all data samples.
        +
        +
        Parameters:
        +
        meanReduction - true to perform mean reduction, false (default) for sum reduction.
        +
        Returns:
        +
        this CategoricalCrossEntropy object.
        +
        +
        +
      • +
      • +
        +

        evaluate

        +
        public double evaluate(Tensor output, + Tensor desired)
        +
        Description copied from class: Loss
        +
        Provides a numerical evaluation of a loss function, so that + lower values correspond to better predictions.
        +
        +
        Specified by:
        +
        evaluate in class Loss
        +
        Parameters:
        +
        output - A model's estimation of true outputs.
        +
        desired - The expected outputs.
        +
        Returns:
        +
        A double value (it is negative if smaller + values are better).
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        derivative

        +
        public Tensor derivative(Tensor output, + Tensor desired)
        +
        Description copied from class: Loss
        +
        Provides the derivative of a loss function at its evaluation point.
        +
        +
        Specified by:
        +
        derivative in class Loss
        +
        Parameters:
        +
        output - A model's estimation of true outputs.
        +
        desired - The expected outputs.
        +
        Returns:
        +
        A Tensor compliant to the model's estimation.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/loss/class-use/Accuracy.html b/docs/javadoc/mklab/JGNN/nn/loss/class-use/Accuracy.html new file mode 100644 index 00000000..73eb3acc --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/loss/class-use/Accuracy.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.nn.loss.Accuracy + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.nn.loss.Accuracy

    +
    +No usage of mklab.JGNN.nn.loss.Accuracy
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/loss/class-use/BinaryCrossEntropy.html b/docs/javadoc/mklab/JGNN/nn/loss/class-use/BinaryCrossEntropy.html new file mode 100644 index 00000000..daebaa81 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/loss/class-use/BinaryCrossEntropy.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.nn.loss.BinaryCrossEntropy + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.nn.loss.BinaryCrossEntropy

    +
    +No usage of mklab.JGNN.nn.loss.BinaryCrossEntropy
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/loss/class-use/CategoricalCrossEntropy.html b/docs/javadoc/mklab/JGNN/nn/loss/class-use/CategoricalCrossEntropy.html new file mode 100644 index 00000000..ce9a5d5d --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/loss/class-use/CategoricalCrossEntropy.html @@ -0,0 +1,86 @@ + + + + +Uses of Class mklab.JGNN.nn.loss.CategoricalCrossEntropy + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.nn.loss.CategoricalCrossEntropy

    +
    +
    Packages that use CategoricalCrossEntropy
    +
    +
    Package
    +
    Description
    + +
     
    +
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/loss/package-summary.html b/docs/javadoc/mklab/JGNN/nn/loss/package-summary.html new file mode 100644 index 00000000..56d2b3ac --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/loss/package-summary.html @@ -0,0 +1,109 @@ + + + + +mklab.JGNN.nn.loss + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Package mklab.JGNN.nn.loss

    +
    +
    +
    package mklab.JGNN.nn.loss
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/loss/package-tree.html b/docs/javadoc/mklab/JGNN/nn/loss/package-tree.html new file mode 100644 index 00000000..aa8fc8c7 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/loss/package-tree.html @@ -0,0 +1,79 @@ + + + + +mklab.JGNN.nn.loss Class Hierarchy + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Hierarchy For Package mklab.JGNN.nn.loss

    +Package Hierarchies: + +
    +
    +

    Class Hierarchy

    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/loss/package-use.html b/docs/javadoc/mklab/JGNN/nn/loss/package-use.html new file mode 100644 index 00000000..3e5da23f --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/loss/package-use.html @@ -0,0 +1,84 @@ + + + + +Uses of Package mklab.JGNN.nn.loss + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Package
    mklab.JGNN.nn.loss

    +
    +
    Packages that use mklab.JGNN.nn.loss
    +
    +
    Package
    +
    Description
    + +
     
    +
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/loss/report/VerboseLoss.html b/docs/javadoc/mklab/JGNN/nn/loss/report/VerboseLoss.html new file mode 100644 index 00000000..d2273b71 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/loss/report/VerboseLoss.html @@ -0,0 +1,289 @@ + + + + +VerboseLoss + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class VerboseLoss

    +
    +
    java.lang.Object +
    mklab.JGNN.nn.Loss +
    mklab.JGNN.nn.loss.report.VerboseLoss
    +
    +
    +
    +
    +
    public class VerboseLoss +extends Loss
    +
    Implements a Loss that wraps other losses and outputs their value during training to an output stream + (to System.out by default). This is the simplest loss wrapper to keep track of training progress.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    See Also:
    +
    + +
    +
    +
    +
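    Using only the constructor and methods documented on this page, wrapping a base loss for progress reporting could be sketched as:

    ```java
    // Hedged sketch: report a wrapped loss every 10 epochs to System.out.
    Loss loss = new VerboseLoss(new CategoricalCrossEntropy())
            .setInterval(10)        // print on epochs 0, 10, 20, ...
            .setStream(System.out); // the documented default, shown explicitly
    ```
    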
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        VerboseLoss

        +
        public VerboseLoss(Loss baseLoss)
        +
        Instantiates a VerboseLoss given a base loss to be wrapped. + Use a method chain to modify when losses should be reported, and which + output stream is used.
        +
        +
        Parameters:
        +
        baseLoss - The base loss whose values are reported.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        reset

        +
        public void reset()
        +
        +
      • +
      • +
        +

        setInterval

        +
        public VerboseLoss setInterval(int every)
        +
        Changes on which epochs the loss should be reported.
        +
        +
        Parameters:
        +
        every - The loss is reported on epochs 0, every, 2*every, ... Default is 1.
        +
        Returns:
        +
        this verbose loss instance.
        +
        +
        +
      • +
      • +
        +

        setStream

        +
        public VerboseLoss setStream(PrintStream out)
        +
        Changes where the output is printed.
        +
        +
        Parameters:
        +
        out - The print stream to print to. Default is System.out.
        +
        Returns:
        +
        this verbose loss instance.
        +
        +
        +
      • +
      • +
        +

        evaluate

        +
        public double evaluate(Tensor output, + Tensor desired)
        +
        Description copied from class: Loss
        +
        Provides a numerical evaluation of a loss function, so that + lower values correspond to better predictions.
        +
        +
        Specified by:
        +
        evaluate in class Loss
        +
        Parameters:
        +
        output - A model's estimation of true outputs.
        +
        desired - The expected outputs.
        +
        Returns:
        +
        A double value (it is negative if smaller + values are better).
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      • +
        +

        derivative

        +
        public Tensor derivative(Tensor output, + Tensor desired)
        +
        Description copied from class: Loss
        +
        Provides the derivative of a loss function at its evaluation point.
        +
        +
        Specified by:
        +
        derivative in class Loss
        +
        Parameters:
        +
        output - A model's estimation of true outputs.
        +
        desired - The expected outputs.
        +
        Returns:
        +
        A Tensor compliant to the model's estimation.
        +
        See Also:
        +
        + +
        +
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/loss/report/class-use/VerboseLoss.html b/docs/javadoc/mklab/JGNN/nn/loss/report/class-use/VerboseLoss.html new file mode 100644 index 00000000..9cc2c0e3 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/loss/report/class-use/VerboseLoss.html @@ -0,0 +1,91 @@ + + + + +Uses of Class mklab.JGNN.nn.loss.report.VerboseLoss + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.nn.loss.report.VerboseLoss

    +
    +
    Packages that use VerboseLoss
    +
    +
    Package
    +
    Description
    + +
     
    +
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/loss/report/package-summary.html b/docs/javadoc/mklab/JGNN/nn/loss/report/package-summary.html new file mode 100644 index 00000000..e321d266 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/loss/report/package-summary.html @@ -0,0 +1,98 @@ + + + + +mklab.JGNN.nn.loss.report + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Package mklab.JGNN.nn.loss.report

    +
    +
    +
    package mklab.JGNN.nn.loss.report
    +
    +
      +
    • + +
    • +
    • +
      +
      Classes
      +
      +
      Class
      +
      Description
      + +
      +
      Implements a Loss that wraps other losses and outputs their value during training to an output stream + (to System.out by default).
      +
      +
      +
      +
    • +
    +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/loss/report/package-tree.html b/docs/javadoc/mklab/JGNN/nn/loss/report/package-tree.html new file mode 100644 index 00000000..7496394b --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/loss/report/package-tree.html @@ -0,0 +1,77 @@ + + + + +mklab.JGNN.nn.loss.report Class Hierarchy + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Hierarchy For Package mklab.JGNN.nn.loss.report

    +Package Hierarchies: + +
    +
    +

    Class Hierarchy

    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/loss/report/package-use.html b/docs/javadoc/mklab/JGNN/nn/loss/report/package-use.html new file mode 100644 index 00000000..e98264e4 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/loss/report/package-use.html @@ -0,0 +1,84 @@ + + + + +Uses of Package mklab.JGNN.nn.loss.report + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Package
    mklab.JGNN.nn.loss.report

    +
    +
    Packages that use mklab.JGNN.nn.loss.report
    +
    +
    Package
    +
    Description
    + +
     
    +
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/operations/Add.html b/docs/javadoc/mklab/JGNN/nn/operations/Add.html new file mode 100644 index 00000000..14ba7bc5 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/operations/Add.html @@ -0,0 +1,148 @@ + + + + +Add + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class Add

    +
    +
    java.lang.Object +
    mklab.JGNN.nn.NNOperation +
    mklab.JGNN.nn.operations.Add
    +
    +
    +
    +
    +
    public class Add +extends NNOperation
    +
    Implements a NNOperation that adds its two inputs.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        Add

        +
        public Add()
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/operations/Attention.html b/docs/javadoc/mklab/JGNN/nn/operations/Attention.html new file mode 100644 index 00000000..88c4db6e --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/operations/Attention.html @@ -0,0 +1,180 @@ + + + + +Attention + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class Attention

    +
    +
    java.lang.Object +
    mklab.JGNN.nn.NNOperation +
    mklab.JGNN.nn.operations.Attention
    +
    +
    +
    +
    +
    public class Attention +extends NNOperation
    +
    Implements a NNOperation that creates a version of adjacency matrices + with column-wise attention involving neighbor similarity.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        Attention

        +
        public Attention()
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      + +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/operations/Complement.html b/docs/javadoc/mklab/JGNN/nn/operations/Complement.html new file mode 100644 index 00000000..23ad1fbb --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/operations/Complement.html @@ -0,0 +1,148 @@ + + + + +Complement + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class Complement

    +
    +
    java.lang.Object +
    mklab.JGNN.nn.NNOperation +
    mklab.JGNN.nn.operations.Complement
    +
    +
    +
    +
    +
    public class Complement +extends NNOperation
    +
Implements a NNOperation that performs the operation 1-x for its single input x.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        Complement

        +
        public Complement()
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/operations/Concat.html b/docs/javadoc/mklab/JGNN/nn/operations/Concat.html new file mode 100644 index 00000000..06ac6759 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/operations/Concat.html @@ -0,0 +1,148 @@ + + + + +Concat + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class Concat

    +
    +
    java.lang.Object +
    mklab.JGNN.nn.NNOperation +
    mklab.JGNN.nn.operations.Concat
    +
    +
    +
    +
    +
    public class Concat +extends NNOperation
    +
    Implements a NNOperation that concatenates its two matrix inputs.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        Concat

        +
        public Concat()
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/operations/Dropout.html b/docs/javadoc/mklab/JGNN/nn/operations/Dropout.html new file mode 100644 index 00000000..81bd14c1 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/operations/Dropout.html @@ -0,0 +1,211 @@ + + + + +Dropout + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class Dropout

    +
    +
    java.lang.Object +
    mklab.JGNN.nn.NNOperation +
    mklab.JGNN.nn.operations.Dropout
    +
    +
    +
    +
    +
    public class Dropout +extends NNOperation
    +
Implements a NNOperation that applies dropout to its single input, randomly + zeroing elements while enabled; dropout can be toggled via setEnabled(boolean).
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        Dropout

        +
        public Dropout()
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        isEnabled

        +
        public boolean isEnabled()
        +
        +
      • +
      • +
        +

        setEnabled

        +
        public void setEnabled(boolean enabled)
        +
        +
      • +
      • +
        +

        isCachable

        +
        public boolean isCachable()
        +
        Description copied from class: NNOperation
        +
Checks whether the operation's output should be cached given that it is a + constant. This returns false only for randomized components that + yield different outputs for the same inputs, such as dropouts.
        +
        +
        Overrides:
        +
        isCachable in class NNOperation
        +
        Returns:
        +
A boolean value.
        +
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/operations/From.html b/docs/javadoc/mklab/JGNN/nn/operations/From.html new file mode 100644 index 00000000..b1d334a3 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/operations/From.html @@ -0,0 +1,192 @@ + + + + +From + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class From

    +
    +
    java.lang.Object +
    mklab.JGNN.nn.NNOperation +
    mklab.JGNN.nn.operations.From
    +
    +
    +
    +
    +
    public class From +extends NNOperation
    +
Implements a NNOperation that lists the first element of each entry in a 2D matrix's element iterator.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        From

        +
        public From()
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        isCachable

        +
        public boolean isCachable()
        +
        Description copied from class: NNOperation
        +
Checks whether the operation's output should be cached given that it is a + constant. This returns false only for randomized components that + yield different outputs for the same inputs, such as dropouts.
        +
        +
        Overrides:
        +
        isCachable in class NNOperation
        +
        Returns:
        +
A boolean value.
        +
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/operations/Gather.html b/docs/javadoc/mklab/JGNN/nn/operations/Gather.html new file mode 100644 index 00000000..327d593e --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/operations/Gather.html @@ -0,0 +1,148 @@ + + + + +Gather + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class Gather

    +
    +
    java.lang.Object +
    mklab.JGNN.nn.NNOperation +
    mklab.JGNN.nn.operations.Gather
    +
    +
    +
    +
    +
    public class Gather +extends NNOperation
    +
    Implements a NNOperation that performs the equivalent of TensorFlow's gather operation.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        Gather

        +
        public Gather()
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/operations/Identity.html b/docs/javadoc/mklab/JGNN/nn/operations/Identity.html new file mode 100644 index 00000000..eaf12686 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/operations/Identity.html @@ -0,0 +1,148 @@ + + + + +Identity + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class Identity

    +
    +
    java.lang.Object +
    mklab.JGNN.nn.NNOperation +
    mklab.JGNN.nn.operations.Identity
    +
    +
    +
    +
    +
    public class Identity +extends NNOperation
    +
    Implements a NNOperation that just transfers its single input.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        Identity

        +
        public Identity()
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/operations/LSTM.LSTMState.html b/docs/javadoc/mklab/JGNN/nn/operations/LSTM.LSTMState.html new file mode 100644 index 00000000..5dae2d10 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/operations/LSTM.LSTMState.html @@ -0,0 +1,175 @@ + + + + +LSTM.LSTMState + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class LSTM.LSTMState

    +
    +
    java.lang.Object +
    mklab.JGNN.nn.operations.LSTM.LSTMState
    +
    +
    +
    +
    Enclosing class:
    +
    LSTM
    +
    +
    +
    public static class LSTM.LSTMState +extends Object
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        LSTMState

        +
        public LSTMState(Tensor previousMemory, + Tensor previousOutput)
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        getMemory

        +
        public Tensor getMemory()
        +
        +
      • +
      • +
        +

        getOutput

        +
        public Tensor getOutput()
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/operations/LSTM.html b/docs/javadoc/mklab/JGNN/nn/operations/LSTM.html new file mode 100644 index 00000000..884f995b --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/operations/LSTM.html @@ -0,0 +1,270 @@ + + + + +LSTM + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class LSTM

    +
    +
    java.lang.Object +
    mklab.JGNN.nn.operations.LSTM
    +
    +
    +
    +
    public class LSTM +extends Object
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        LSTM

        +
        public LSTM(Optimizer optimizer, + int inputSize, + int outputSize)
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        getOptimizer

        +
        public Optimizer getOptimizer()
        +
        +
      • +
      • +
        +

        createFirstState

        +
        public LSTM.LSTMState createFirstState()
        +
        +
      • +
      • +
        +

        output

        +
        public LSTM.LSTMState output(Tensor input, + LSTM.LSTMState previousState)
        +
        +
      • +
      • +
        +

        startTape

        +
        public void startTape()
        +
        +
      • +
      • +
        +

        train

        +
        public double train(Tensor[] inputs, + Tensor output)
        +
        +
      • +
      • +
        +

        trainOnOutputError

        +
        public void trainOnOutputError(Tensor[] inputs, + Tensor outputGradient)
        +
        +
      • +
      • +
        +

        predict

        +
        public Tensor predict(Tensor[] inputs)
        +
        +
      • +
      • +
        +

        updateTape

        +
        public Tensor updateTape(Tensor input, + LSTM.LSTMState previousState, + Tensor outputErrorGradient)
        +
        +
      • +
      • +
        +

        endTape

        +
        public void endTape()
        +
        +
      • +
      • +
        +

        aggregate

        +
        public void aggregate(LSTM lstm)
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/operations/Log.html b/docs/javadoc/mklab/JGNN/nn/operations/Log.html new file mode 100644 index 00000000..51def8e4 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/operations/Log.html @@ -0,0 +1,148 @@ + + + + +Log + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class Log

    +
    +
    java.lang.Object +
    mklab.JGNN.nn.NNOperation +
    mklab.JGNN.nn.operations.Log
    +
    +
    +
    +
    +
    public class Log +extends NNOperation
    +
    Implements a NNOperation that outputs the natural logarithm of its single input.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        Log

        +
        public Log()
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/operations/MatMul.html b/docs/javadoc/mklab/JGNN/nn/operations/MatMul.html new file mode 100644 index 00000000..d17ad4d1 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/operations/MatMul.html @@ -0,0 +1,203 @@ + + + + +MatMul + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class MatMul

    +
    +
    java.lang.Object +
    mklab.JGNN.nn.NNOperation +
    mklab.JGNN.nn.operations.MatMul
    +
    +
    +
    +
    +
    public class MatMul +extends NNOperation
    +
    Implements a NNOperation that multiplies its two matrix inputs.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        MatMul

        +
        public MatMul()
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        getNonLinearity

        +
        public double getNonLinearity(int inputId, + double inputMass, + double outputNonLinearity)
        +
        Description copied from class: NNOperation
        +
        Retrieves the degree of non-linearity of the operation to be used by + VariancePreservingInitializer. Default is + one for operations like addition, multiplication, and matrix multiplication, + and is different only for activation functions.
        +
        +
        Overrides:
        +
        getNonLinearity in class NNOperation
        +
        Parameters:
        +
        inputId - The input for which the non-linearity is + calculated.
        +
        inputMass - The fraction of (matrix) parameters affecting the + calculation coming from the respective input.
        +
        outputNonLinearity - The output's non-linearity gain.
        +
        Returns:
        +
A double describing the non-linearity.
        +
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/operations/Multiply.html b/docs/javadoc/mklab/JGNN/nn/operations/Multiply.html new file mode 100644 index 00000000..a139837c --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/operations/Multiply.html @@ -0,0 +1,203 @@ + + + + +Multiply + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class Multiply

    +
    +
    java.lang.Object +
    mklab.JGNN.nn.NNOperation +
    mklab.JGNN.nn.operations.Multiply
    +
    +
    +
    +
    +
    public class Multiply +extends NNOperation
    +
    Implements a NNOperation that multiplies its two inputs element-by-element.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        Multiply

        +
        public Multiply()
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        getNonLinearity

        +
        public double getNonLinearity(int inputId, + double inputMass, + double outputNonLinearity)
        +
        Description copied from class: NNOperation
        +
        Retrieves the degree of non-linearity of the operation to be used by + VariancePreservingInitializer. Default is + one for operations like addition, multiplication, and matrix multiplication, + and is different only for activation functions.
        +
        +
        Overrides:
        +
        getNonLinearity in class NNOperation
        +
        Parameters:
        +
        inputId - The input for which the non-linearity is + calculated.
        +
        inputMass - The fraction of (matrix) parameters affecting the + calculation coming from the respective input.
        +
        outputNonLinearity - The output's non-linearity gain.
        +
        Returns:
        +
A double describing the non-linearity.
        +
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/operations/Reduce.html b/docs/javadoc/mklab/JGNN/nn/operations/Reduce.html new file mode 100644 index 00000000..b47b38ab --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/operations/Reduce.html @@ -0,0 +1,143 @@ + + + + +Reduce + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class Reduce

    +
    +
    java.lang.Object +
    mklab.JGNN.nn.NNOperation +
    mklab.JGNN.nn.operations.Reduce
    +
    +
    +
    +
    +
    public class Reduce +extends NNOperation
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        Reduce

        +
        public Reduce()
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/operations/Repeat.html b/docs/javadoc/mklab/JGNN/nn/operations/Repeat.html new file mode 100644 index 00000000..b594ba37 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/operations/Repeat.html @@ -0,0 +1,149 @@ + + + + +Repeat + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class Repeat

    +
    +
    java.lang.Object +
    mklab.JGNN.nn.NNOperation +
    mklab.JGNN.nn.operations.Repeat
    +
    +
    +
    +
    +
    public class Repeat +extends NNOperation
    +
    Implements a NNOperation that converts its first argument to a ColumnRepetition matrix + with a number of columns equal to the second argument.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        Repeat

        +
        public Repeat()
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/operations/Reshape.html b/docs/javadoc/mklab/JGNN/nn/operations/Reshape.html new file mode 100644 index 00000000..a763f458 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/operations/Reshape.html @@ -0,0 +1,224 @@ + + + + +Reshape + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class Reshape

    +
    +
    java.lang.Object +
    mklab.JGNN.nn.NNOperation +
    mklab.JGNN.nn.operations.Reshape
    +
    +
    +
    +
    +
    public class Reshape +extends NNOperation
    +
    Implements a NNOperation that reshapes a matrix.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        Reshape

        +
        public Reshape(long rows, + long cols)
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        getSimpleDescription

        +
        public String getSimpleDescription()
        +
        Description copied from class: NNOperation
        +
        Provides a simple description to show when drawing .dot format diagrams.
        +
        +
        Overrides:
        +
        getSimpleDescription in class NNOperation
        +
        Returns:
        +
        A string description, usually the component's class name.
        +
        +
        +
      • +
      • +
        +

        isCachable

        +
        public boolean isCachable()
        +
        Description copied from class: NNOperation
        +
Checks whether the operation's output should be cached given that it is a + constant. This returns false only for randomized components that + yield different outputs for the same inputs, such as dropouts.
        +
        +
        Overrides:
        +
        isCachable in class NNOperation
        +
        Returns:
        +
A boolean value.
        +
        +
        +
      • +
      • +
        +

        setDimensionName

        +
        public Reshape setDimensionName(String rowName, + String colName)
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/operations/To.html b/docs/javadoc/mklab/JGNN/nn/operations/To.html new file mode 100644 index 00000000..36eeab15 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/operations/To.html @@ -0,0 +1,198 @@ + + + + +To + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class To

    +
    +
    java.lang.Object +
    mklab.JGNN.nn.NNOperation +
    mklab.JGNN.nn.operations.To
    +
    +
    +
    +
    +
    public class To +extends NNOperation
    +
Implements a NNOperation that lists the second element of each entry in a 2D matrix's element iterator.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    See Also:
    +
    + +
    +
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        To

        +
        public To()
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        isCachable

        +
        public boolean isCachable()
        +
        Description copied from class: NNOperation
        +
Checks whether the operation's output should be cached given that it is a + constant. This returns false only for randomized components that + yield different outputs for the same inputs, such as dropouts.
        +
        +
        Overrides:
        +
        isCachable in class NNOperation
        +
        Returns:
        +
A boolean value.
        +
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/operations/Transpose.html b/docs/javadoc/mklab/JGNN/nn/operations/Transpose.html new file mode 100644 index 00000000..d5796ea7 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/operations/Transpose.html @@ -0,0 +1,148 @@ + + + + +Transpose + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class Transpose

    +
    +
    java.lang.Object +
    mklab.JGNN.nn.NNOperation +
    mklab.JGNN.nn.operations.Transpose
    +
    +
    +
    +
    +
    public class Transpose +extends NNOperation
    +
    Implements a NNOperation that performs matrix transposition.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        Transpose

        +
        public Transpose()
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/operations/class-use/Add.html b/docs/javadoc/mklab/JGNN/nn/operations/class-use/Add.html new file mode 100644 index 00000000..91070433 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/operations/class-use/Add.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.nn.operations.Add + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.nn.operations.Add

    +
    +No usage of mklab.JGNN.nn.operations.Add
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/operations/class-use/Attention.html b/docs/javadoc/mklab/JGNN/nn/operations/class-use/Attention.html new file mode 100644 index 00000000..67ac7c92 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/operations/class-use/Attention.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.nn.operations.Attention + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.nn.operations.Attention

    +
    +No usage of mklab.JGNN.nn.operations.Attention
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/operations/class-use/Complement.html b/docs/javadoc/mklab/JGNN/nn/operations/class-use/Complement.html new file mode 100644 index 00000000..253655b9 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/operations/class-use/Complement.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.nn.operations.Complement + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.nn.operations.Complement

    +
    +No usage of mklab.JGNN.nn.operations.Complement
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/operations/class-use/Concat.html b/docs/javadoc/mklab/JGNN/nn/operations/class-use/Concat.html new file mode 100644 index 00000000..e9c64d90 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/operations/class-use/Concat.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.nn.operations.Concat + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.nn.operations.Concat

    +
    +No usage of mklab.JGNN.nn.operations.Concat
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/operations/class-use/Dropout.html b/docs/javadoc/mklab/JGNN/nn/operations/class-use/Dropout.html new file mode 100644 index 00000000..494b6f62 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/operations/class-use/Dropout.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.nn.operations.Dropout + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.nn.operations.Dropout

    +
    +No usage of mklab.JGNN.nn.operations.Dropout
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/operations/class-use/From.html b/docs/javadoc/mklab/JGNN/nn/operations/class-use/From.html new file mode 100644 index 00000000..ee55a6a7 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/operations/class-use/From.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.nn.operations.From + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.nn.operations.From

    +
    +No usage of mklab.JGNN.nn.operations.From
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/operations/class-use/Gather.html b/docs/javadoc/mklab/JGNN/nn/operations/class-use/Gather.html new file mode 100644 index 00000000..93198976 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/operations/class-use/Gather.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.nn.operations.Gather + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.nn.operations.Gather

    +
    +No usage of mklab.JGNN.nn.operations.Gather
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/operations/class-use/Identity.html b/docs/javadoc/mklab/JGNN/nn/operations/class-use/Identity.html new file mode 100644 index 00000000..d6ad398c --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/operations/class-use/Identity.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.nn.operations.Identity + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.nn.operations.Identity

    +
    +No usage of mklab.JGNN.nn.operations.Identity
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/operations/class-use/LSTM.LSTMState.html b/docs/javadoc/mklab/JGNN/nn/operations/class-use/LSTM.LSTMState.html new file mode 100644 index 00000000..c1dbb4f2 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/operations/class-use/LSTM.LSTMState.html @@ -0,0 +1,103 @@ + + + + +Uses of Class mklab.JGNN.nn.operations.LSTM.LSTMState + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.nn.operations.LSTM.LSTMState

    +
    +
    Packages that use LSTM.LSTMState
    +
    +
    Package
    +
    Description
    + +
     
    +
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/operations/class-use/LSTM.html b/docs/javadoc/mklab/JGNN/nn/operations/class-use/LSTM.html new file mode 100644 index 00000000..38e8a716 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/operations/class-use/LSTM.html @@ -0,0 +1,84 @@ + + + + +Uses of Class mklab.JGNN.nn.operations.LSTM + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.nn.operations.LSTM

    +
    +
    Packages that use LSTM
    +
    +
    Package
    +
    Description
    + +
     
    +
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/operations/class-use/Log.html b/docs/javadoc/mklab/JGNN/nn/operations/class-use/Log.html new file mode 100644 index 00000000..74737b8f --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/operations/class-use/Log.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.nn.operations.Log + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.nn.operations.Log

    +
    +No usage of mklab.JGNN.nn.operations.Log
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/operations/class-use/MatMul.html b/docs/javadoc/mklab/JGNN/nn/operations/class-use/MatMul.html new file mode 100644 index 00000000..7b72ff18 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/operations/class-use/MatMul.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.nn.operations.MatMul + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.nn.operations.MatMul

    +
    +No usage of mklab.JGNN.nn.operations.MatMul
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/operations/class-use/Multiply.html b/docs/javadoc/mklab/JGNN/nn/operations/class-use/Multiply.html new file mode 100644 index 00000000..6d38a6e1 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/operations/class-use/Multiply.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.nn.operations.Multiply + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.nn.operations.Multiply

    +
    +No usage of mklab.JGNN.nn.operations.Multiply
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/operations/class-use/Reduce.html b/docs/javadoc/mklab/JGNN/nn/operations/class-use/Reduce.html new file mode 100644 index 00000000..edef4105 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/operations/class-use/Reduce.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.nn.operations.Reduce + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.nn.operations.Reduce

    +
    +No usage of mklab.JGNN.nn.operations.Reduce
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/operations/class-use/Repeat.html b/docs/javadoc/mklab/JGNN/nn/operations/class-use/Repeat.html new file mode 100644 index 00000000..ce60a34e --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/operations/class-use/Repeat.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.nn.operations.Repeat + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.nn.operations.Repeat

    +
    +No usage of mklab.JGNN.nn.operations.Repeat
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/operations/class-use/Reshape.html b/docs/javadoc/mklab/JGNN/nn/operations/class-use/Reshape.html new file mode 100644 index 00000000..e23b354b --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/operations/class-use/Reshape.html @@ -0,0 +1,85 @@ + + + + +Uses of Class mklab.JGNN.nn.operations.Reshape + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.nn.operations.Reshape

    +
    +
    Packages that use Reshape
    +
    +
    Package
    +
    Description
    + +
     
    +
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/operations/class-use/To.html b/docs/javadoc/mklab/JGNN/nn/operations/class-use/To.html new file mode 100644 index 00000000..f6a05956 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/operations/class-use/To.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.nn.operations.To + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.nn.operations.To

    +
    +No usage of mklab.JGNN.nn.operations.To
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/operations/class-use/Transpose.html b/docs/javadoc/mklab/JGNN/nn/operations/class-use/Transpose.html new file mode 100644 index 00000000..b3023ef6 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/operations/class-use/Transpose.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.nn.operations.Transpose + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.nn.operations.Transpose

    +
    +No usage of mklab.JGNN.nn.operations.Transpose
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/operations/package-summary.html b/docs/javadoc/mklab/JGNN/nn/operations/package-summary.html new file mode 100644 index 00000000..d7734705 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/operations/package-summary.html @@ -0,0 +1,162 @@ + + + + +mklab.JGNN.nn.operations + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Package mklab.JGNN.nn.operations

    +
    +
    +
    package mklab.JGNN.nn.operations
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/operations/package-tree.html b/docs/javadoc/mklab/JGNN/nn/operations/package-tree.html new file mode 100644 index 00000000..6137e38a --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/operations/package-tree.html @@ -0,0 +1,94 @@ + + + + +mklab.JGNN.nn.operations Class Hierarchy + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Hierarchy For Package mklab.JGNN.nn.operations

    +Package Hierarchies: + +
    +
    +

    Class Hierarchy

    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/operations/package-use.html b/docs/javadoc/mklab/JGNN/nn/operations/package-use.html new file mode 100644 index 00000000..5f3e7acf --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/operations/package-use.html @@ -0,0 +1,87 @@ + + + + +Uses of Package mklab.JGNN.nn.operations + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Package
    mklab.JGNN.nn.operations

    +
    +
    Packages that use mklab.JGNN.nn.operations
    +
    +
    Package
    +
    Description
    + +
     
    +
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/optimizers/Adam.html b/docs/javadoc/mklab/JGNN/nn/optimizers/Adam.html new file mode 100644 index 00000000..85f5cb99 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/optimizers/Adam.html @@ -0,0 +1,289 @@ + + + + +Adam + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class Adam

    +
    +
    java.lang.Object +
    mklab.JGNN.nn.optimizers.Adam
    +
    +
    +
    +
    All Implemented Interfaces:
    +
    Optimizer
    +
    +
    +
    public class Adam +extends Object +implements Optimizer
    +
This class implements an Adam Optimizer as explained in the paper: + Kingma, Diederik P., and Jimmy Ba. "Adam: A method for stochastic optimization." arXiv preprint arXiv:1412.6980 (2014). + +
    + It also supports the NDAdam improvement, which ports advantages of SGD to Adam, as introduced in the paper: + Zhang, Zijun. "Improved adam optimizer for deep neural networks." 2018 IEEE/ACM 26th International Symposium on Quality of Service (IWQoS). IEEE, 2018. +
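The update rule documented above can be sketched in plain Java as follows. This is a minimal standalone illustration of the Adam algorithm from the cited paper (without the NDAdam variant), operating on double arrays instead of JGNN Tensors; the class and field names are hypothetical and this is not JGNN's implementation.

```java
// Standalone sketch of the Adam update rule (Kingma & Ba, 2014);
// illustrative only, not JGNN's own code.
class AdamSketch {
    private final double learningRate, b1, b2, epsilon;
    private double[] m, v; // first and second raw moment estimates
    private int t = 0;     // timestep, used for bias correction

    AdamSketch(double learningRate, double b1, double b2, double epsilon) {
        this.learningRate = learningRate;
        this.b1 = b1;
        this.b2 = b2;
        this.epsilon = epsilon;
    }

    // In-place updates value given its gradient, mirroring the
    // update(Tensor, Tensor) contract described for this class.
    void update(double[] value, double[] gradient) {
        if (m == null) {
            m = new double[value.length];
            v = new double[value.length];
        }
        t++;
        for (int i = 0; i < value.length; i++) {
            m[i] = b1 * m[i] + (1 - b1) * gradient[i];
            v[i] = b2 * v[i] + (1 - b2) * gradient[i] * gradient[i];
            double mHat = m[i] / (1 - Math.pow(b1, t)); // bias-corrected moments
            double vHat = v[i] / (1 - Math.pow(b2, t));
            value[i] -= learningRate * mHat / (Math.sqrt(vHat) + epsilon);
        }
    }
}
```

Because the moment estimates m and v live inside the sketch, the same instance must see the same array across calls, which is the tensor-identity constraint the update method's documentation mentions.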
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    +
      + +
    • +
      +

      Constructor Summary

      +
      Constructors
      +
      +
      Constructor
      +
      Description
      + +
      +
      Initializes an NDAdam instance of an Adam optimizer with the default parameters recommended by the papers.
      +
      +
      Adam(boolean NDmode, + double learningRate)
      +
      +
      Initializes an Adam optimizer with the default parameters recommended by the papers + but allows for the specification of the learning rate and whether NDAdam or simple Adam + is used.
      +
      +
      Adam(boolean NDmode, + double learningRate, + double b1, + double b2)
      +
      +
Initializes an NDAdam instance of an Adam optimizer with custom parameters.
      +
      +
      Adam(boolean NDmode, + double learningRate, + double b1, + double b2, + double epsilon)
      +
       
      +
      Adam(double learningRate)
      +
      +
      Initializes an NDAdam instance of an Adam optimizer with the default parameters recommended by the papers + but allows for the specification of the learning rate.
      +
      +
      +
      +
    • + +
    • +
      +

      Method Summary

      +
      +
      +
      +
      +
      Modifier and Type
      +
      Method
      +
      Description
      +
      void
      + +
      +
      Resets (and lets the garbage collector free) optimizer memory.
      +
      +
      void
      +
      update(Tensor value, + Tensor gradient)
      +
      +
      In-place updates the value of a tensor given its gradient.
      +
      +
      +
      +
      +
      +

      Methods inherited from class java.lang.Object

      +equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
      +
      +
    • +
    +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        Adam

        +
        public Adam()
        +
        Initializes an NDAdam instance of an Adam optimizer with the default parameters recommended by the papers.
        +
        +
      • +
      • +
        +

        Adam

        +
        public Adam(double learningRate)
        +
        Initializes an NDAdam instance of an Adam optimizer with the default parameters recommended by the papers + but allows for the specification of the learning rate.
        +
        +
        Parameters:
        +
        learningRate - The learning rate.
        +
        +
        +
      • +
      • +
        +

        Adam

        +
        public Adam(boolean NDmode, + double learningRate)
        +
        Initializes an Adam optimizer with the default parameters recommended by the papers + but allows for the specification of the learning rate and whether NDAdam or simple Adam + is used.
        +
        +
        Parameters:
        +
        NDmode - Should be true to use NDAdam and false to use simple Adam optimization.
        +
        learningRate - The learning rate.
        +
        +
        +
      • +
      • +
        +

        Adam

        +
        public Adam(boolean NDmode, + double learningRate, + double b1, + double b2)
        +
Initializes an NDAdam instance of an Adam optimizer with custom parameters.
        +
        +
        Parameters:
        +
NDmode - Should be true to use NDAdam and false to use simple Adam optimization.
+
learningRate - The learning rate.
+
b1 - The exponential decay rate of the first moment estimate.
+
b2 - The exponential decay rate of the second moment estimate.
        +
        +
        +
      • +
      • +
        +

        Adam

        +
        public Adam(boolean NDmode, + double learningRate, + double b1, + double b2, + double epsilon)
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        update

        +
        public void update(Tensor value, + Tensor gradient)
        +
        Description copied from interface: Optimizer
        +
In-place updates the value of a tensor given its gradient. + Some optimizers (e.g. Adam) require the exact same tensor instance to be provided + so as to keep track of its optimization progress. The library makes sure to satisfy + this constraint.
        +
        +
        Specified by:
        +
        update in interface Optimizer
        +
        Parameters:
        +
        value - The tensor to update.
        +
        gradient - The tensor's gradient.
        +
        +
        +
      • +
      • +
        +

        reset

        +
        public void reset()
        +
        Description copied from interface: Optimizer
        +
        Resets (and lets the garbage collector free) optimizer memory. + Should be called at the beginning of training (not after each epoch).
        +
        +
        Specified by:
        +
        reset in interface Optimizer
        +
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/optimizers/BatchOptimizer.html b/docs/javadoc/mklab/JGNN/nn/optimizers/BatchOptimizer.html new file mode 100644 index 00000000..955636af --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/optimizers/BatchOptimizer.html @@ -0,0 +1,252 @@ + + + + +BatchOptimizer + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class BatchOptimizer

    +
    +
    java.lang.Object +
    mklab.JGNN.nn.optimizers.BatchOptimizer
    +
    +
    +
    +
    All Implemented Interfaces:
    +
    Optimizer
    +
    +
    +
    public class BatchOptimizer +extends Object +implements Optimizer
    +
Wraps an Optimizer by accumulating derivatives and calling + Optimizer.update(Tensor, Tensor) with the average derivative + after a fixed number of accumulations. Accumulation restarts after + update. Provides a method updateAll() to update all accumulated + derivatives, for example in case the constructor BatchOptimizer(Optimizer) + is used without specifying a fixed number of derivative updates.
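The accumulate-then-average behavior described above can be sketched with plain arrays. The names are illustrative and the base update rule is a fixed-rate step rather than a wrapped Optimizer; this is not JGNN's BatchOptimizer.

```java
// Minimal sketch of batched derivative accumulation: gradients are summed
// and their average is applied once a fixed number of updates is reached,
// after which accumulation restarts. Illustrative only.
class BatchAccumulatorSketch {
    private final double learningRate;
    private final int batchSize;
    private double[] accumulated;
    private int count = 0;

    BatchAccumulatorSketch(double learningRate, int batchSize) {
        this.learningRate = learningRate;
        this.batchSize = batchSize;
    }

    void update(double[] value, double[] gradient) {
        if (accumulated == null)
            accumulated = new double[gradient.length];
        for (int i = 0; i < gradient.length; i++)
            accumulated[i] += gradient[i];
        if (++count == batchSize)
            updateAll(value); // flush once the batch size is reached
    }

    void updateAll(double[] value) {
        if (count == 0)
            return; // nothing accumulated yet
        for (int i = 0; i < value.length; i++)
            value[i] -= learningRate * accumulated[i] / count; // average derivative
        accumulated = null;
        count = 0; // accumulation restarts after the update
    }
}
```

Calling updateAll directly mirrors the no-batch-size constructor's use case: pending averages are applied regardless of how many updates have accumulated.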
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    +
      + +
    • +
      +

      Constructor Summary

      +
      Constructors
      +
      +
      Constructor
      +
      Description
      +
      BatchOptimizer(Optimizer baseOptimizer)
      +
      +
      Initializes a BatchOptimizer that accumulates derivatives and updates them + only when updateAll() is called.
      +
      +
      BatchOptimizer(Optimizer baseOptimizer, + long batchSize)
      +
      +
      Initializes a BatchOptimizer that accumulates derivatives and updates them + with updateAll() after every fixed number of updates.
      +
      +
      +
      +
    • + +
    • +
      +

      Method Summary

      +
      +
      +
      +
      +
      Modifier and Type
      +
      Method
      +
      Description
      +
      void
      + +
      +
      Resets (and lets the garbage collector free) optimizer memory.
      +
      +
      void
      +
      update(Tensor value, + Tensor gradient)
      +
      +
      In-place updates the value of a tensor given its gradient.
      +
      +
      void
      + +
      +
      Updates all tracked variables with pending batch calculations using the + wrapped optimizer.
      +
      +
      +
      +
      +
      +

      Methods inherited from class java.lang.Object

      +equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
      +
      +
    • +
    +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        BatchOptimizer

        +
        public BatchOptimizer(Optimizer baseOptimizer)
        +
        Initializes a BatchOptimizer that accumulates derivatives and updates them + only when updateAll() is called.
        +
        +
        Parameters:
        +
        baseOptimizer - The base optimizer with which to perform the derivative updates.
        +
        +
        +
      • +
      • +
        +

        BatchOptimizer

        +
        public BatchOptimizer(Optimizer baseOptimizer, + long batchSize)
        +
        Initializes a BatchOptimizer that accumulates derivatives and updates them + with updateAll() after every fixed number of updates.
        +
        +
        Parameters:
        +
        baseOptimizer - The base optimizer with which to perform the derivative updates.
        +
        batchSize - The number of updates at which to pass the average accumulation to the base optimizer.
        +
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        updateAll

        +
        public void updateAll()
        +
Updates all tracked variables with pending batch calculations using the + wrapped optimizer. This overrides any pending wait for a pre-specified batch size to be reached.
        +
        +
      • +
      • +
        +

        update

        +
        public void update(Tensor value, + Tensor gradient)
        +
        Description copied from interface: Optimizer
        +
In-place updates the value of a tensor given its gradient. + Some optimizers (e.g. Adam) require the exact same tensor instance to be provided + so as to keep track of its optimization progress. The library makes sure to satisfy + this constraint.
        +
        +
        Specified by:
        +
        update in interface Optimizer
        +
        Parameters:
        +
        value - The tensor to update.
        +
        gradient - The tensor's gradient.
        +
        +
        +
      • +
      • +
        +

        reset

        +
        public void reset()
        +
        Description copied from interface: Optimizer
        +
        Resets (and lets the garbage collector free) optimizer memory. + Should be called at the beginning of training (not after each epoch).
        +
        +
        Specified by:
        +
        reset in interface Optimizer
        +
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/optimizers/GradientDescent.html b/docs/javadoc/mklab/JGNN/nn/optimizers/GradientDescent.html new file mode 100644 index 00000000..a43a575a --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/optimizers/GradientDescent.html @@ -0,0 +1,229 @@ + + + + +GradientDescent + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class GradientDescent

    +
    +
    java.lang.Object +
    mklab.JGNN.nn.optimizers.GradientDescent
    +
    +
    +
    +
    All Implemented Interfaces:
    +
    Optimizer
    +
    +
    +
    public class GradientDescent +extends Object +implements Optimizer
    +
    Implements a gradient descent Optimizer. It supports degrading learning rates.
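The degrading learning rate described above can be sketched as follows. JGNN's documentation suggests the rate is tracked per tensor; this simplified standalone version keeps a single global rate, and the names are illustrative rather than JGNN's API.

```java
// Sketch of gradient descent with a multiplicatively degrading learning
// rate: after each iteration the rate is multiplied by the degradation
// factor. Illustrative only.
class GradientDescentSketch {
    private double learningRate;
    private final double degradation;

    GradientDescentSketch(double learningRate, double degradation) {
        this.learningRate = learningRate;
        this.degradation = degradation;
    }

    void update(double[] value, double[] gradient) {
        for (int i = 0; i < value.length; i++)
            value[i] -= learningRate * gradient[i];
        learningRate *= degradation; // decay the rate after each iteration
    }
}
```

Passing a degradation of 1 recovers a fixed learning rate, matching the single-argument constructor's behavior.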
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    +
      + +
    • +
      +

      Constructor Summary

      +
      Constructors
      +
      +
      Constructor
      +
      Description
      +
      GradientDescent(double learningRate)
      +
      +
      Initializes a GradientDescent optimizer with fixed learning rate.
      +
      +
      GradientDescent(double learningRate, + double degradation)
      +
      +
      Initializes a GradientDescent optimizer with degrading learning rate.
      +
      +
      +
      +
    • + +
    • +
      +

      Method Summary

      +
      +
      +
      +
      +
      Modifier and Type
      +
      Method
      +
      Description
      +
      void
      + +
      +
      Resets (and lets the garbage collector free) optimizer memory.
      +
      +
      void
      +
      update(Tensor value, + Tensor gradient)
      +
      +
      In-place updates the value of a tensor given its gradient.
      +
      +
      +
      +
      +
      +

      Methods inherited from class java.lang.Object

      +equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
      +
      +
    • +
    +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        GradientDescent

        +
        public GradientDescent(double learningRate)
        +
        Initializes a GradientDescent optimizer with fixed learning rate.
        +
        +
        Parameters:
        +
        learningRate - The learning rate.
        +
        +
        +
      • +
      • +
        +

        GradientDescent

        +
        public GradientDescent(double learningRate, + double degradation)
        +
        Initializes a GradientDescent optimizer with degrading learning rate.
        +
        +
        Parameters:
        +
        learningRate - The learning rate.
        +
        degradation - The quantity to multiply each tensor's learning rate with after each iteration.
        +
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        update

        +
        public void update(Tensor value, + Tensor gradient)
        +
        Description copied from interface: Optimizer
        +
In-place updates the value of a tensor given its gradient. + Some optimizers (e.g. Adam) require the exact same tensor instance to be provided + so as to keep track of its optimization progress. The library makes sure to satisfy + this constraint.
        +
        +
        Specified by:
        +
        update in interface Optimizer
        +
        Parameters:
        +
        value - The tensor to update.
        +
        gradient - The tensor's gradient.
        +
        +
        +
      • +
      • +
        +

        reset

        +
        public void reset()
        +
        Description copied from interface: Optimizer
        +
        Resets (and lets the garbage collector free) optimizer memory. + Should be called at the beginning of training (not after each epoch).
        +
        +
        Specified by:
        +
        reset in interface Optimizer
        +
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/optimizers/Regularization.html b/docs/javadoc/mklab/JGNN/nn/optimizers/Regularization.html new file mode 100644 index 00000000..6d640ba4 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/optimizers/Regularization.html @@ -0,0 +1,215 @@ + + + + +Regularization + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class Regularization

    +
    +
    java.lang.Object +
    mklab.JGNN.nn.optimizers.Regularization
    +
    +
    +
    +
    All Implemented Interfaces:
    +
    Optimizer
    +
    +
    +
    public class Regularization +extends Object +implements Optimizer
    +
    Wraps an Optimizer by applying the derivative of L2 loss + on every tensor during Optimizer.update(Tensor, Tensor).
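The wrapper pattern described above can be sketched as follows: the derivative of the L2 penalty (the regularization weight times the value itself) is added to each incoming gradient before delegating to a base update rule. The interface and class names are illustrative, not JGNN's API.

```java
// Sketch of an L2-regularization wrapper: before delegating to the base
// update rule it adds the derivative of 0.5 * weight * ||value||^2, which
// is weight * value, to the incoming gradient. Illustrative only.
interface ArrayOptimizer {
    void update(double[] value, double[] gradient);
}

class L2RegularizationSketch implements ArrayOptimizer {
    private final ArrayOptimizer baseOptimizer;
    private final double regularization;

    L2RegularizationSketch(ArrayOptimizer baseOptimizer, double regularization) {
        this.baseOptimizer = baseOptimizer;
        this.regularization = regularization;
    }

    @Override
    public void update(double[] value, double[] gradient) {
        double[] adjusted = new double[gradient.length];
        for (int i = 0; i < gradient.length; i++)
            adjusted[i] = gradient[i] + regularization * value[i];
        baseOptimizer.update(value, adjusted);
    }
}
```

Because the wrapper only rewrites gradients, it composes with any base update rule, which is the same design the Regularization class documents.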
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    +
      + +
    • +
      +

      Constructor Summary

      +
      Constructors
      +
      +
      Constructor
      +
      Description
      +
      Regularization(Optimizer baseOptimizer, + double regularization)
      +
      +
      Initializes a Regularization.
      +
      +
      +
      +
    • + +
    • +
      +

      Method Summary

      +
      +
      +
      +
      +
      Modifier and Type
      +
      Method
      +
      Description
      +
      void
      + +
      +
      Resets (and lets the garbage collector free) optimizer memory.
      +
      +
      void
      +
      update(Tensor value, + Tensor gradient)
      +
      +
      In-place updates the value of a tensor given its gradient.
      +
      +
      +
      +
      +
      +

      Methods inherited from class java.lang.Object

      +equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
      +
      +
    • +
    +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        Regularization

        +
        public Regularization(Optimizer baseOptimizer, + double regularization)
        +
        Initializes a Regularization.
        +
        +
        Parameters:
        +
        baseOptimizer - The base optimizer on which to apply regularization.
        +
        regularization - The weight of the regularization.
        +
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        update

        +
        public void update(Tensor value, + Tensor gradient)
        +
        Description copied from interface: Optimizer
        +
In-place updates the value of a tensor given its gradient. + Some optimizers (e.g. Adam) require the exact same tensor instance to be provided + so as to keep track of its optimization progress. The library makes sure to satisfy + this constraint.
        +
        +
        Specified by:
        +
        update in interface Optimizer
        +
        Parameters:
        +
        value - The tensor to update.
        +
        gradient - The tensor's gradient.
        +
        +
        +
      • +
      • +
        +

        reset

        +
        public void reset()
        +
        Description copied from interface: Optimizer
        +
        Resets (and lets the garbage collector free) optimizer memory. + Should be called at the beginning of training (not after each epoch).
        +
        +
        Specified by:
        +
        reset in interface Optimizer
        +
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/optimizers/class-use/Adam.html b/docs/javadoc/mklab/JGNN/nn/optimizers/class-use/Adam.html new file mode 100644 index 00000000..77998dc6 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/optimizers/class-use/Adam.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.nn.optimizers.Adam + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.nn.optimizers.Adam

    +
    +No usage of mklab.JGNN.nn.optimizers.Adam
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/optimizers/class-use/BatchOptimizer.html b/docs/javadoc/mklab/JGNN/nn/optimizers/class-use/BatchOptimizer.html new file mode 100644 index 00000000..35dffe3d --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/optimizers/class-use/BatchOptimizer.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.nn.optimizers.BatchOptimizer + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.nn.optimizers.BatchOptimizer

    +
    +No usage of mklab.JGNN.nn.optimizers.BatchOptimizer
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/optimizers/class-use/GradientDescent.html b/docs/javadoc/mklab/JGNN/nn/optimizers/class-use/GradientDescent.html new file mode 100644 index 00000000..b3cdc354 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/optimizers/class-use/GradientDescent.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.nn.optimizers.GradientDescent + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.nn.optimizers.GradientDescent

    +
    +No usage of mklab.JGNN.nn.optimizers.GradientDescent
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/optimizers/class-use/Regularization.html b/docs/javadoc/mklab/JGNN/nn/optimizers/class-use/Regularization.html new file mode 100644 index 00000000..5bd000c1 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/optimizers/class-use/Regularization.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.nn.optimizers.Regularization + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.nn.optimizers.Regularization

    +
    +No usage of mklab.JGNN.nn.optimizers.Regularization
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/optimizers/package-summary.html b/docs/javadoc/mklab/JGNN/nn/optimizers/package-summary.html new file mode 100644 index 00000000..31f73393 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/optimizers/package-summary.html @@ -0,0 +1,113 @@ + + + + +mklab.JGNN.nn.optimizers + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Package mklab.JGNN.nn.optimizers

    +
    +
    +
    package mklab.JGNN.nn.optimizers
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/optimizers/package-tree.html b/docs/javadoc/mklab/JGNN/nn/optimizers/package-tree.html new file mode 100644 index 00000000..b990cdca --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/optimizers/package-tree.html @@ -0,0 +1,76 @@ + + + + +mklab.JGNN.nn.optimizers Class Hierarchy + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Hierarchy For Package mklab.JGNN.nn.optimizers

    +Package Hierarchies: + +
    +
    +

    Class Hierarchy

    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/optimizers/package-use.html b/docs/javadoc/mklab/JGNN/nn/optimizers/package-use.html new file mode 100644 index 00000000..75c5545e --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/optimizers/package-use.html @@ -0,0 +1,59 @@ + + + + +Uses of Package mklab.JGNN.nn.optimizers + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Package
    mklab.JGNN.nn.optimizers

    +
    +No usage of mklab.JGNN.nn.optimizers
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/package-summary.html b/docs/javadoc/mklab/JGNN/nn/package-summary.html new file mode 100644 index 00000000..6359547c --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/package-summary.html @@ -0,0 +1,140 @@ + + + + +mklab.JGNN.nn + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Package mklab.JGNN.nn

    +
    +
    +
    package mklab.JGNN.nn
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/package-tree.html b/docs/javadoc/mklab/JGNN/nn/package-tree.html new file mode 100644 index 00000000..fe905be1 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/package-tree.html @@ -0,0 +1,83 @@ + + + + +mklab.JGNN.nn Class Hierarchy + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Hierarchy For Package mklab.JGNN.nn

    +Package Hierarchies: + +
    +
    +

    Class Hierarchy

    + +
    +
    +

    Interface Hierarchy

    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/package-use.html b/docs/javadoc/mklab/JGNN/nn/package-use.html new file mode 100644 index 00000000..24a598c0 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/package-use.html @@ -0,0 +1,264 @@ + + + + +Uses of Package mklab.JGNN.nn + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Package
    mklab.JGNN.nn

    +
    +
    Packages that use mklab.JGNN.nn
    + +
    +
      +
    • +
      +
      Classes in mklab.JGNN.nn used by mklab.JGNN.adhoc
      +
      +
      Class
      +
      Description
      + +
      +
      This class is a way to organize NNOperation trees into trainable machine + learning models.
      +
      + +
      +
      This class defines an abstract neural network operation with forward and + backpropagation capabilities.
      +
      +
      +
      +
    • +
    • +
      +
      Classes in mklab.JGNN.nn used by mklab.JGNN.nn
      +
      +
      Class
      +
      Description
      + +
      +
      This class defines an abstract interface for applying initializers to models.
      +
      + +
      +
      This class provides an abstract implementation of loss functions + to be used during Model training.
      +
      + +
      +
      This class is a way to organize NNOperation trees into trainable machine + learning models.
      +
      + +
      +
      This is a helper class that automates the definition of training processes of Model instances + by defining the number of epochs, loss functions, number of batches and the ability to use ThreadPool + for parallelized batch computations.
      +
      + +
      +
      This class defines an abstract neural network operation with forward and + backpropagation capabilities.
      +
      + +
      +
      Provides an interface for training tensors.
      +
      +
      +
      +
    • +
    • +
      + +
      +
      Class
      +
      Description
      + +
      +
      This class defines an abstract neural network operation with forward and + backpropagation capabilities.
      +
      +
      +
      +
    • +
    • +
      + +
      +
      Class
      +
      Description
      + +
      +
      This class defines an abstract interface for applying initializers to models.
      +
      + +
      +
      This class is a way to organize NNOperation trees into trainable machine + learning models.
      +
      +
      +
      +
    • +
    • +
      + +
      +
      Class
      +
      Description
      + +
      +
      This class defines an abstract neural network operation with forward and + backpropagation capabilities.
      +
      +
      +
      +
    • +
    • +
      + +
      +
      Class
      +
      Description
      + +
      +
      This class provides an abstract implementation of loss functions + to be used during Model training.
      +
      +
      +
      +
    • +
    • +
      + +
      +
      Class
      +
      Description
      + +
      +
      This class provides an abstract implementation of loss functions + to be used during Model training.
      +
      +
      +
      +
    • +
    • +
      + +
      +
      Class
      +
      Description
      + +
      +
      This class defines an abstract neural network operation with forward and + backpropagation capabilities.
      +
      + +
      +
      Provides an interface for training tensors.
      +
      +
      +
      +
    • +
    • +
      + +
      +
      Class
      +
      Description
      + +
      +
      Provides an interface for training tensors.
      +
      +
      +
      +
    • +
    • +
      + +
      +
      Class
      +
      Description
      + +
      +
      This class defines an abstract neural network operation with forward and + backpropagation capabilities.
      +
      +
      +
      +
    • +
    +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/pooling/Max.html b/docs/javadoc/mklab/JGNN/nn/pooling/Max.html new file mode 100644 index 00000000..ac5c48d5 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/pooling/Max.html @@ -0,0 +1,157 @@ + + + + +Max + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class Max

    +
    +
    java.lang.Object +
    mklab.JGNN.nn.NNOperation +
    mklab.JGNN.nn.pooling.Max
    +
    +
    +
    +
    +
    public class Max +extends NNOperation
    +
    Implements a NNOperation that performs row-wise or column-wise + maximum reduction on vector tensors or matrices.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        Max

        +
        public Max()
        +
        +
      • +
      • +
        +

        Max

        +
        public Max(boolean colMode)
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/pooling/Mean.html b/docs/javadoc/mklab/JGNN/nn/pooling/Mean.html new file mode 100644 index 00000000..5545f930 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/pooling/Mean.html @@ -0,0 +1,188 @@ + + + + +Mean + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class Mean

    +
    +
    java.lang.Object +
    mklab.JGNN.nn.NNOperation +
    mklab.JGNN.nn.pooling.Mean
    +
    +
    +
    +
    +
    public class Mean +extends NNOperation
    +
    Implements a NNOperation that performs row-wise or column-wise + mean reduction on vector tensors or matrices.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        Mean

        +
        public Mean()
        +
        +
      • +
      • +
        +

        Mean

        +
        public Mean(boolean colMode)
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      + +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/pooling/SoftMax.html b/docs/javadoc/mklab/JGNN/nn/pooling/SoftMax.html new file mode 100644 index 00000000..f30ffd43 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/pooling/SoftMax.html @@ -0,0 +1,157 @@ + + + + +SoftMax + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class SoftMax

    +
    +
    java.lang.Object +
    mklab.JGNN.nn.NNOperation +
    mklab.JGNN.nn.pooling.SoftMax
    +
    +
    +
    +
    +
    public class SoftMax +extends NNOperation
    +
    Implements a NNOperation that performs row-wise or column-wise + softmax on vector tensors or matrices.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        SoftMax

        +
        public SoftMax()
        +
        +
      • +
      • +
        +

        SoftMax

        +
        public SoftMax(boolean colMode)
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/pooling/Sort.html b/docs/javadoc/mklab/JGNN/nn/pooling/Sort.html new file mode 100644 index 00000000..5d7c9330 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/pooling/Sort.html @@ -0,0 +1,174 @@ + + + + +Sort + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class Sort

    +
    +
    java.lang.Object +
    mklab.JGNN.nn.NNOperation +
    mklab.JGNN.nn.pooling.Sort
    +
    +
    +
    +
    +
    public class Sort +extends NNOperation
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        Sort

        +
        public Sort(int k)
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      +
        +
      • +
        +

        setDimensionName

        +
        public Sort setDimensionName(String name)
        +
        +
      • +
      +
      +
    • +
    +
    + +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/pooling/Sum.html b/docs/javadoc/mklab/JGNN/nn/pooling/Sum.html new file mode 100644 index 00000000..3958cd72 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/pooling/Sum.html @@ -0,0 +1,188 @@ + + + + +Sum + + + + + + + + + + + + + + + +
    + +
    +
    + +
    + +

    Class Sum

    +
    +
    java.lang.Object +
    mklab.JGNN.nn.NNOperation +
    mklab.JGNN.nn.pooling.Sum
    +
    +
    +
    +
    +
    public class Sum +extends NNOperation
    +
    Implements a NNOperation that performs row-wise or column-wise + sum reduction on vector tensors or matrices.
    +
    +
    Author:
    +
    Emmanouil Krasanakis
    +
    +
    +
    + +
    +
    +
      + +
    • +
      +

      Constructor Details

      +
        +
      • +
        +

        Sum

        +
        public Sum()
        +
        +
      • +
      • +
        +

        Sum

        +
        public Sum(boolean colMode)
        +
        +
      • +
      +
      +
    • + +
    • +
      +

      Method Details

      + +
      +
    • +
    +
    + +
    +
    +
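Sum exposes the same constructor pair as Mean (`Sum()` and `Sum(boolean colMode)`). A compact plain-Java sketch of the two modes in one method — again illustrative only, with the `colMode` interpretation assumed from the constructor signature rather than stated in the page:

```java
import java.util.Arrays;

// Illustrative sum reduction: colMode=false sums each row,
// colMode=true sums each column (assumed meaning of colMode).
public class SumSketch {
    static double[] sum(double[][] m, boolean colMode) {
        double[] out = new double[colMode ? m[0].length : m.length];
        for (int i = 0; i < m.length; i++)
            for (int j = 0; j < m[i].length; j++)
                out[colMode ? j : i] += m[i][j];
        return out;
    }

    public static void main(String[] args) {
        double[][] m = {{1, 2, 3}, {4, 5, 6}};
        System.out.println(Arrays.toString(sum(m, false))); // [6.0, 15.0]
        System.out.println(Arrays.toString(sum(m, true)));  // [5.0, 7.0, 9.0]
    }
}
```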
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/pooling/class-use/Max.html b/docs/javadoc/mklab/JGNN/nn/pooling/class-use/Max.html new file mode 100644 index 00000000..4d1b214b --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/pooling/class-use/Max.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.nn.pooling.Max + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.nn.pooling.Max

    +
    +No usage of mklab.JGNN.nn.pooling.Max
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/pooling/class-use/Mean.html b/docs/javadoc/mklab/JGNN/nn/pooling/class-use/Mean.html new file mode 100644 index 00000000..f790f080 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/pooling/class-use/Mean.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.nn.pooling.Mean + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.nn.pooling.Mean

    +
    +No usage of mklab.JGNN.nn.pooling.Mean
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/pooling/class-use/SoftMax.html b/docs/javadoc/mklab/JGNN/nn/pooling/class-use/SoftMax.html new file mode 100644 index 00000000..bb10e6b5 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/pooling/class-use/SoftMax.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.nn.pooling.SoftMax + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.nn.pooling.SoftMax

    +
    +No usage of mklab.JGNN.nn.pooling.SoftMax
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/pooling/class-use/Sort.html b/docs/javadoc/mklab/JGNN/nn/pooling/class-use/Sort.html new file mode 100644 index 00000000..31b3507c --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/pooling/class-use/Sort.html @@ -0,0 +1,84 @@ + + + + +Uses of Class mklab.JGNN.nn.pooling.Sort + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.nn.pooling.Sort

    +
    +
    Packages that use Sort
    +
    +
    Package
    +
    Description
    + +
     
    +
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/pooling/class-use/Sum.html b/docs/javadoc/mklab/JGNN/nn/pooling/class-use/Sum.html new file mode 100644 index 00000000..aa56129e --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/pooling/class-use/Sum.html @@ -0,0 +1,59 @@ + + + + +Uses of Class mklab.JGNN.nn.pooling.Sum + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Class
    mklab.JGNN.nn.pooling.Sum

    +
    +No usage of mklab.JGNN.nn.pooling.Sum
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/pooling/package-summary.html b/docs/javadoc/mklab/JGNN/nn/pooling/package-summary.html new file mode 100644 index 00000000..f2d7237e --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/pooling/package-summary.html @@ -0,0 +1,115 @@ + + + + +mklab.JGNN.nn.pooling + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Package mklab.JGNN.nn.pooling

    +
    +
    +
    package mklab.JGNN.nn.pooling
    +
    +
      +
    • + +
    • +
    • +
      +
      Classes
      +
      +
      Class
      +
      Description
      + +
      +
      Implements a NNOperation that performs row-wise or column-wise + maximum reduction on vector tensors or matrices.
      +
      + +
      +
      Implements a NNOperation that performs row-wise or column-wise + mean reduction on vector tensors or matrices.
      +
      + +
      +
      Implements a NNOperation that performs row-wise or column-wise + softmax on vector tensors or matrices.
      +
      + +
       
      + +
      +
      Implements a NNOperation that performs row-wise or column-wise + sum reduction on vector tensors or matrices.
      +
      +
      +
      +
    • +
    +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/pooling/package-tree.html b/docs/javadoc/mklab/JGNN/nn/pooling/package-tree.html new file mode 100644 index 00000000..36670585 --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/pooling/package-tree.html @@ -0,0 +1,81 @@ + + + + +mklab.JGNN.nn.pooling Class Hierarchy + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Hierarchy For Package mklab.JGNN.nn.pooling

    +Package Hierarchies: + +
    +
    +

    Class Hierarchy

    +
      +
    • java.lang.Object +
        +
      • mklab.JGNN.nn.NNOperation +
          +
        • mklab.JGNN.nn.pooling.Max
        • +
        • mklab.JGNN.nn.pooling.Mean
        • +
        • mklab.JGNN.nn.pooling.SoftMax
        • +
        • mklab.JGNN.nn.pooling.Sort
        • +
        • mklab.JGNN.nn.pooling.Sum
        • +
        +
      • +
      +
    • +
    +
    +
    +
    +
    + + diff --git a/docs/javadoc/mklab/JGNN/nn/pooling/package-use.html b/docs/javadoc/mklab/JGNN/nn/pooling/package-use.html new file mode 100644 index 00000000..04d2ba2d --- /dev/null +++ b/docs/javadoc/mklab/JGNN/nn/pooling/package-use.html @@ -0,0 +1,81 @@ + + + + +Uses of Package mklab.JGNN.nn.pooling + + + + + + + + + + + + + + + +
    + +
    +
    +
    +

    Uses of Package
    mklab.JGNN.nn.pooling

    +
    +
    Packages that use mklab.JGNN.nn.pooling
    +
    +
    Package
    +
    Description
    + +
     
    +
    +
    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/module-search-index.js b/docs/javadoc/module-search-index.js new file mode 100644 index 00000000..0d59754f --- /dev/null +++ b/docs/javadoc/module-search-index.js @@ -0,0 +1 @@ +moduleSearchIndex = [];updateSearchResults(); \ No newline at end of file diff --git a/docs/javadoc/overview-summary.html b/docs/javadoc/overview-summary.html new file mode 100644 index 00000000..724af739 --- /dev/null +++ b/docs/javadoc/overview-summary.html @@ -0,0 +1,26 @@ + + + + +Generated Documentation (Untitled) + + + + + + + + + + + +
    + +

    index.html

    +
    + + diff --git a/docs/javadoc/overview-tree.html b/docs/javadoc/overview-tree.html new file mode 100644 index 00000000..efe46a9e --- /dev/null +++ b/docs/javadoc/overview-tree.html @@ -0,0 +1,217 @@ + + + + +Class Hierarchy + + + + + + + + + + + + + + + +
    + +
    +
    + +
    +

    Class Hierarchy

    + +
    +
    +

    Interface Hierarchy

    + +
    +
    +
    +
    + + diff --git a/docs/javadoc/package-search-index.js b/docs/javadoc/package-search-index.js new file mode 100644 index 00000000..49d10b8e --- /dev/null +++ b/docs/javadoc/package-search-index.js @@ -0,0 +1 @@ +packageSearchIndex = [{"l":"All Packages","u":"allpackages-index.html"},{"l":"mklab.JGNN.adhoc"},{"l":"mklab.JGNN.adhoc.datasets"},{"l":"mklab.JGNN.adhoc.parsers"},{"l":"mklab.JGNN.core.distribution"},{"l":"mklab.JGNN.core.empy"},{"l":"mklab.JGNN.core.matrix"},{"l":"mklab.JGNN.core.tensor"},{"l":"mklab.JGNN.nn"},{"l":"mklab.JGNN.nn.activations"},{"l":"mklab.JGNN.nn.initializers"},{"l":"mklab.JGNN.nn.inputs"},{"l":"mklab.JGNN.nn.loss"},{"l":"mklab.JGNN.nn.loss.report"},{"l":"mklab.JGNN.nn.operations"},{"l":"mklab.JGNN.nn.optimizers"},{"l":"mklab.JGNN.nn.pooling"}];updateSearchResults(); \ No newline at end of file diff --git a/docs/javadoc/resources/glass.png b/docs/javadoc/resources/glass.png new file mode 100644 index 00000000..a7f591f4 Binary files /dev/null and b/docs/javadoc/resources/glass.png differ diff --git a/docs/javadoc/resources/x.png b/docs/javadoc/resources/x.png new file mode 100644 index 00000000..30548a75 Binary files /dev/null and b/docs/javadoc/resources/x.png differ diff --git a/docs/javadoc/script-dir/images/ui-bg_glass_55_fbf9ee_1x400.png b/docs/javadoc/script-dir/images/ui-bg_glass_55_fbf9ee_1x400.png new file mode 100644 index 00000000..34abd18f Binary files /dev/null and b/docs/javadoc/script-dir/images/ui-bg_glass_55_fbf9ee_1x400.png differ diff --git a/docs/javadoc/script-dir/images/ui-bg_glass_65_dadada_1x400.png b/docs/javadoc/script-dir/images/ui-bg_glass_65_dadada_1x400.png new file mode 100644 index 00000000..f058a938 Binary files /dev/null and b/docs/javadoc/script-dir/images/ui-bg_glass_65_dadada_1x400.png differ diff --git a/docs/javadoc/script-dir/images/ui-bg_glass_75_dadada_1x400.png b/docs/javadoc/script-dir/images/ui-bg_glass_75_dadada_1x400.png new file mode 100644 index 00000000..2ce04c16 Binary files /dev/null 
and b/docs/javadoc/script-dir/images/ui-bg_glass_75_dadada_1x400.png differ diff --git a/docs/javadoc/script-dir/images/ui-bg_glass_75_e6e6e6_1x400.png b/docs/javadoc/script-dir/images/ui-bg_glass_75_e6e6e6_1x400.png new file mode 100644 index 00000000..a90afb8b Binary files /dev/null and b/docs/javadoc/script-dir/images/ui-bg_glass_75_e6e6e6_1x400.png differ diff --git a/docs/javadoc/script-dir/images/ui-bg_glass_95_fef1ec_1x400.png b/docs/javadoc/script-dir/images/ui-bg_glass_95_fef1ec_1x400.png new file mode 100644 index 00000000..dbe091f6 Binary files /dev/null and b/docs/javadoc/script-dir/images/ui-bg_glass_95_fef1ec_1x400.png differ diff --git a/docs/javadoc/script-dir/images/ui-bg_highlight-soft_75_cccccc_1x100.png b/docs/javadoc/script-dir/images/ui-bg_highlight-soft_75_cccccc_1x100.png new file mode 100644 index 00000000..5dc3593e Binary files /dev/null and b/docs/javadoc/script-dir/images/ui-bg_highlight-soft_75_cccccc_1x100.png differ diff --git a/docs/javadoc/script-dir/images/ui-icons_222222_256x240.png b/docs/javadoc/script-dir/images/ui-icons_222222_256x240.png new file mode 100644 index 00000000..e723e17c Binary files /dev/null and b/docs/javadoc/script-dir/images/ui-icons_222222_256x240.png differ diff --git a/docs/javadoc/script-dir/images/ui-icons_2e83ff_256x240.png b/docs/javadoc/script-dir/images/ui-icons_2e83ff_256x240.png new file mode 100644 index 00000000..1f5f4975 Binary files /dev/null and b/docs/javadoc/script-dir/images/ui-icons_2e83ff_256x240.png differ diff --git a/docs/javadoc/script-dir/images/ui-icons_454545_256x240.png b/docs/javadoc/script-dir/images/ui-icons_454545_256x240.png new file mode 100644 index 00000000..618f5b0c Binary files /dev/null and b/docs/javadoc/script-dir/images/ui-icons_454545_256x240.png differ diff --git a/docs/javadoc/script-dir/images/ui-icons_888888_256x240.png b/docs/javadoc/script-dir/images/ui-icons_888888_256x240.png new file mode 100644 index 00000000..ee5e33f2 Binary files /dev/null and 
b/docs/javadoc/script-dir/images/ui-icons_888888_256x240.png differ diff --git a/docs/javadoc/script-dir/images/ui-icons_cd0a0a_256x240.png b/docs/javadoc/script-dir/images/ui-icons_cd0a0a_256x240.png new file mode 100644 index 00000000..7e8ebc18 Binary files /dev/null and b/docs/javadoc/script-dir/images/ui-icons_cd0a0a_256x240.png differ diff --git a/docs/javadoc/script-dir/jquery-3.5.1.min.js b/docs/javadoc/script-dir/jquery-3.5.1.min.js new file mode 100644 index 00000000..b0614034 --- /dev/null +++ b/docs/javadoc/script-dir/jquery-3.5.1.min.js @@ -0,0 +1,2 @@ +/*! jQuery v3.5.1 | (c) JS Foundation and other contributors | jquery.org/license */ +!function(e,t){"use strict";"object"==typeof module&&"object"==typeof module.exports?module.exports=e.document?t(e,!0):function(e){if(!e.document)throw new Error("jQuery requires a window with a document");return t(e)}:t(e)}("undefined"!=typeof window?window:this,function(C,e){"use strict";var t=[],r=Object.getPrototypeOf,s=t.slice,g=t.flat?function(e){return t.flat.call(e)}:function(e){return t.concat.apply([],e)},u=t.push,i=t.indexOf,n={},o=n.toString,v=n.hasOwnProperty,a=v.toString,l=a.call(Object),y={},m=function(e){return"function"==typeof e&&"number"!=typeof e.nodeType},x=function(e){return null!=e&&e===e.window},E=C.document,c={type:!0,src:!0,nonce:!0,noModule:!0};function b(e,t,n){var r,i,o=(n=n||E).createElement("script");if(o.text=e,t)for(r in c)(i=t[r]||t.getAttribute&&t.getAttribute(r))&&o.setAttribute(r,i);n.head.appendChild(o).parentNode.removeChild(o)}function w(e){return null==e?e+"":"object"==typeof e||"function"==typeof e?n[o.call(e)]||"object":typeof e}var f="3.5.1",S=function(e,t){return new S.fn.init(e,t)};function p(e){var t=!!e&&"length"in e&&e.length,n=w(e);return!m(e)&&!x(e)&&("array"===n||0===t||"number"==typeof t&&0+~]|"+M+")"+M+"*"),U=new RegExp(M+"|>"),X=new RegExp(F),V=new RegExp("^"+I+"$"),G={ID:new RegExp("^#("+I+")"),CLASS:new RegExp("^\\.("+I+")"),TAG:new RegExp("^("+I+"|[*])"),ATTR:new 
9===i.nodeType?{width:e.width(),height:e.height(),offset:{top:0,left:0}}:t.isWindow(i)?{width:e.width(),height:e.height(),offset:{top:e.scrollTop(),left:e.scrollLeft()}}:i.preventDefault?{width:0,height:0,offset:{top:i.pageY,left:i.pageX}}:{width:e.outerWidth(),height:e.outerHeight(),offset:e.offset()}}var n,o=Math.max,a=Math.abs,r=/left|center|right/,l=/top|center|bottom/,h=/[\+\-]\d+(\.[\d]+)?%?/,c=/^\w+/,u=/%$/,d=t.fn.position;t.position={scrollbarWidth:function(){if(void 0!==n)return n;var e,i,s=t("
    "),o=s.children()[0];return t("body").append(s),e=o.offsetWidth,s.css("overflow","scroll"),i=o.offsetWidth,e===i&&(i=s[0].clientWidth),s.remove(),n=e-i},getScrollInfo:function(e){var i=e.isWindow||e.isDocument?"":e.element.css("overflow-x"),s=e.isWindow||e.isDocument?"":e.element.css("overflow-y"),n="scroll"===i||"auto"===i&&e.widthi?"left":e>0?"right":"center",vertical:0>r?"top":s>0?"bottom":"middle"};h>p&&p>a(e+i)&&(u.horizontal="center"),c>f&&f>a(s+r)&&(u.vertical="middle"),u.important=o(a(e),a(i))>o(a(s),a(r))?"horizontal":"vertical",n.using.call(this,t,u)}),l.offset(t.extend(D,{using:r}))})},t.ui.position={fit:{left:function(t,e){var i,s=e.within,n=s.isWindow?s.scrollLeft:s.offset.left,a=s.width,r=t.left-e.collisionPosition.marginLeft,l=n-r,h=r+e.collisionWidth-a-n;e.collisionWidth>a?l>0&&0>=h?(i=t.left+l+e.collisionWidth-a-n,t.left+=l-i):t.left=h>0&&0>=l?n:l>h?n+a-e.collisionWidth:n:l>0?t.left+=l:h>0?t.left-=h:t.left=o(t.left-r,t.left)},top:function(t,e){var i,s=e.within,n=s.isWindow?s.scrollTop:s.offset.top,a=e.within.height,r=t.top-e.collisionPosition.marginTop,l=n-r,h=r+e.collisionHeight-a-n;e.collisionHeight>a?l>0&&0>=h?(i=t.top+l+e.collisionHeight-a-n,t.top+=l-i):t.top=h>0&&0>=l?n:l>h?n+a-e.collisionHeight:n:l>0?t.top+=l:h>0?t.top-=h:t.top=o(t.top-r,t.top)}},flip:{left:function(t,e){var i,s,n=e.within,o=n.offset.left+n.scrollLeft,r=n.width,l=n.isWindow?n.scrollLeft:n.offset.left,h=t.left-e.collisionPosition.marginLeft,c=h-l,u=h+e.collisionWidth-r-l,d="left"===e.my[0]?-e.elemWidth:"right"===e.my[0]?e.elemWidth:0,p="left"===e.at[0]?e.targetWidth:"right"===e.at[0]?-e.targetWidth:0,f=-2*e.offset[0];0>c?(i=t.left+d+p+f+e.collisionWidth-r-o,(0>i||a(c)>i)&&(t.left+=d+p+f)):u>0&&(s=t.left-e.collisionPosition.marginLeft+d+p+f-l,(s>0||u>a(s))&&(t.left+=d+p+f))},top:function(t,e){var 
i,s,n=e.within,o=n.offset.top+n.scrollTop,r=n.height,l=n.isWindow?n.scrollTop:n.offset.top,h=t.top-e.collisionPosition.marginTop,c=h-l,u=h+e.collisionHeight-r-l,d="top"===e.my[1],p=d?-e.elemHeight:"bottom"===e.my[1]?e.elemHeight:0,f="top"===e.at[1]?e.targetHeight:"bottom"===e.at[1]?-e.targetHeight:0,g=-2*e.offset[1];0>c?(s=t.top+p+f+g+e.collisionHeight-r-o,(0>s||a(c)>s)&&(t.top+=p+f+g)):u>0&&(i=t.top-e.collisionPosition.marginTop+p+f+g-l,(i>0||u>a(i))&&(t.top+=p+f+g))}},flipfit:{left:function(){t.ui.position.flip.left.apply(this,arguments),t.ui.position.fit.left.apply(this,arguments)},top:function(){t.ui.position.flip.top.apply(this,arguments),t.ui.position.fit.top.apply(this,arguments)}}}}(),t.ui.position,t.ui.keyCode={BACKSPACE:8,COMMA:188,DELETE:46,DOWN:40,END:35,ENTER:13,ESCAPE:27,HOME:36,LEFT:37,PAGE_DOWN:34,PAGE_UP:33,PERIOD:190,RIGHT:39,SPACE:32,TAB:9,UP:38},t.fn.extend({uniqueId:function(){var t=0;return function(){return this.each(function(){this.id||(this.id="ui-id-"+ ++t)})}}(),removeUniqueId:function(){return this.each(function(){/^ui-id-\d+$/.test(this.id)&&t(this).removeAttr("id")})}}),t.ui.safeActiveElement=function(t){var e;try{e=t.activeElement}catch(i){e=t.body}return e||(e=t.body),e.nodeName||(e=t.body),e},t.widget("ui.menu",{version:"1.12.1",defaultElement:"
      ",delay:300,options:{icons:{submenu:"ui-icon-caret-1-e"},items:"> *",menus:"ul",position:{my:"left top",at:"right top"},role:"menu",blur:null,focus:null,select:null},_create:function(){this.activeMenu=this.element,this.mouseHandled=!1,this.element.uniqueId().attr({role:this.options.role,tabIndex:0}),this._addClass("ui-menu","ui-widget ui-widget-content"),this._on({"mousedown .ui-menu-item":function(t){t.preventDefault()},"click .ui-menu-item":function(e){var i=t(e.target),s=t(t.ui.safeActiveElement(this.document[0]));!this.mouseHandled&&i.not(".ui-state-disabled").length&&(this.select(e),e.isPropagationStopped()||(this.mouseHandled=!0),i.has(".ui-menu").length?this.expand(e):!this.element.is(":focus")&&s.closest(".ui-menu").length&&(this.element.trigger("focus",[!0]),this.active&&1===this.active.parents(".ui-menu").length&&clearTimeout(this.timer)))},"mouseenter .ui-menu-item":function(e){if(!this.previousFilter){var i=t(e.target).closest(".ui-menu-item"),s=t(e.currentTarget);i[0]===s[0]&&(this._removeClass(s.siblings().children(".ui-state-active"),null,"ui-state-active"),this.focus(e,s))}},mouseleave:"collapseAll","mouseleave .ui-menu":"collapseAll",focus:function(t,e){var i=this.active||this.element.find(this.options.items).eq(0);e||this.focus(t,i)},blur:function(e){this._delay(function(){var i=!t.contains(this.element[0],t.ui.safeActiveElement(this.document[0]));i&&this.collapseAll(e)})},keydown:"_keydown"}),this.refresh(),this._on(this.document,{click:function(t){this._closeOnDocumentClick(t)&&this.collapseAll(t),this.mouseHandled=!1}})},_destroy:function(){var e=this.element.find(".ui-menu-item").removeAttr("role aria-disabled"),i=e.children(".ui-menu-item-wrapper").removeUniqueId().removeAttr("tabIndex role aria-haspopup");this.element.removeAttr("aria-activedescendant").find(".ui-menu").addBack().removeAttr("role aria-labelledby aria-expanded aria-hidden aria-disabled tabIndex").removeUniqueId().show(),i.children().each(function(){var 
e=t(this);e.data("ui-menu-submenu-caret")&&e.remove()})},_keydown:function(e){var i,s,n,o,a=!0;switch(e.keyCode){case t.ui.keyCode.PAGE_UP:this.previousPage(e);break;case t.ui.keyCode.PAGE_DOWN:this.nextPage(e);break;case t.ui.keyCode.HOME:this._move("first","first",e);break;case t.ui.keyCode.END:this._move("last","last",e);break;case t.ui.keyCode.UP:this.previous(e);break;case t.ui.keyCode.DOWN:this.next(e);break;case t.ui.keyCode.LEFT:this.collapse(e);break;case t.ui.keyCode.RIGHT:this.active&&!this.active.is(".ui-state-disabled")&&this.expand(e);break;case t.ui.keyCode.ENTER:case t.ui.keyCode.SPACE:this._activate(e);break;case t.ui.keyCode.ESCAPE:this.collapse(e);break;default:a=!1,s=this.previousFilter||"",o=!1,n=e.keyCode>=96&&105>=e.keyCode?""+(e.keyCode-96):String.fromCharCode(e.keyCode),clearTimeout(this.filterTimer),n===s?o=!0:n=s+n,i=this._filterMenuItems(n),i=o&&-1!==i.index(this.active.next())?this.active.nextAll(".ui-menu-item"):i,i.length||(n=String.fromCharCode(e.keyCode),i=this._filterMenuItems(n)),i.length?(this.focus(e,i),this.previousFilter=n,this.filterTimer=this._delay(function(){delete this.previousFilter},1e3)):delete this.previousFilter}a&&e.preventDefault()},_activate:function(t){this.active&&!this.active.is(".ui-state-disabled")&&(this.active.children("[aria-haspopup='true']").length?this.expand(t):this.select(t))},refresh:function(){var e,i,s,n,o,a=this,r=this.options.icons.submenu,l=this.element.find(this.options.menus);this._toggleClass("ui-menu-icons",null,!!this.element.find(".ui-icon").length),s=l.filter(":not(.ui-menu)").hide().attr({role:this.options.role,"aria-hidden":"true","aria-expanded":"false"}).each(function(){var e=t(this),i=e.prev(),s=t("").data("ui-menu-submenu-caret",!0);a._addClass(s,"ui-menu-icon","ui-icon "+r),i.attr("aria-haspopup","true").prepend(s),e.attr("aria-labelledby",i.attr("id"))}),this._addClass(s,"ui-menu","ui-widget ui-widget-content 
ui-front"),e=l.add(this.element),i=e.find(this.options.items),i.not(".ui-menu-item").each(function(){var e=t(this);a._isDivider(e)&&a._addClass(e,"ui-menu-divider","ui-widget-content")}),n=i.not(".ui-menu-item, .ui-menu-divider"),o=n.children().not(".ui-menu").uniqueId().attr({tabIndex:-1,role:this._itemRole()}),this._addClass(n,"ui-menu-item")._addClass(o,"ui-menu-item-wrapper"),i.filter(".ui-state-disabled").attr("aria-disabled","true"),this.active&&!t.contains(this.element[0],this.active[0])&&this.blur()},_itemRole:function(){return{menu:"menuitem",listbox:"option"}[this.options.role]},_setOption:function(t,e){if("icons"===t){var i=this.element.find(".ui-menu-icon");this._removeClass(i,null,this.options.icons.submenu)._addClass(i,null,e.submenu)}this._super(t,e)},_setOptionDisabled:function(t){this._super(t),this.element.attr("aria-disabled",t+""),this._toggleClass(null,"ui-state-disabled",!!t)},focus:function(t,e){var i,s,n;this.blur(t,t&&"focus"===t.type),this._scrollIntoView(e),this.active=e.first(),s=this.active.children(".ui-menu-item-wrapper"),this._addClass(s,null,"ui-state-active"),this.options.role&&this.element.attr("aria-activedescendant",s.attr("id")),n=this.active.parent().closest(".ui-menu-item").children(".ui-menu-item-wrapper"),this._addClass(n,null,"ui-state-active"),t&&"keydown"===t.type?this._close():this.timer=this._delay(function(){this._close()},this.delay),i=e.children(".ui-menu"),i.length&&t&&/^mouse/.test(t.type)&&this._startOpening(i),this.activeMenu=e.parent(),this._trigger("focus",t,{item:e})},_scrollIntoView:function(e){var 
i,s,n,o,a,r;this._hasScroll()&&(i=parseFloat(t.css(this.activeMenu[0],"borderTopWidth"))||0,s=parseFloat(t.css(this.activeMenu[0],"paddingTop"))||0,n=e.offset().top-this.activeMenu.offset().top-i-s,o=this.activeMenu.scrollTop(),a=this.activeMenu.height(),r=e.outerHeight(),0>n?this.activeMenu.scrollTop(o+n):n+r>a&&this.activeMenu.scrollTop(o+n-a+r))},blur:function(t,e){e||clearTimeout(this.timer),this.active&&(this._removeClass(this.active.children(".ui-menu-item-wrapper"),null,"ui-state-active"),this._trigger("blur",t,{item:this.active}),this.active=null)},_startOpening:function(t){clearTimeout(this.timer),"true"===t.attr("aria-hidden")&&(this.timer=this._delay(function(){this._close(),this._open(t)},this.delay))},_open:function(e){var i=t.extend({of:this.active},this.options.position);clearTimeout(this.timer),this.element.find(".ui-menu").not(e.parents(".ui-menu")).hide().attr("aria-hidden","true"),e.show().removeAttr("aria-hidden").attr("aria-expanded","true").position(i)},collapseAll:function(e,i){clearTimeout(this.timer),this.timer=this._delay(function(){var s=i?this.element:t(e&&e.target).closest(this.element.find(".ui-menu"));s.length||(s=this.element),this._close(s),this.blur(e),this._removeClass(s.find(".ui-state-active"),null,"ui-state-active"),this.activeMenu=s},this.delay)},_close:function(t){t||(t=this.active?this.active.parent():this.element),t.find(".ui-menu").hide().attr("aria-hidden","true").attr("aria-expanded","false")},_closeOnDocumentClick:function(e){return!t(e.target).closest(".ui-menu").length},_isDivider:function(t){return!/[^\-\u2014\u2013\s]/.test(t.text())},collapse:function(t){var e=this.active&&this.active.parent().closest(".ui-menu-item",this.element);e&&e.length&&(this._close(),this.focus(t,e))},expand:function(t){var e=this.active&&this.active.children(".ui-menu 
").find(this.options.items).first();e&&e.length&&(this._open(e.parent()),this._delay(function(){this.focus(t,e)}))},next:function(t){this._move("next","first",t)},previous:function(t){this._move("prev","last",t)},isFirstItem:function(){return this.active&&!this.active.prevAll(".ui-menu-item").length},isLastItem:function(){return this.active&&!this.active.nextAll(".ui-menu-item").length},_move:function(t,e,i){var s;this.active&&(s="first"===t||"last"===t?this.active["first"===t?"prevAll":"nextAll"](".ui-menu-item").eq(-1):this.active[t+"All"](".ui-menu-item").eq(0)),s&&s.length&&this.active||(s=this.activeMenu.find(this.options.items)[e]()),this.focus(i,s)},nextPage:function(e){var i,s,n;return this.active?(this.isLastItem()||(this._hasScroll()?(s=this.active.offset().top,n=this.element.height(),this.active.nextAll(".ui-menu-item").each(function(){return i=t(this),0>i.offset().top-s-n}),this.focus(e,i)):this.focus(e,this.activeMenu.find(this.options.items)[this.active?"last":"first"]())),void 0):(this.next(e),void 0)},previousPage:function(e){var i,s,n;return this.active?(this.isFirstItem()||(this._hasScroll()?(s=this.active.offset().top,n=this.element.height(),this.active.prevAll(".ui-menu-item").each(function(){return i=t(this),i.offset().top-s+n>0}),this.focus(e,i)):this.focus(e,this.activeMenu.find(this.options.items).first())),void 0):(this.next(e),void 0)},_hasScroll:function(){return this.element.outerHeight()",options:{appendTo:null,autoFocus:!1,delay:300,minLength:1,position:{my:"left top",at:"left bottom",collision:"none"},source:null,change:null,close:null,focus:null,open:null,response:null,search:null,select:null},requestIndex:0,pending:0,_create:function(){var 
e,i,s,n=this.element[0].nodeName.toLowerCase(),o="textarea"===n,a="input"===n;this.isMultiLine=o||!a&&this._isContentEditable(this.element),this.valueMethod=this.element[o||a?"val":"text"],this.isNewMenu=!0,this._addClass("ui-autocomplete-input"),this.element.attr("autocomplete","off"),this._on(this.element,{keydown:function(n){if(this.element.prop("readOnly"))return e=!0,s=!0,i=!0,void 0;e=!1,s=!1,i=!1;var o=t.ui.keyCode;switch(n.keyCode){case o.PAGE_UP:e=!0,this._move("previousPage",n);break;case o.PAGE_DOWN:e=!0,this._move("nextPage",n);break;case o.UP:e=!0,this._keyEvent("previous",n);break;case o.DOWN:e=!0,this._keyEvent("next",n);break;case o.ENTER:this.menu.active&&(e=!0,n.preventDefault(),this.menu.select(n));break;case o.TAB:this.menu.active&&this.menu.select(n);break;case o.ESCAPE:this.menu.element.is(":visible")&&(this.isMultiLine||this._value(this.term),this.close(n),n.preventDefault());break;default:i=!0,this._searchTimeout(n)}},keypress:function(s){if(e)return e=!1,(!this.isMultiLine||this.menu.element.is(":visible"))&&s.preventDefault(),void 0;if(!i){var n=t.ui.keyCode;switch(s.keyCode){case n.PAGE_UP:this._move("previousPage",s);break;case n.PAGE_DOWN:this._move("nextPage",s);break;case n.UP:this._keyEvent("previous",s);break;case n.DOWN:this._keyEvent("next",s)}}},input:function(t){return s?(s=!1,t.preventDefault(),void 0):(this._searchTimeout(t),void 0)},focus:function(){this.selectedItem=null,this.previous=this._value()},blur:function(t){return this.cancelBlur?(delete this.cancelBlur,void 0):(clearTimeout(this.searching),this.close(t),this._change(t),void 0)}}),this._initSource(),this.menu=t("
        ").appendTo(this._appendTo()).menu({role:null}).hide().menu("instance"),this._addClass(this.menu.element,"ui-autocomplete","ui-front"),this._on(this.menu.element,{mousedown:function(e){e.preventDefault(),this.cancelBlur=!0,this._delay(function(){delete this.cancelBlur,this.element[0]!==t.ui.safeActiveElement(this.document[0])&&this.element.trigger("focus")})},menufocus:function(e,i){var s,n;return this.isNewMenu&&(this.isNewMenu=!1,e.originalEvent&&/^mouse/.test(e.originalEvent.type))?(this.menu.blur(),this.document.one("mousemove",function(){t(e.target).trigger(e.originalEvent)}),void 0):(n=i.item.data("ui-autocomplete-item"),!1!==this._trigger("focus",e,{item:n})&&e.originalEvent&&/^key/.test(e.originalEvent.type)&&this._value(n.value),s=i.item.attr("aria-label")||n.value,s&&t.trim(s).length&&(this.liveRegion.children().hide(),t("
        ").text(s).appendTo(this.liveRegion)),void 0)},menuselect:function(e,i){var s=i.item.data("ui-autocomplete-item"),n=this.previous;this.element[0]!==t.ui.safeActiveElement(this.document[0])&&(this.element.trigger("focus"),this.previous=n,this._delay(function(){this.previous=n,this.selectedItem=s})),!1!==this._trigger("select",e,{item:s})&&this._value(s.value),this.term=this._value(),this.close(e),this.selectedItem=s}}),this.liveRegion=t("
        ",{role:"status","aria-live":"assertive","aria-relevant":"additions"}).appendTo(this.document[0].body),this._addClass(this.liveRegion,null,"ui-helper-hidden-accessible"),this._on(this.window,{beforeunload:function(){this.element.removeAttr("autocomplete")}})},_destroy:function(){clearTimeout(this.searching),this.element.removeAttr("autocomplete"),this.menu.element.remove(),this.liveRegion.remove()},_setOption:function(t,e){this._super(t,e),"source"===t&&this._initSource(),"appendTo"===t&&this.menu.element.appendTo(this._appendTo()),"disabled"===t&&e&&this.xhr&&this.xhr.abort()},_isEventTargetInWidget:function(e){var i=this.menu.element[0];return e.target===this.element[0]||e.target===i||t.contains(i,e.target)},_closeOnClickOutside:function(t){this._isEventTargetInWidget(t)||this.close()},_appendTo:function(){var e=this.options.appendTo;return e&&(e=e.jquery||e.nodeType?t(e):this.document.find(e).eq(0)),e&&e[0]||(e=this.element.closest(".ui-front, dialog")),e.length||(e=this.document[0].body),e},_initSource:function(){var e,i,s=this;t.isArray(this.options.source)?(e=this.options.source,this.source=function(i,s){s(t.ui.autocomplete.filter(e,i.term))}):"string"==typeof this.options.source?(i=this.options.source,this.source=function(e,n){s.xhr&&s.xhr.abort(),s.xhr=t.ajax({url:i,data:e,dataType:"json",success:function(t){n(t)},error:function(){n([])}})}):this.source=this.options.source},_searchTimeout:function(t){clearTimeout(this.searching),this.searching=this._delay(function(){var e=this.term===this._value(),i=this.menu.element.is(":visible"),s=t.altKey||t.ctrlKey||t.metaKey||t.shiftKey;(!e||e&&!i&&!s)&&(this.selectedItem=null,this.search(null,t))},this.options.delay)},search:function(t,e){return t=null!=t?t:this._value(),this.term=this._value(),t.length").append(t("
        ").text(i.label)).appendTo(e)},_move:function(t,e){return this.menu.element.is(":visible")?this.menu.isFirstItem()&&/^previous/.test(t)||this.menu.isLastItem()&&/^next/.test(t)?(this.isMultiLine||this._value(this.term),this.menu.blur(),void 0):(this.menu[t](e),void 0):(this.search(null,e),void 0)},widget:function(){return this.menu.element},_value:function(){return this.valueMethod.apply(this.element,arguments)},_keyEvent:function(t,e){(!this.isMultiLine||this.menu.element.is(":visible"))&&(this._move(t,e),e.preventDefault())},_isContentEditable:function(t){if(!t.length)return!1;var e=t.prop("contentEditable");return"inherit"===e?this._isContentEditable(t.parent()):"true"===e}}),t.extend(t.ui.autocomplete,{escapeRegex:function(t){return t.replace(/[\-\[\]{}()*+?.,\\\^$|#\s]/g,"\\$&")},filter:function(e,i){var s=RegExp(t.ui.autocomplete.escapeRegex(i),"i");return t.grep(e,function(t){return s.test(t.label||t.value||t)})}}),t.widget("ui.autocomplete",t.ui.autocomplete,{options:{messages:{noResults:"No search results.",results:function(t){return t+(t>1?" results are":" result is")+" available, use up and down arrow keys to navigate."}}},__response:function(e){var i;this._superApply(arguments),this.options.disabled||this.cancelSearch||(i=e&&e.length?this.options.messages.results(e.length):this.options.messages.noResults,this.liveRegion.children().hide(),t("
        ").text(i).appendTo(this.liveRegion))}}),t.ui.autocomplete}); \ No newline at end of file diff --git a/docs/javadoc/script-dir/jquery-ui.structure.min.css b/docs/javadoc/script-dir/jquery-ui.structure.min.css new file mode 100644 index 00000000..e8808927 --- /dev/null +++ b/docs/javadoc/script-dir/jquery-ui.structure.min.css @@ -0,0 +1,5 @@ +/*! jQuery UI - v1.12.1 - 2018-12-06 +* http://jqueryui.com +* Copyright jQuery Foundation and other contributors; Licensed MIT */ + +.ui-helper-hidden{display:none}.ui-helper-hidden-accessible{border:0;clip:rect(0 0 0 0);height:1px;margin:-1px;overflow:hidden;padding:0;position:absolute;width:1px}.ui-helper-reset{margin:0;padding:0;border:0;outline:0;line-height:1.3;text-decoration:none;font-size:100%;list-style:none}.ui-helper-clearfix:before,.ui-helper-clearfix:after{content:"";display:table;border-collapse:collapse}.ui-helper-clearfix:after{clear:both}.ui-helper-zfix{width:100%;height:100%;top:0;left:0;position:absolute;opacity:0;filter:Alpha(Opacity=0)}.ui-front{z-index:100}.ui-state-disabled{cursor:default!important;pointer-events:none}.ui-icon{display:inline-block;vertical-align:middle;margin-top:-.25em;position:relative;text-indent:-99999px;overflow:hidden;background-repeat:no-repeat}.ui-widget-icon-block{left:50%;margin-left:-8px;display:block}.ui-widget-overlay{position:fixed;top:0;left:0;width:100%;height:100%}.ui-autocomplete{position:absolute;top:0;left:0;cursor:default}.ui-menu{list-style:none;padding:0;margin:0;display:block;outline:0}.ui-menu .ui-menu{position:absolute}.ui-menu .ui-menu-item{margin:0;cursor:pointer;list-style-image:url("data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7")}.ui-menu .ui-menu-item-wrapper{position:relative;padding:3px 1em 3px .4em}.ui-menu .ui-menu-divider{margin:5px 0;height:0;font-size:0;line-height:0;border-width:1px 0 0 0}.ui-menu .ui-state-focus,.ui-menu .ui-state-active{margin:-1px}.ui-menu-icons{position:relative}.ui-menu-icons 
.ui-menu-item-wrapper{padding-left:2em}.ui-menu .ui-icon{position:absolute;top:0;bottom:0;left:.2em;margin:auto 0}.ui-menu .ui-menu-icon{left:auto;right:0} \ No newline at end of file diff --git a/docs/javadoc/script.js b/docs/javadoc/script.js new file mode 100644 index 00000000..0765364e --- /dev/null +++ b/docs/javadoc/script.js @@ -0,0 +1,132 @@ +/* + * Copyright (c) 2013, 2020, Oracle and/or its affiliates. All rights reserved. + * ORACLE PROPRIETARY/CONFIDENTIAL. Use is subject to license terms. + * + * + * + * + * + * + * + * + * + * + * + * + * + * + * + * + * + * + * + * + */ + +var moduleSearchIndex; +var packageSearchIndex; +var typeSearchIndex; +var memberSearchIndex; +var tagSearchIndex; +function loadScripts(doc, tag) { + createElem(doc, tag, 'search.js'); + + createElem(doc, tag, 'module-search-index.js'); + createElem(doc, tag, 'package-search-index.js'); + createElem(doc, tag, 'type-search-index.js'); + createElem(doc, tag, 'member-search-index.js'); + createElem(doc, tag, 'tag-search-index.js'); +} + +function createElem(doc, tag, path) { + var script = doc.createElement(tag); + var scriptElement = doc.getElementsByTagName(tag)[0]; + script.src = pathtoroot + path; + scriptElement.parentNode.insertBefore(script, scriptElement); +} + +function show(tableId, selected, columns) { + if (tableId !== selected) { + document.querySelectorAll('div.' + tableId + ':not(.' + selected + ')') + .forEach(function(elem) { + elem.style.display = 'none'; + }); + } + document.querySelectorAll('div.' + selected) + .forEach(function(elem, index) { + elem.style.display = ''; + var isEvenRow = index % (columns * 2) < columns; + elem.classList.remove(isEvenRow ? oddRowColor : evenRowColor); + elem.classList.add(isEvenRow ? 
evenRowColor : oddRowColor); + }); + updateTabs(tableId, selected); +} + +function updateTabs(tableId, selected) { + document.querySelector('div#' + tableId +' .summary-table') + .setAttribute('aria-labelledby', selected); + document.querySelectorAll('button[id^="' + tableId + '"]') + .forEach(function(tab, index) { + if (selected === tab.id || (tableId === selected && index === 0)) { + tab.className = activeTableTab; + tab.setAttribute('aria-selected', true); + tab.setAttribute('tabindex',0); + } else { + tab.className = tableTab; + tab.setAttribute('aria-selected', false); + tab.setAttribute('tabindex',-1); + } + }); +} + +function switchTab(e) { + var selected = document.querySelector('[aria-selected=true]'); + if (selected) { + if ((e.keyCode === 37 || e.keyCode === 38) && selected.previousSibling) { + // left or up arrow key pressed: move focus to previous tab + selected.previousSibling.click(); + selected.previousSibling.focus(); + e.preventDefault(); + } else if ((e.keyCode === 39 || e.keyCode === 40) && selected.nextSibling) { + // right or down arrow key pressed: move focus to next tab + selected.nextSibling.click(); + selected.nextSibling.focus(); + e.preventDefault(); + } + } +} + +var updateSearchResults = function() {}; + +function indexFilesLoaded() { + return moduleSearchIndex + && packageSearchIndex + && typeSearchIndex + && memberSearchIndex + && tagSearchIndex; +} + +// Workaround for scroll position not being included in browser history (8249133) +document.addEventListener("DOMContentLoaded", function(e) { + var contentDiv = document.querySelector("div.flex-content"); + window.addEventListener("popstate", function(e) { + if (e.state !== null) { + contentDiv.scrollTop = e.state; + } + }); + window.addEventListener("hashchange", function(e) { + history.replaceState(contentDiv.scrollTop, document.title); + }); + contentDiv.addEventListener("scroll", function(e) { + var timeoutID; + if (!timeoutID) { + timeoutID = setTimeout(function() { + 
history.replaceState(contentDiv.scrollTop, document.title); + timeoutID = null; + }, 100); + } + }); + if (!location.hash) { + history.replaceState(contentDiv.scrollTop, document.title); + } +}); diff --git a/docs/javadoc/search.js b/docs/javadoc/search.js new file mode 100644 index 00000000..13aba853 --- /dev/null +++ b/docs/javadoc/search.js @@ -0,0 +1,354 @@ +/* + * Copyright (c) 2015, 2020, Oracle and/or its affiliates. All rights reserved. + * ORACLE PROPRIETARY/CONFIDENTIAL. Use is subject to license terms. + * + * + * + * + * + * + * + * + * + * + * + * + * + * + * + * + * + * + * + * + */ + +var noResult = {l: "No results found"}; +var loading = {l: "Loading search index..."}; +var catModules = "Modules"; +var catPackages = "Packages"; +var catTypes = "Classes and Interfaces"; +var catMembers = "Members"; +var catSearchTags = "Search Tags"; +var highlight = "$&"; +var searchPattern = ""; +var fallbackPattern = ""; +var RANKING_THRESHOLD = 2; +var NO_MATCH = 0xffff; +var MIN_RESULTS = 3; +var MAX_RESULTS = 500; +var UNNAMED = ""; +function escapeHtml(str) { + return str.replace(//g, ">"); +} +function getHighlightedText(item, matcher, fallbackMatcher) { + var escapedItem = escapeHtml(item); + var highlighted = escapedItem.replace(matcher, highlight); + if (highlighted === escapedItem) { + highlighted = escapedItem.replace(fallbackMatcher, highlight) + } + return highlighted; +} +function getURLPrefix(ui) { + var urlPrefix=""; + var slash = "/"; + if (ui.item.category === catModules) { + return ui.item.l + slash; + } else if (ui.item.category === catPackages && ui.item.m) { + return ui.item.m + slash; + } else if (ui.item.category === catTypes || ui.item.category === catMembers) { + if (ui.item.m) { + urlPrefix = ui.item.m + slash; + } else { + $.each(packageSearchIndex, function(index, item) { + if (item.m && ui.item.p === item.l) { + urlPrefix = item.m + slash; + } + }); + } + } + return urlPrefix; +} +function createSearchPattern(term) { + var pattern = 
""; + var isWordToken = false; + term.replace(/,\s*/g, ", ").trim().split(/\s+/).forEach(function(w, index) { + if (index > 0) { + // whitespace between identifiers is significant + pattern += (isWordToken && /^\w/.test(w)) ? "\\s+" : "\\s*"; + } + var tokens = w.split(/(?=[A-Z,.()<>[\/])/); + for (var i = 0; i < tokens.length; i++) { + var s = tokens[i]; + if (s === "") { + continue; + } + pattern += $.ui.autocomplete.escapeRegex(s); + isWordToken = /\w$/.test(s); + if (isWordToken) { + pattern += "([a-z0-9_$<>\\[\\]]*?)"; + } + } + }); + return pattern; +} +function createMatcher(pattern, flags) { + var isCamelCase = /[A-Z]/.test(pattern); + return new RegExp(pattern, flags + (isCamelCase ? "" : "i")); +} +var watermark = 'Search'; +$(function() { + var search = $("#search-input"); + var reset = $("#reset-button"); + search.val(''); + search.prop("disabled", false); + reset.prop("disabled", false); + search.val(watermark).addClass('watermark'); + search.blur(function() { + if ($(this).val().length === 0) { + $(this).val(watermark).addClass('watermark'); + } + }); + search.on('click keydown paste', function() { + if ($(this).val() === watermark) { + $(this).val('').removeClass('watermark'); + } + }); + reset.click(function() { + search.val('').focus(); + }); + search.focus()[0].setSelectionRange(0, 0); +}); +$.widget("custom.catcomplete", $.ui.autocomplete, { + _create: function() { + this._super(); + this.widget().menu("option", "items", "> :not(.ui-autocomplete-category)"); + }, + _renderMenu: function(ul, items) { + var rMenu = this; + var currentCategory = ""; + rMenu.menu.bindings = $(); + $.each(items, function(index, item) { + var li; + if (item.category && item.category !== currentCategory) { + ul.append("
<li class=\"ui-autocomplete-category\">" + item.category + "</li>");
+            currentCategory = item.category;
+        }
+        li = rMenu._renderItemData(ul, item);
+        if (item.category) {
+            li.attr("aria-label", item.category + " : " + item.l);
+            li.attr("class", "result-item");
+        } else {
+            li.attr("aria-label", item.l);
+            li.attr("class", "result-item");
+        }
+    });
+    },
+    _renderItem: function(ul, item) {
+        var label = "";
+        var matcher = createMatcher(escapeHtml(searchPattern), "g");
+        var fallbackMatcher = new RegExp(fallbackPattern, "gi")
+        if (item.category === catModules) {
+            label = getHighlightedText(item.l, matcher, fallbackMatcher);
+        } else if (item.category === catPackages) {
+            label = getHighlightedText(item.l, matcher, fallbackMatcher);
+        } else if (item.category === catTypes) {
+            label = (item.p && item.p !== UNNAMED)
+                    ? getHighlightedText(item.p + "." + item.l, matcher, fallbackMatcher)
+                    : getHighlightedText(item.l, matcher, fallbackMatcher);
+        } else if (item.category === catMembers) {
+            label = (item.p && item.p !== UNNAMED)
+                    ? getHighlightedText(item.p + "." + item.c + "." + item.l, matcher, fallbackMatcher)
+                    : getHighlightedText(item.c + "." + item.l, matcher, fallbackMatcher);
+        } else if (item.category === catSearchTags) {
+            label = getHighlightedText(item.l, matcher, fallbackMatcher);
+        } else {
+            label = item.l;
+        }
+        var li = $("<li/>").appendTo(ul);
+        var div = $("<div/>").appendTo(li);
+        if (item.category === catSearchTags && item.h) {
+            if (item.d) {
+                div.html(label + " (" + item.h + ")<br><span class=\"search-tag-desc-result\">"
+                        + item.d + "</span><br>");
+            } else {
+                div.html(label + " (" + item.h + ")");
+            }
+        } else {
+            if (item.m) {
+                div.html(item.m + "/" + label);
+            } else {
+                div.html(label);
+            }
+        }
+        return li;
+    }
+});
+function rankMatch(match, category) {
+    if (!match) {
+        return NO_MATCH;
+    }
+    var index = match.index;
+    var input = match.input;
+    var leftBoundaryMatch = 2;
+    var periferalMatch = 0;
+    // make sure match is anchored on a left word boundary
+    if (index === 0 || /\W/.test(input[index - 1]) || "_" === input[index]) {
+        leftBoundaryMatch = 0;
+    } else if ("_" === input[index - 1] || (input[index] === input[index].toUpperCase() && !/^[A-Z0-9_$]+$/.test(input))) {
+        leftBoundaryMatch = 1;
+    }
+    var matchEnd = index + match[0].length;
+    var leftParen = input.indexOf("(");
+    var endOfName = leftParen > -1 ? leftParen : input.length;
+    // exclude peripheral matches
+    if (category !== catModules && category !== catSearchTags) {
+        var delim = category === catPackages ? "/" : ".";
+        if (leftParen > -1 && leftParen < index) {
+            periferalMatch += 2;
+        } else if (input.lastIndexOf(delim, endOfName) >= matchEnd) {
+            periferalMatch += 2;
+        }
+    }
+    var delta = match[0].length === endOfName ?
0 : 1; // rank full match higher than partial match + for (var i = 1; i < match.length; i++) { + // lower ranking if parts of the name are missing + if (match[i]) + delta += match[i].length; + } + if (category === catTypes) { + // lower ranking if a type name contains unmatched camel-case parts + if (/[A-Z]/.test(input.substring(matchEnd))) + delta += 5; + if (/[A-Z]/.test(input.substring(0, index))) + delta += 5; + } + return leftBoundaryMatch + periferalMatch + (delta / 200); + +} +function doSearch(request, response) { + var result = []; + searchPattern = createSearchPattern(request.term); + fallbackPattern = createSearchPattern(request.term.toLowerCase()); + if (searchPattern === "") { + return this.close(); + } + var camelCaseMatcher = createMatcher(searchPattern, ""); + var fallbackMatcher = new RegExp(fallbackPattern, "i"); + + function searchIndexWithMatcher(indexArray, matcher, category, nameFunc) { + if (indexArray) { + var newResults = []; + $.each(indexArray, function (i, item) { + item.category = category; + var ranking = rankMatch(matcher.exec(nameFunc(item)), category); + if (ranking < RANKING_THRESHOLD) { + newResults.push({ranking: ranking, item: item}); + } + return newResults.length <= MAX_RESULTS; + }); + return newResults.sort(function(e1, e2) { + return e1.ranking - e2.ranking; + }).map(function(e) { + return e.item; + }); + } + return []; + } + function searchIndex(indexArray, category, nameFunc) { + var primaryResults = searchIndexWithMatcher(indexArray, camelCaseMatcher, category, nameFunc); + result = result.concat(primaryResults); + if (primaryResults.length <= MIN_RESULTS && !camelCaseMatcher.ignoreCase) { + var secondaryResults = searchIndexWithMatcher(indexArray, fallbackMatcher, category, nameFunc); + result = result.concat(secondaryResults.filter(function (item) { + return primaryResults.indexOf(item) === -1; + })); + } + } + + searchIndex(moduleSearchIndex, catModules, function(item) { return item.l; }); + 
searchIndex(packageSearchIndex, catPackages, function(item) { + return (item.m && request.term.indexOf("/") > -1) + ? (item.m + "/" + item.l) : item.l; + }); + searchIndex(typeSearchIndex, catTypes, function(item) { + return request.term.indexOf(".") > -1 ? item.p + "." + item.l : item.l; + }); + searchIndex(memberSearchIndex, catMembers, function(item) { + return request.term.indexOf(".") > -1 + ? item.p + "." + item.c + "." + item.l : item.l; + }); + searchIndex(tagSearchIndex, catSearchTags, function(item) { return item.l; }); + + if (!indexFilesLoaded()) { + updateSearchResults = function() { + doSearch(request, response); + } + result.unshift(loading); + } else { + updateSearchResults = function() {}; + } + response(result); +} +$(function() { + $("#search-input").catcomplete({ + minLength: 1, + delay: 300, + source: doSearch, + response: function(event, ui) { + if (!ui.content.length) { + ui.content.push(noResult); + } else { + $("#search-input").empty(); + } + }, + autoFocus: true, + focus: function(event, ui) { + return false; + }, + position: { + collision: "flip" + }, + select: function(event, ui) { + if (ui.item.category) { + var url = getURLPrefix(ui); + if (ui.item.category === catModules) { + url += "module-summary.html"; + } else if (ui.item.category === catPackages) { + if (ui.item.u) { + url = ui.item.u; + } else { + url += ui.item.l.replace(/\./g, '/') + "/package-summary.html"; + } + } else if (ui.item.category === catTypes) { + if (ui.item.u) { + url = ui.item.u; + } else if (ui.item.p === UNNAMED) { + url += ui.item.l + ".html"; + } else { + url += ui.item.p.replace(/\./g, '/') + "/" + ui.item.l + ".html"; + } + } else if (ui.item.category === catMembers) { + if (ui.item.p === UNNAMED) { + url += ui.item.c + ".html" + "#"; + } else { + url += ui.item.p.replace(/\./g, '/') + "/" + ui.item.c + ".html" + "#"; + } + if (ui.item.u) { + url += ui.item.u; + } else { + url += ui.item.l; + } + } else if (ui.item.category === catSearchTags) { + url += 
ui.item.u; + } + if (top !== window) { + parent.classFrame.location = pathtoroot + url; + } else { + window.location.href = pathtoroot + url; + } + $("#search-input").focus(); + } + } + }); +}); diff --git a/docs/javadoc/stylesheet.css b/docs/javadoc/stylesheet.css new file mode 100644 index 00000000..836c62da --- /dev/null +++ b/docs/javadoc/stylesheet.css @@ -0,0 +1,865 @@ +/* + * Javadoc style sheet + */ + +@import url('resources/fonts/dejavu.css'); + +/* + * Styles for individual HTML elements. + * + * These are styles that are specific to individual HTML elements. Changing them affects the style of a particular + * HTML element throughout the page. + */ + +body { + background-color:#ffffff; + color:#353833; + font-family:'DejaVu Sans', Arial, Helvetica, sans-serif; + font-size:14px; + margin:0; + padding:0; + height:100%; + width:100%; +} +iframe { + margin:0; + padding:0; + height:100%; + width:100%; + overflow-y:scroll; + border:none; +} +a:link, a:visited { + text-decoration:none; + color:#4A6782; +} +a[href]:hover, a[href]:focus { + text-decoration:none; + color:#bb7a2a; +} +a[name] { + color:#353833; +} +pre { + font-family:'DejaVu Sans Mono', monospace; + font-size:14px; +} +h1 { + font-size:20px; +} +h2 { + font-size:18px; +} +h3 { + font-size:16px; +} +h4 { + font-size:15px; +} +h5 { + font-size:14px; +} +h6 { + font-size:13px; +} +ul { + list-style-type:disc; +} +code, tt { + font-family:'DejaVu Sans Mono', monospace; +} +:not(h1, h2, h3, h4, h5, h6) > code, +:not(h1, h2, h3, h4, h5, h6) > tt { + font-size:14px; + padding-top:4px; + margin-top:8px; + line-height:1.4em; +} +dt code { + font-family:'DejaVu Sans Mono', monospace; + font-size:14px; + padding-top:4px; +} +.summary-table dt code { + font-family:'DejaVu Sans Mono', monospace; + font-size:14px; + vertical-align:top; + padding-top:4px; +} +sup { + font-size:8px; +} +button { + font-family: 'DejaVu Sans', Arial, Helvetica, sans-serif; + font-size: 14px; +} +/* + * Styles for HTML generated by 
javadoc. + * + * These are style classes that are used by the standard doclet to generate HTML documentation. + */ + +/* + * Styles for document title and copyright. + */ +.clear { + clear:both; + height:0; + overflow:hidden; +} +.about-language { + float:right; + padding:0 21px 8px 8px; + font-size:11px; + margin-top:-9px; + height:2.9em; +} +.legal-copy { + margin-left:.5em; +} +.tab { + background-color:#0066FF; + color:#ffffff; + padding:8px; + width:5em; + font-weight:bold; +} +/* + * Styles for navigation bar. + */ +@media screen { + .flex-box { + position:fixed; + display:flex; + flex-direction:column; + height: 100%; + width: 100%; + } + .flex-header { + flex: 0 0 auto; + } + .flex-content { + flex: 1 1 auto; + overflow-y: auto; + } +} +.top-nav { + background-color:#4D7A97; + color:#FFFFFF; + float:left; + padding:0; + width:100%; + clear:right; + min-height:2.8em; + padding-top:10px; + overflow:hidden; + font-size:12px; +} +.sub-nav { + background-color:#dee3e9; + float:left; + width:100%; + overflow:hidden; + font-size:12px; +} +.sub-nav div { + clear:left; + float:left; + padding:0 0 5px 6px; + text-transform:uppercase; +} +.sub-nav .nav-list { + padding-top:5px; +} +ul.nav-list { + display:block; + margin:0 25px 0 0; + padding:0; +} +ul.sub-nav-list { + float:left; + margin:0 25px 0 0; + padding:0; +} +ul.nav-list li { + list-style:none; + float:left; + padding: 5px 6px; + text-transform:uppercase; +} +.sub-nav .nav-list-search { + float:right; + margin:0 0 0 0; + padding:5px 6px; + clear:none; +} +.nav-list-search label { + position:relative; + right:-16px; +} +ul.sub-nav-list li { + list-style:none; + float:left; + padding-top:10px; +} +.top-nav a:link, .top-nav a:active, .top-nav a:visited { + color:#FFFFFF; + text-decoration:none; + text-transform:uppercase; +} +.top-nav a:hover { + text-decoration:none; + color:#bb7a2a; + text-transform:uppercase; +} +.nav-bar-cell1-rev { + background-color:#F8981D; + color:#253441; + margin: auto 5px; +} 
+.skip-nav { + position:absolute; + top:auto; + left:-9999px; + overflow:hidden; +} +/* + * Hide navigation links and search box in print layout + */ +@media print { + ul.nav-list, div.sub-nav { + display:none; + } +} +/* + * Styles for page header and footer. + */ +.title { + color:#2c4557; + margin:10px 0; +} +.sub-title { + margin:5px 0 0 0; +} +.header ul { + margin:0 0 15px 0; + padding:0; +} +.header ul li, .footer ul li { + list-style:none; + font-size:13px; +} +/* + * Styles for headings. + */ +body.class-declaration-page .summary h2, +body.class-declaration-page .details h2, +body.class-use-page h2, +body.module-declaration-page .block-list h2 { + font-style: italic; + padding:0; + margin:15px 0; +} +body.class-declaration-page .summary h3, +body.class-declaration-page .details h3, +body.class-declaration-page .summary .inherited-list h2 { + background-color:#dee3e9; + border:1px solid #d0d9e0; + margin:0 0 6px -8px; + padding:7px 5px; +} +/* + * Styles for page layout containers. + */ +main { + clear:both; + padding:10px 20px; + position:relative; +} +dl.notes > dt { + font-family: 'DejaVu Sans', Arial, Helvetica, sans-serif; + font-size:12px; + font-weight:bold; + margin:10px 0 0 0; + color:#4E4E4E; +} +dl.notes > dd { + margin:5px 10px 10px 0; + font-size:14px; + font-family:'DejaVu Serif', Georgia, "Times New Roman", Times, serif; +} +dl.name-value > dt { + margin-left:1px; + font-size:1.1em; + display:inline; + font-weight:bold; +} +dl.name-value > dd { + margin:0 0 0 1px; + font-size:1.1em; + display:inline; +} +/* + * Styles for lists. 
+ */ +li.circle { + list-style:circle; +} +ul.horizontal li { + display:inline; + font-size:0.9em; +} +div.inheritance { + margin:0; + padding:0; +} +div.inheritance div.inheritance { + margin-left:2em; +} +ul.block-list, +ul.details-list, +ul.member-list, +ul.summary-list { + margin:10px 0 10px 0; + padding:0; +} +ul.block-list > li, +ul.details-list > li, +ul.member-list > li, +ul.summary-list > li { + list-style:none; + margin-bottom:15px; + line-height:1.4; +} +.summary-table dl, .summary-table dl dt, .summary-table dl dd { + margin-top:0; + margin-bottom:1px; +} +ul.see-list, ul.see-list-long { + padding-left: 0; + list-style: none; +} +ul.see-list li { + display: inline; +} +ul.see-list li:not(:last-child):after, +ul.see-list-long li:not(:last-child):after { + content: ", "; + white-space: pre-wrap; +} +/* + * Styles for tables. + */ +.summary-table, .details-table { + width:100%; + border-spacing:0; + border-left:1px solid #EEE; + border-right:1px solid #EEE; + border-bottom:1px solid #EEE; + padding:0; +} +.caption { + position:relative; + text-align:left; + background-repeat:no-repeat; + color:#253441; + font-weight:bold; + clear:none; + overflow:hidden; + padding:0; + padding-top:10px; + padding-left:1px; + margin:0; + white-space:pre; +} +.caption a:link, .caption a:visited { + color:#1f389c; +} +.caption a:hover, +.caption a:active { + color:#FFFFFF; +} +.caption span { + white-space:nowrap; + padding-top:5px; + padding-left:12px; + padding-right:12px; + padding-bottom:7px; + display:inline-block; + float:left; + background-color:#F8981D; + border: none; + height:16px; +} +div.table-tabs { + padding:10px 0 0 1px; + margin:0; +} +div.table-tabs > button { + border: none; + cursor: pointer; + padding: 5px 12px 7px 12px; + font-weight: bold; + margin-right: 3px; +} +div.table-tabs > button.active-table-tab { + background: #F8981D; + color: #253441; +} +div.table-tabs > button.table-tab { + background: #4D7A97; + color: #FFFFFF; +} +.two-column-summary { + 
display: grid; + grid-template-columns: minmax(15%, max-content) minmax(15%, auto); +} +.three-column-summary { + display: grid; + grid-template-columns: minmax(10%, max-content) minmax(15%, max-content) minmax(15%, auto); +} +.four-column-summary { + display: grid; + grid-template-columns: minmax(10%, max-content) minmax(10%, max-content) minmax(10%, max-content) minmax(10%, auto); +} +@media screen and (max-width: 600px) { + .two-column-summary { + display: grid; + grid-template-columns: 1fr; + } +} +@media screen and (max-width: 800px) { + .three-column-summary { + display: grid; + grid-template-columns: minmax(10%, max-content) minmax(25%, auto); + } + .three-column-summary .col-last { + grid-column-end: span 2; + } +} +@media screen and (max-width: 1000px) { + .four-column-summary { + display: grid; + grid-template-columns: minmax(15%, max-content) minmax(15%, auto); + } +} +.summary-table > div, .details-table > div { + text-align:left; + padding: 8px 3px 3px 7px; +} +.col-first, .col-second, .col-last, .col-constructor-name, .col-summary-item-name { + vertical-align:top; + padding-right:0; + padding-top:8px; + padding-bottom:3px; +} +.table-header { + background:#dee3e9; + font-weight: bold; +} +.col-first, .col-first { + font-size:13px; +} +.col-second, .col-second, .col-last, .col-constructor-name, .col-summary-item-name, .col-last { + font-size:13px; +} +.col-first, .col-second, .col-constructor-name { + vertical-align:top; + overflow: auto; +} +.col-last { + white-space:normal; +} +.col-first a:link, .col-first a:visited, +.col-second a:link, .col-second a:visited, +.col-first a:link, .col-first a:visited, +.col-second a:link, .col-second a:visited, +.col-constructor-name a:link, .col-constructor-name a:visited, +.col-summary-item-name a:link, .col-summary-item-name a:visited, +.constant-values-container a:link, .constant-values-container a:visited, +.all-classes-container a:link, .all-classes-container a:visited, +.all-packages-container a:link, 
.all-packages-container a:visited { + font-weight:bold; +} +.table-sub-heading-color { + background-color:#EEEEFF; +} +.even-row-color, .even-row-color .table-header { + background-color:#FFFFFF; +} +.odd-row-color, .odd-row-color .table-header { + background-color:#EEEEEF; +} +/* + * Styles for contents. + */ +.deprecated-content { + margin:0; + padding:10px 0; +} +div.block { + font-size:14px; + font-family:'DejaVu Serif', Georgia, "Times New Roman", Times, serif; +} +.col-last div { + padding-top:0; +} +.col-last a { + padding-bottom:3px; +} +.module-signature, +.package-signature, +.type-signature, +.member-signature { + font-family:'DejaVu Sans Mono', monospace; + font-size:14px; + margin:14px 0; + white-space: pre-wrap; +} +.module-signature, +.package-signature, +.type-signature { + margin-top: 0; +} +.member-signature .type-parameters-long, +.member-signature .parameters, +.member-signature .exceptions { + display: inline-block; + vertical-align: top; + white-space: pre; +} +.member-signature .type-parameters { + white-space: normal; +} +/* + * Styles for formatting effect. 
+ */ +.source-line-no { + color:green; + padding:0 30px 0 0; +} +h1.hidden { + visibility:hidden; + overflow:hidden; + font-size:10px; +} +.block { + display:block; + margin:0 10px 5px 0; + color:#474747; +} +.deprecated-label, .descfrm-type-label, .implementation-label, .member-name-label, .member-name-link, +.module-label-in-package, .module-label-in-type, .override-specify-label, .package-label-in-type, +.package-hierarchy-label, .type-name-label, .type-name-link, .search-tag-link, .preview-label { + font-weight:bold; +} +.deprecation-comment, .help-footnote, .preview-comment { + font-style:italic; +} +.deprecation-block { + font-size:14px; + font-family:'DejaVu Serif', Georgia, "Times New Roman", Times, serif; + border-style:solid; + border-width:thin; + border-radius:10px; + padding:10px; + margin-bottom:10px; + margin-right:10px; + display:inline-block; +} +.preview-block { + font-size:14px; + font-family:'DejaVu Serif', Georgia, "Times New Roman", Times, serif; + border-style:solid; + border-width:thin; + border-radius:10px; + padding:10px; + margin-bottom:10px; + margin-right:10px; + display:inline-block; +} +div.block div.deprecation-comment { + font-style:normal; +} +/* + * Styles specific to HTML5 elements. + */ +main, nav, header, footer, section { + display:block; +} +/* + * Styles for javadoc search. 
+ */ +.ui-autocomplete-category { + font-weight:bold; + font-size:15px; + padding:7px 0 7px 3px; + background-color:#4D7A97; + color:#FFFFFF; +} +.result-item { + font-size:13px; +} +.ui-autocomplete { + max-height:85%; + max-width:65%; + overflow-y:scroll; + overflow-x:scroll; + white-space:nowrap; + box-shadow: 0 3px 6px rgba(0,0,0,0.16), 0 3px 6px rgba(0,0,0,0.23); +} +ul.ui-autocomplete { + position:fixed; + z-index:999999; +} +ul.ui-autocomplete li { + float:left; + clear:both; + width:100%; +} +.result-highlight { + font-weight:bold; +} +#search-input { + background-image:url('resources/glass.png'); + background-size:13px; + background-repeat:no-repeat; + background-position:2px 3px; + padding-left:20px; + position:relative; + right:-18px; + width:400px; +} +#reset-button { + background-color: rgb(255,255,255); + background-image:url('resources/x.png'); + background-position:center; + background-repeat:no-repeat; + background-size:12px; + border:0 none; + width:16px; + height:16px; + position:relative; + left:-4px; + top:-4px; + font-size:0px; +} +.watermark { + color:#545454; +} +.search-tag-desc-result { + font-style:italic; + font-size:11px; +} +.search-tag-holder-result { + font-style:italic; + font-size:12px; +} +.search-tag-result:target { + background-color:yellow; +} +.module-graph span { + display:none; + position:absolute; +} +.module-graph:hover span { + display:block; + margin: -100px 0 0 100px; + z-index: 1; +} +.inherited-list { + margin: 10px 0 10px 0; +} +section.class-description { + line-height: 1.4; +} +.summary section[class$="-summary"], .details section[class$="-details"], +.class-uses .detail, .serialized-class-details { + padding: 0px 20px 5px 10px; + border: 1px solid #ededed; + background-color: #f8f8f8; +} +.inherited-list, section[class$="-details"] .detail { + padding:0 0 5px 8px; + background-color:#ffffff; + border:none; +} +.vertical-separator { + padding: 0 5px; +} +ul.help-section-list { + margin: 0; +} +ul.help-subtoc > li { 
+ display: inline-block; + padding-right: 5px; + font-size: smaller; +} +ul.help-subtoc > li::before { + content: "\2022" ; + padding-right:2px; +} +span.help-note { + font-style: italic; +} +/* + * Indicator icon for external links. + */ +main a[href*="://"]::after { + content:""; + display:inline-block; + background-image:url('data:image/svg+xml; utf8, \ + \ + \ + '); + background-size:100% 100%; + width:7px; + height:7px; + margin-left:2px; + margin-bottom:4px; +} +main a[href*="://"]:hover::after, +main a[href*="://"]:focus::after { + background-image:url('data:image/svg+xml; utf8, \ + \ + \ + '); +} + +/* + * Styles for user-provided tables. + * + * borderless: + * No borders, vertical margins, styled caption. + * This style is provided for use with existing doc comments. + * In general, borderless tables should not be used for layout purposes. + * + * plain: + * Plain borders around table and cells, vertical margins, styled caption. + * Best for small tables or for complex tables for tables with cells that span + * rows and columns, when the "striped" style does not work well. + * + * striped: + * Borders around the table and vertical borders between cells, striped rows, + * vertical margins, styled caption. + * Best for tables that have a header row, and a body containing a series of simple rows. 
+ */ + +table.borderless, +table.plain, +table.striped { + margin-top: 10px; + margin-bottom: 10px; +} +table.borderless > caption, +table.plain > caption, +table.striped > caption { + font-weight: bold; + font-size: smaller; +} +table.borderless th, table.borderless td, +table.plain th, table.plain td, +table.striped th, table.striped td { + padding: 2px 5px; +} +table.borderless, +table.borderless > thead > tr > th, table.borderless > tbody > tr > th, table.borderless > tr > th, +table.borderless > thead > tr > td, table.borderless > tbody > tr > td, table.borderless > tr > td { + border: none; +} +table.borderless > thead > tr, table.borderless > tbody > tr, table.borderless > tr { + background-color: transparent; +} +table.plain { + border-collapse: collapse; + border: 1px solid black; +} +table.plain > thead > tr, table.plain > tbody tr, table.plain > tr { + background-color: transparent; +} +table.plain > thead > tr > th, table.plain > tbody > tr > th, table.plain > tr > th, +table.plain > thead > tr > td, table.plain > tbody > tr > td, table.plain > tr > td { + border: 1px solid black; +} +table.striped { + border-collapse: collapse; + border: 1px solid black; +} +table.striped > thead { + background-color: #E3E3E3; +} +table.striped > thead > tr > th, table.striped > thead > tr > td { + border: 1px solid black; +} +table.striped > tbody > tr:nth-child(even) { + background-color: #EEE +} +table.striped > tbody > tr:nth-child(odd) { + background-color: #FFF +} +table.striped > tbody > tr > th, table.striped > tbody > tr > td { + border-left: 1px solid black; + border-right: 1px solid black; +} +table.striped > tbody > tr > th { + font-weight: normal; +} +/** + * Tweak font sizes and paddings for small screens. 
+ */ +@media screen and (max-width: 1050px) { + #search-input { + width: 300px; + } +} +@media screen and (max-width: 800px) { + #search-input { + width: 200px; + } + .top-nav, + .bottom-nav { + font-size: 11px; + padding-top: 6px; + } + .sub-nav { + font-size: 11px; + } + .about-language { + padding-right: 16px; + } + ul.nav-list li, + .sub-nav .nav-list-search { + padding: 6px; + } + ul.sub-nav-list li { + padding-top: 5px; + } + main { + padding: 10px; + } + .summary section[class$="-summary"], .details section[class$="-details"], + .class-uses .detail, .serialized-class-details { + padding: 0 8px 5px 8px; + } + body { + -webkit-text-size-adjust: none; + } +} +@media screen and (max-width: 500px) { + #search-input { + width: 150px; + } + .top-nav, + .bottom-nav { + font-size: 10px; + } + .sub-nav { + font-size: 10px; + } + .about-language { + font-size: 10px; + padding-right: 12px; + } +} diff --git a/docs/javadoc/tag-search-index.js b/docs/javadoc/tag-search-index.js new file mode 100644 index 00000000..0367dae6 --- /dev/null +++ b/docs/javadoc/tag-search-index.js @@ -0,0 +1 @@ +tagSearchIndex = [];updateSearchResults(); \ No newline at end of file diff --git a/docs/javadoc/type-search-index.js b/docs/javadoc/type-search-index.js new file mode 100644 index 00000000..85734293 --- /dev/null +++ b/docs/javadoc/type-search-index.js @@ -0,0 +1 @@ +typeSearchIndex = [{"p":"mklab.JGNN.core.matrix","l":"AccessCol"},{"p":"mklab.JGNN.core.matrix","l":"AccessRow"},{"p":"mklab.JGNN.core.tensor","l":"AccessSubtensor"},{"p":"mklab.JGNN.nn.loss","l":"Accuracy"},{"p":"mklab.JGNN.nn.optimizers","l":"Adam"},{"p":"mklab.JGNN.nn.operations","l":"Add"},{"l":"All Classes and 
Interfaces","u":"allclasses-index.html"},{"p":"mklab.JGNN.nn.operations","l":"Attention"},{"p":"mklab.JGNN.nn.optimizers","l":"BatchOptimizer"},{"p":"mklab.JGNN.nn.loss","l":"BinaryCrossEntropy"},{"p":"mklab.JGNN.nn.loss","l":"CategoricalCrossEntropy"},{"p":"mklab.JGNN.adhoc.datasets","l":"Citeseer"},{"p":"mklab.JGNN.core.matrix","l":"ColumnRepetition"},{"p":"mklab.JGNN.nn.operations","l":"Complement"},{"p":"mklab.JGNN.nn.operations","l":"Concat"},{"p":"mklab.JGNN.nn.inputs","l":"Constant"},{"p":"mklab.JGNN.adhoc.datasets","l":"Cora"},{"p":"mklab.JGNN.adhoc","l":"Dataset"},{"p":"mklab.JGNN.core.matrix","l":"DenseMatrix"},{"p":"mklab.JGNN.core.tensor","l":"DenseTensor"},{"p":"mklab.JGNN.core.matrix","l":"Diagonal"},{"p":"mklab.JGNN.core","l":"Distribution"},{"p":"mklab.JGNN.nn.operations","l":"Dropout"},{"p":"mklab.JGNN.core.empy","l":"EmptyMatrix"},{"p":"mklab.JGNN.core.empy","l":"EmptyTensor"},{"p":"mklab.JGNN.nn.activations","l":"Exp"},{"p":"mklab.JGNN.adhoc.parsers","l":"FastBuilder"},{"p":"mklab.JGNN.core.util","l":"FastEntry"},{"p":"mklab.JGNN.nn.operations","l":"From"},{"p":"mklab.JGNN.nn.operations","l":"Gather"},{"p":"mklab.JGNN.nn.optimizers","l":"GradientDescent"},{"p":"mklab.JGNN.adhoc","l":"IdConverter"},{"p":"mklab.JGNN.nn.operations","l":"Identity"},{"p":"mklab.JGNN.nn","l":"Initializer"},{"p":"mklab.JGNN.nn.initializers","l":"KaimingNormal"},{"p":"mklab.JGNN.nn.initializers","l":"KaimingUniform"},{"p":"mklab.JGNN.nn.activations","l":"L1"},{"p":"mklab.JGNN.adhoc.parsers","l":"LayeredBuilder"},{"p":"mklab.JGNN.nn.operations","l":"Log"},{"p":"mklab.JGNN.nn","l":"Loss"},{"p":"mklab.JGNN.core.util","l":"Loss"},{"p":"mklab.JGNN.nn.activations","l":"LRelu"},{"p":"mklab.JGNN.nn.operations","l":"LSTM"},{"p":"mklab.JGNN.nn.operations","l":"LSTM.LSTMState"},{"p":"mklab.JGNN.nn.operations","l":"MatMul"},{"p":"mklab.JGNN.core","l":"Matrix"},{"p":"mklab.JGNN.nn.pooling","l":"Max"},{"p":"mklab.JGNN.nn.pooling","l":"Mean"},{"p":"mklab.JGNN.core","l":"Memory"},{"p":"m
klab.JGNN.nn","l":"Model"},{"p":"mklab.JGNN.adhoc","l":"ModelBuilder"},{"p":"mklab.JGNN.nn","l":"ModelTraining"},{"p":"mklab.JGNN.nn.operations","l":"Multiply"},{"p":"mklab.JGNN.adhoc.parsers","l":"Neuralang"},{"p":"mklab.JGNN.nn.activations","l":"NExp"},{"p":"mklab.JGNN.nn","l":"NNOperation"},{"p":"mklab.JGNN.core.distribution","l":"Normal"},{"p":"mklab.JGNN.nn","l":"Optimizer"},{"p":"mklab.JGNN.nn.inputs","l":"Parameter"},{"p":"mklab.JGNN.nn.activations","l":"PRelu"},{"p":"mklab.JGNN.adhoc.datasets","l":"Pubmed"},{"p":"mklab.JGNN.core.util","l":"Range"},{"p":"mklab.JGNN.core.util","l":"Range2D"},{"p":"mklab.JGNN.nn.operations","l":"Reduce"},{"p":"mklab.JGNN.nn.optimizers","l":"Regularization"},{"p":"mklab.JGNN.nn.activations","l":"Relu"},{"p":"mklab.JGNN.nn.operations","l":"Repeat"},{"p":"mklab.JGNN.core.matrix","l":"RepeatMatrix"},{"p":"mklab.JGNN.core.tensor","l":"RepeatTensor"},{"p":"mklab.JGNN.nn.operations","l":"Reshape"},{"p":"mklab.JGNN.core.matrix","l":"RowRepetition"},{"p":"mklab.JGNN.core","l":"Memory.Scope"},{"p":"mklab.JGNN.nn.activations","l":"Sigmoid"},{"p":"mklab.JGNN.core","l":"Slice"},{"p":"mklab.JGNN.nn.pooling","l":"SoftMax"},{"p":"mklab.JGNN.core.util","l":"Sort"},{"p":"mklab.JGNN.nn.pooling","l":"Sort"},{"p":"mklab.JGNN.core.matrix","l":"SparseMatrix"},{"p":"mklab.JGNN.core.matrix","l":"SparseSymmetric"},{"p":"mklab.JGNN.core.tensor","l":"SparseTensor"},{"p":"mklab.JGNN.nn.pooling","l":"Sum"},{"p":"mklab.JGNN.nn.activations","l":"Tanh"},{"p":"mklab.JGNN.core","l":"Tensor"},{"p":"mklab.JGNN.core","l":"ThreadPool"},{"p":"mklab.JGNN.nn.operations","l":"To"},{"p":"mklab.JGNN.nn.operations","l":"Transpose"},{"p":"mklab.JGNN.core.matrix","l":"TransposedMatrix"},{"p":"mklab.JGNN.core.distribution","l":"Uniform"},{"p":"mklab.JGNN.nn.inputs","l":"Variable"},{"p":"mklab.JGNN.nn.initializers","l":"VariancePreservingInitializer"},{"p":"mklab.JGNN.core.matrix","l":"VectorizedMatrix"},{"p":"mklab.JGNN.core.tensor","l":"VectorizedTensor"},{"p":"mklab.JGNN.nn
.loss.report","l":"VerboseLoss"},{"p":"mklab.JGNN.core.matrix","l":"WrapCols"},{"p":"mklab.JGNN.core.matrix","l":"WrapRows"},{"p":"mklab.JGNN.nn.initializers","l":"XavierNormal"},{"p":"mklab.JGNN.nn.initializers","l":"XavierUniform"}];updateSearchResults(); \ No newline at end of file