Packages

  • package root
    Definition Classes
    root
  • package lamp

    Lamp provides utilities to build state-of-the-art machine learning applications

    Overview

    Notable types and packages:

    • lamp.STen is a memory managed wrapper around aten.ATen, an off-heap, native n-dimensional array backed by libtorch.
    • lamp.autograd implements reverse mode automatic differentiation.
    • lamp.nn contains neural network building blocks, see e.g. lamp.nn.Linear.
    • lamp.data.IOLoops implements a training loop and other data related abstractions.
    • lamp.knn implements k-nearest neighbor search on the CPU and GPU.
    • lamp.umap.Umap implements the UMAP dimension reduction algorithm.
    • lamp.onnx implements serialization of computation graphs into ONNX format.
    • lamp.io contains CSV and NPY readers.

    How to get data into lamp

    Use one of the file readers in lamp.io or one of the factories in lamp.STen$.

    How to define a custom neural network layer

    See the documentation on lamp.nn.GenericModule

    How to compose neural network layers

    See the documentation on lamp.nn

    How to train models

    See the training loops in lamp.data.IOLoops

    Definition Classes
    root
  • package umap
    Definition Classes
    lamp
  • Umap

object Umap

Linear Supertypes
AnyRef, Any

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##: Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  5. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.CloneNotSupportedException]) @native() @IntrinsicCandidate()
  6. def edgeWeights(knnDistances: Mat[Double], knn: Mat[Int]): Mat[Double]
  7. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  8. def equals(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef → Any
  9. final def getClass(): Class[_ <: AnyRef]
    Definition Classes
    AnyRef → Any
    Annotations
    @native() @IntrinsicCandidate()
  10. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native() @IntrinsicCandidate()
  11. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  12. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  13. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native() @IntrinsicCandidate()
  14. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native() @IntrinsicCandidate()
  15. final def synchronized[T0](arg0: => T0): T0
    Definition Classes
    AnyRef
  16. def toString(): String
    Definition Classes
    AnyRef → Any
  17. def umap(data: Mat[Double], device: Device = CPU, precision: FloatingPointPrecision = DoublePrecision, k: Int = 10, numDim: Int = 2, knnMinibatchSize: Int = 1000, lr: Double = 0.1, iterations: Int = 500, minDist: Double = 0.0d, negativeSampleSize: Int = 5, randomSeed: Long = 42L, balanceAttractionsAndRepulsions: Boolean = true, repulsionStrength: Double = 1d, logger: Option[Logger] = None, positiveSamples: Option[Int] = None): (Mat[Double], Mat[Double], Double)

    Dimension reduction similar to UMAP. For reference see https://arxiv.org/abs/1802.03426. This method does not follow the above paper exactly.

    Maximizes the objective function:

        L(x) = L_attraction(x) + L_repulsion(x)

        L_attraction(x) = sum over (i,j) edges of: b_ij * ln(f(x_i, x_j))

    where b_ij is the value of the 'UMAP graph' as in the above paper, x_i is the low dimensional coordinate of the i-th sample, and

        f(x, y) = 1 if ||x - y||_2 < minDist, or exp(-(||x - y||_2 - minDist)) otherwise.

        L_repulsion(x) = sum over (i,j) edges of: (1 - b_ij) * ln(1 - f(x_i, x_j))

    L_repulsion is evaluated with sampling: in each iteration it is approximated by randomly sampling from all (i,j) edges having b_ij = 0.

    Nearest neighbor search is evaluated by brute force. It may be batched, and may be evaluated on the GPU.

    L(x) is maximized by gradient descent, in particular Adam. Derivatives of L(x) are computed using reverse mode automatic differentiation (autograd). Gradient descent may be evaluated on the GPU.

    The distance metric is always Euclidean.

    Differences to the algorithm described in the UMAP paper:

    • The paper describes a smooth approximation of the function 'f' (Definition 11). That approximation is not used in this code.
    • The paper describes an optimization procedure different from the approach taken here. They sample each edge according to b_ij and update the vertices one after the other. The current code updates all locations together according to the derivative of L(x).
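    The low dimensional similarity f and the per-edge objective terms defined above can be written out in plain Scala. This is a sketch only; the names fLowDim, attractionTerm, and repulsionTerm are hypothetical and not part of lamp's API:

    ```scala
    // Illustration of the per-edge objective terms described above.
    // Hypothetical names; not lamp's implementation.
    object UmapObjectiveSketch {
      // f(x, y) = 1 if ||x - y||_2 < minDist, else exp(-(||x - y||_2 - minDist))
      def fLowDim(x: Array[Double], y: Array[Double], minDist: Double): Double = {
        val d = math.sqrt(x.zip(y).map { case (a, b) => (a - b) * (a - b) }.sum)
        if (d < minDist) 1.0 else math.exp(-(d - minDist))
      }

      // contribution of one (i, j) edge to L_attraction
      def attractionTerm(bij: Double, f: Double): Double =
        bij * math.log(f)

      // contribution of one sampled (i, j) edge (b_ij = 0) to L_repulsion;
      // eps guards against log(0) when f = 1
      def repulsionTerm(bij: Double, f: Double, eps: Double = 1e-12): Double =
        (1.0 - bij) * math.log(math.max(1.0 - f, eps))
    }
    ```

    Both terms are non-positive, which is why the combined objective L(x) is maximized: attraction pushes connected points together (f toward 1), repulsion pushes sampled unconnected points apart (f toward 0).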
    data: each row is a sample

    device: device to run the optimization and the KNN search on (GPU or CPU)

    precision: precision of the KNN search; the optimization always runs in double precision

    k: number of nearest neighbors to retrieve; the sample itself is counted as a nearest neighbor

    numDim: number of dimensions to project to

    knnMinibatchSize: the KNN search may be batched if the device cannot fit the whole distance matrix

    lr: learning rate

    iterations: number of epochs of optimization

    minDist: see the above equations for the definition; see the UMAP paper for its effect

    negativeSampleSize: number of negative edges to sample for each positive edge

    balanceAttractionsAndRepulsions: if true, the number of negative samples does not affect the relative strength of attractions and repulsions (see repulsionStrength)

    repulsionStrength: strength of repulsions relative to attractions

    returns

    a triple of the layout, the UMAP graph (b), and the final optimization loss
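    Per the referenced paper, the 'UMAP graph' b (the second element of the returned triple) arises from a fuzzy set union of the directed k-NN membership weights: b = a + aᵀ - a ∘ aᵀ. A minimal sketch of that symmetrization on a dense matrix, for illustration only (lamp's actual edgeWeights works on the k-NN search output and may differ):

    ```scala
    // Fuzzy-union symmetrization b_ij = a_ij + a_ji - a_ij * a_ji
    // of a dense matrix of directed k-NN weights. Illustrative sketch,
    // not lamp's implementation.
    object UmapGraphSketch {
      def symmetrize(a: Array[Array[Double]]): Array[Array[Double]] = {
        val n = a.length
        Array.tabulate(n, n) { (i, j) =>
          a(i)(j) + a(j)(i) - a(i)(j) * a(j)(i)
        }
      }
    }
    ```

    The result is symmetric by construction, and stays in [0, 1] when the inputs do.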

  18. def umapCustomKnn(knn: Mat[Int], knnDistances: Mat[Double], device: Device = CPU, numDim: Int = 2, lr: Double = 0.1, iterations: Int = 500, minDist: Double = 0.0d, negativeSampleSize: Int = 5, randomSeed: Long = 42L, balanceAttractionsAndRepulsions: Boolean = true, repulsionStrength: Double = 1d, logger: Option[Logger] = None, positiveSamples: Option[Int] = None): (Mat[Double], Mat[Double], Double)
  19. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException])
  20. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException]) @native()
  21. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException])

Deprecated Value Members

  1. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.Throwable]) @Deprecated
    Deprecated
