Packages

  • package root
    Definition Classes
    root
  • package lamp

    Lamp provides utilities to build state-of-the-art machine learning applications

    Overview

    Notable types and packages:

    • lamp.STen is a memory-managed wrapper around aten.ATen, an off-heap, native n-dimensional array backed by libtorch.
    • lamp.autograd implements reverse mode automatic differentiation.
    • lamp.nn contains neural network building blocks, see e.g. lamp.nn.Linear.
    • lamp.data.IOLoops implements a training loop and other data related abstractions.
    • lamp.knn implements k-nearest neighbor search on the CPU and GPU.
    • lamp.umap.Umap implements the UMAP dimension reduction algorithm.
    • lamp.onnx implements serialization of computation graphs into ONNX format.
    • lamp.io contains CSV and NPY readers.
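
    A minimal sketch of the memory management model follows (Scope and the STen factory methods are documented below; the shape accessor on STen is an assumption):

    lamp.Scope.root { implicit scope =>
      // allocate a 2x3 single precision tensor on the CPU,
      // registered to the implicit scope
      val t = STen.ones(List(2L, 3L), STen.fOptions)
      println(t.shape)
    } // the native memory backing `t` is released when the scope closes
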
    How to get data into lamp

    Use one of the file readers in lamp.io or one of the factories in lamp.STen$.
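
    For instance, a hedged sketch of copying a JVM array into a 2x2 tensor with STen.fromDoubleArray (signature as listed in lamp.STen$ below):

    lamp.Scope.root { implicit scope =>
      val t = STen.fromDoubleArray(
        Array(1d, 2d, 3d, 4d), // flat contents in row major order
        List(2L, 2L),          // shape
        lamp.CPU,              // device
        lamp.DoublePrecision   // floating point precision
      )
    }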

    How to define a custom neural network layer

    See the documentation on lamp.nn.GenericModule.

    How to compose neural network layers

    See the documentation on lamp.nn.

    How to train models

    See the training loops in lamp.data.IOLoops.

    Definition Classes
    root
  • package autograd

    Implements reverse mode automatic differentiation

    The main types in this package are lamp.autograd.Variable and lamp.autograd.Op. The computational graph built by this package consists of vertices representing values (as lamp.autograd.Variable) and vertices representing operations (as lamp.autograd.Op).

    Variables hold the value of an R^n => R^m function. A Variable may also hold the partial derivative of a single downstream scalar with respect to its own value. A Variable whose value is a scalar (m = 1) can trigger the computation of the partial derivatives of all upstream intermediate Variables. Computing partial derivatives with respect to non-scalar Variables is not supported.

    A leaf Variable may be created with the const or param factory methods in this package. const is for constants which do not need their partial derivatives computed; param, on the other hand, creates Variables which will have their partial derivatives filled in. Further Variables may be created by the methods on Variable, eventually expressing more complex R^n => R^m functions.

    Example
    lamp.Scope.root{ implicit scope =>
      // x is a leaf created with `const`: it depends on no other variables
      // and will not compute a partial derivative
      val x = lamp.autograd.const(STen.eye(3, STenOptions.d))
      // y is also a leaf, but created with `param`: it will accumulate
      // its partial derivative
      val y = lamp.autograd.param(STen.ones(List(3,3), STenOptions.d))
    
      // z is a Variable with x and y as direct dependencies
      val z = x+y
    
      // w is a Variable with z as a direct and x, y as transitive dependencies
      val w = z.sum
      // w is a scalar (number of elements is 1), thus we can call backprop() on it.
      // calling backprop fills out the partial derivatives of the upstream variables
      w.backprop()
    
      // partialDerivative is empty since we created `x` with `const`
      assert(x.partialDerivative.isEmpty)
    
      // `y`'s partial derivative is defined and has been computed:
      // it holds the partial derivative of `w` (the scalar on which we
      // called backprop()) with respect to `y`
      assert(y.partialDerivative.isDefined)
    
    }

    This package may be used to compute the derivative of any function that can be composed out of the provided methods. A particular use case is gradient-based optimization.
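
    As a further sketch, the gradient of f(x) = sum(x * x) is 2x; this assumes Variable provides an elementwise * analogous to the + used above:

    lamp.Scope.root { implicit scope =>
      val x = lamp.autograd.param(STen.ones(List(3), STenOptions.d))
      // f is a scalar Variable: the sum of the elementwise product x * x
      val f = (x * x).sum
      f.backprop()
      // x's partialDerivative now holds the gradient of f, i.e. 2 * x
      assert(x.partialDerivative.isDefined)
    }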

    Definition Classes
    lamp
    See also

    https://arxiv.org/pdf/1811.05031.pdf for a review of the algorithm

    lamp.autograd.Op for how to implement a new operation

  • package data
    Definition Classes
    lamp
  • package distributed
    Definition Classes
    lamp
  • package extratrees
    Definition Classes
    lamp
  • package knn
    Definition Classes
    lamp
  • package nn

    Provides building blocks for neural networks

    Notable types:

    Optimizers:

    Modules facilitating composing other modules:

    • nn.Sequential composes a homogeneous list of modules (analogous to List)
    • nn.sequence composes a heterogeneous list of modules (analogous to tuples)
    • nn.EitherModule composes two modules in a scala.Either
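
    A hedged sketch of composing two modules with nn.sequence; the Linear factory and its parameters (in, out, tOpt) are assumptions here, consult lamp.nn.Linear for the exact signature:

    lamp.Scope.root { implicit scope =>
      // the composed module applies its members in order
      val mlp = lamp.nn.sequence(
        lamp.nn.Linear(in = 10, out = 32, tOpt = STenOptions.d),
        lamp.nn.Linear(in = 32, out = 1, tOpt = STenOptions.d)
      )
    }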

    Examples of neural network building blocks, layers, etc.:

    Definition Classes
    lamp
  • package onnx
    Definition Classes
    lamp
  • package saddle
    Definition Classes
    lamp
  • package table
    Definition Classes
    lamp
  • package umap
    Definition Classes
    lamp
  • package util
    Definition Classes
    lamp
  • BufferPair
  • CPU
  • ColumnSelection
  • CudaDevice
  • Device
  • DoublePrecision
  • EmptyMovable
  • FloatingPointPrecision
  • HalfPrecision
  • MPS
  • Movable
  • NcclUniqueId
  • STen
  • STenOptions
  • Scope
  • SinglePrecision
  • TensorHelpers

object STen extends Serializable

Companion object of lamp.STen

Linear Supertypes
Serializable, AnyRef, Any

Type Members

  1. implicit class OwnedSyntax extends AnyRef

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##: Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. def addOut(out: STen, self: STen, other: STen, alpha: Double): Unit
  5. def addcdivOut(out: STen, self: STen, tensor1: STen, tensor2: STen, alpha: Double): Unit
  6. def addcmulOut(out: STen, self: STen, tensor1: STen, tensor2: STen, alpha: Double): Unit
  7. def addmmOut(out: STen, self: STen, mat1: STen, mat2: STen, beta: Double, alpha: Double): Unit
  8. def arange[S](start: Double, end: Double, step: Double, tensorOptions: STenOptions = STen.dOptions)(implicit arg0: Sc[S]): STen
  9. def arange_l[S](start: Long, end: Long, step: Long, tensorOptions: STenOptions = STen.lOptions)(implicit arg0: Sc[S]): STen
  10. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  11. def atan2[S](y: STen, x: STen)(implicit arg0: Sc[S]): STen
  12. val bOptions: STenOptions

    A tensor option specifying CPU and byte

  13. val bf16Options: STenOptions
  14. def bmmOut(out: STen, self: STen, other: STen): Unit
  15. def cartesianProduct[S](list: List[STen])(implicit arg0: Sc[S]): STen
  16. def cat[S](tensors: Seq[STen], dim: Long)(implicit arg0: Sc[S]): STen
  17. def catOut(out: STen, tensors: Seq[STen], dim: Int): Unit
  18. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.CloneNotSupportedException]) @IntrinsicCandidate() @native()
  19. val dOptions: STenOptions

    A tensor option specifying CPU and double

  20. def divOut(out: STen, self: STen, other: STen): Unit
  21. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  22. def equals(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef → Any
  23. def eye[S](n: Int, m: Int, tensorOptions: STenOptions)(implicit arg0: Sc[S]): STen
  24. def eye[S](n: Int, tensorOptions: STenOptions = STen.dOptions)(implicit arg0: Sc[S]): STen
  25. val fOptions: STenOptions

    A tensor option specifying CPU and float

  26. def free(value: Tensor): STen

    Wraps a tensor without registering it to any scope. Memory may leak.

  27. def fromByteArray[S](ar: Array[Byte], dim: Seq[Long], device: Device)(implicit arg0: Sc[S]): STen

    Returns a tensor with the given content and shape on the given device

  28. def fromDoubleArray[S](ar: Array[Double], dim: Seq[Long], device: Device, precision: FloatingPointPrecision)(implicit arg0: Sc[S]): STen

    Returns a tensor with the given content and shape on the given device

  29. def fromFile[S](path: String, offset: Long, length: Long, scalarTypeByte: Byte, pin: Boolean)(implicit arg0: Sc[S]): STen

    Create tensor directly from file. Memory maps the file into host memory; data is not passed through the JVM. The returned tensor is always on the CPU device.

    path

    file path

    offset

    byte offset into the file. Must be page aligned (usually multiple of 4096)

    length

    byte length of the data

    scalarTypeByte

    scalar type (byte=1,short=2,int=3,long=4,half=5,float=6,double=7)

    pin

    if true the mapped segment will be page locked with mlock(2)

    returns

    tensor on CPU
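
    A hedged sketch (the file name and sizes are hypothetical): mapping 800 bytes of doubles from the start of a file into a CPU tensor.

    lamp.Scope.root { implicit scope =>
      val t = STen.fromFile(
        path = "data.bin",    // hypothetical file holding 100 doubles
        offset = 0L,          // page aligned
        length = 800L,        // 100 doubles * 8 bytes
        scalarTypeByte = 7,   // double, per the mapping above
        pin = false
      )
    }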

  30. def fromFloatArray[S](ar: Array[Float], dim: Seq[Long], device: Device)(implicit arg0: Sc[S]): STen

    Returns a tensor with the given content and shape on the given device

  31. def fromIntArray[S](ar: Array[Int], dim: Seq[Long], device: Device)(implicit arg0: Sc[S]): STen

    Returns a tensor with the given content and shape on the given device

  32. def fromLongArray[S](ar: Array[Long])(implicit arg0: Sc[S]): STen

    Returns a 1D CPU tensor with the given content

  33. def fromLongArray[S](ar: Array[Long], dim: Seq[Long], device: Device)(implicit arg0: Sc[S]): STen

    Returns a tensor with the given content and shape on the given device

  34. def fromLongArrayOfArrays[S](ar: Array[Array[Long]], dim: Seq[Long], device: Device)(implicit arg0: Sc[S]): STen
  35. def fromShortArray[S](ar: Array[Short], dim: Seq[Long], device: Device)(implicit arg0: Sc[S]): STen

    Returns a tensor with the given content and shape on the given device

  36. final def getClass(): Class[_ <: AnyRef]
    Definition Classes
    AnyRef → Any
    Annotations
    @IntrinsicCandidate() @native()
  37. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @IntrinsicCandidate() @native()
  38. val iOptions: STenOptions

    A tensor option specifying CPU and int

  39. def indexCopyOut(out: STen, self: STen, dim: Int, index: STen, source: STen): Unit
  40. def indexSelectOut(out: STen, self: STen, dim: Int, index: STen): Unit
  41. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  42. val lOptions: STenOptions

    A tensor option specifying CPU and long

  43. def linspace[S](start: Double, end: Double, steps: Long, tensorOptions: STenOptions = STen.dOptions)(implicit arg0: Sc[S]): STen
  44. def lstsq[S](A: STen, B: STen)(implicit arg0: Sc[S]): (STen, STen, STen, STen)
  45. def meanOut(out: STen, self: STen, dim: Seq[Int], keepDim: Boolean): Unit
  46. def mmOut(out: STen, self: STen, other: STen): Unit
  47. def mse_loss[S](self: STen, target: STen, reduction: Long)(implicit arg0: Sc[S]): STen
  48. def mse_loss_backward[S](gradOutput: STen, self: STen, target: STen, reduction: Long)(implicit arg0: Sc[S]): STen
  49. def mulOut(out: STen, self: STen, other: STen): Unit
  50. def multinomial[S](probs: STen, numSamples: Int, replacement: Boolean)(implicit arg0: Sc[S]): STen
  51. def ncclBoadcast(tensors: Seq[(STen, NcclComm)]): Unit

    Broadcasts the tensor on the root rank to the clique. Blocks until all peers execute the broadcast. Takes a list of tensors for the case where a single thread manages multiple GPUs.

  52. def ncclInitComm(nRanks: Int, myRank: Int, myDevice: Int, ncclUniqueId: NcclUniqueId): NcclComm

    Blocks until all peers join the clique.

  53. def ncclReduce(inputs: Seq[(STen, NcclComm)], output: STen, rootRank: Int): Unit

    Reduction with +. The output must be on the root rank. Blocks until all peers execute the reduce. Takes a list of tensors for the case where a single thread manages multiple GPUs.

  54. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  55. def normal[S](mean: Double, std: Double, size: Seq[Long], options: STenOptions)(implicit arg0: Sc[S]): STen
  56. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @IntrinsicCandidate() @native()
  57. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @IntrinsicCandidate() @native()
  58. def ones[S](size: Seq[Long], tensorOptions: STenOptions = STen.dOptions)(implicit arg0: Sc[S]): STen
  59. def onesLike[S](tensor: STen)(implicit arg0: Sc[S]): STen
  60. def onesLike[S](tensor: Tensor)(implicit arg0: Sc[S]): STen
  61. def owned(value: Tensor)(implicit scope: Scope): STen

    Wraps an aten.Tensor and registers it to the given scope

  62. def powOut(out: STen, self: STen, other: STen): Unit
  63. def powOut(out: STen, self: STen, other: Double): Unit
  64. def rand[S](size: Seq[Long], tensorOptions: STenOptions = STen.dOptions)(implicit arg0: Sc[S]): STen
  65. def randint[S](low: Long, high: Long, size: Seq[Long], tensorOptions: STenOptions)(implicit arg0: Sc[S]): STen
  66. def randint[S](high: Long, size: Seq[Long], tensorOptions: STenOptions = STen.dOptions)(implicit arg0: Sc[S]): STen
  67. def randn[S](size: Seq[Long], tensorOptions: STenOptions = STen.dOptions)(implicit arg0: Sc[S]): STen
  68. def randperm[S](n: Long, tensorOptions: STenOptions = STen.dOptions)(implicit arg0: Sc[S]): STen
  69. def remainderOut(out: STen, self: STen, other: Double): Unit
  70. def remainderOut(out: STen, self: STen, other: STen): Unit
  71. def scalarDouble[S](value: Double, options: STenOptions)(implicit arg0: Sc[S]): STen
  72. def scalarLong(value: Long, options: STenOptions)(implicit scope: Scope): STen
  73. def scaledDotProductAttention[S](query: STen, key: STen, value: STen, isCausal: Boolean)(implicit arg0: Sc[S]): (STen, STen)
  74. def scaledDotProductAttentionBackward[S](gradOutput: STen, query: STen, key: STen, value: STen, out: STen, logsumexp: STen, isCausal: Boolean)(implicit arg0: Sc[S]): (STen, STen, STen)
  75. val shOptions: STenOptions

    A tensor option specifying CPU and short

  76. def smooth_l1_loss_backward[S](gradOutput: STen, self: STen, target: STen, reduction: Long, beta: Double)(implicit arg0: Sc[S]): STen
  77. def softplus_backward[S](gradOutput: STen, self: STen, beta: Double, threshold: Double)(implicit arg0: Sc[S]): STen
  78. def sparse_coo[S](indices: STen, values: STen, dim: Seq[Long], tensorOptions: STenOptions = STen.dOptions)(implicit arg0: Sc[S]): STen
  79. def stack[S](tensors: Seq[STen], dim: Long)(implicit arg0: Sc[S]): STen
  80. def subOut(out: STen, self: STen, other: STen, alpha: Double): Unit
  81. def sumOut(out: STen, self: STen, dim: Seq[Int], keepDim: Boolean): Unit
  82. final def synchronized[T0](arg0: => T0): T0
    Definition Classes
    AnyRef
  83. def tanh_backward[S](gradOutput: STen, output: STen)(implicit arg0: Sc[S]): STen
  84. def tensorsFromFile[S](path: String, offset: Long, length: Long, pin: Boolean, tensors: List[(Byte, Long, Long)])(implicit arg0: Sc[S]): Vector[STen]

    Create tensors directly from file. Memory maps the file into host memory; data is not passed through the JVM. The returned tensors are always on the CPU device.

    path

    file path

    offset

    byte offset into the file. Must be page aligned (usually multiple of 4096)

    length

    byte length of the data (all tensors in total)

    pin

    if true the mapped segment will be page locked with mlock(2)

    tensors

    list of tensors with (scalarType, byte offset, byte length), byte offset must be aligned to 8

    returns

    tensors on CPU
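
    A hedged sketch (file name and sizes hypothetical): reading two tensors packed back to back in one mapped region.

    lamp.Scope.root { implicit scope =>
      val ts = STen.tensorsFromFile(
        path = "pack.bin",
        offset = 0L,
        length = 1600L,       // both tensors in total
        pin = false,
        tensors = List(
          (7: Byte, 0L, 800L),   // 100 doubles at bytes [0, 800)
          (6: Byte, 800L, 800L)  // 200 floats at bytes [800, 1600)
        )
      )
    }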

  85. def toString(): String
    Definition Classes
    AnyRef → Any
  86. def to_dense_backward[S](gradOutput: STen, input: STen)(implicit arg0: Sc[S]): STen
  87. def triangularSolve[S](b: STen, A: STen, upper: Boolean, transpose: Boolean, uniTriangular: Boolean)(implicit arg0: Sc[S]): STen
  88. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException])
  89. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException]) @native()
  90. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException])
  91. def where[S](condition: Tensor, self: STen, other: STen)(implicit arg0: Sc[S]): STen
  92. def where[S](condition: STen, self: STen, other: STen)(implicit arg0: Sc[S]): STen
  93. def zeros[S](size: Seq[Long], tensorOptions: STenOptions = STen.dOptions)(implicit arg0: Sc[S]): STen
  94. def zerosLike[S](tensor: STen)(implicit arg0: Sc[S]): STen
  95. def zerosLike[S](tensor: Tensor)(implicit arg0: Sc[S]): STen

Deprecated Value Members

  1. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.Throwable]) @Deprecated
    Deprecated

    (Since version 9)
