2.7. Documentation
We cannot possibly introduce every single function and class in this book (and the information might quickly become outdated anyway), but the API documentation and additional tutorials and examples provide plenty of documentation beyond it. This section provides some guidance for exploring these APIs, using PyTorch, MXNet, JAX, and TensorFlow as examples.
import torch
from mxnet import np
import jax
import tensorflow as tf
2.7.1. Functions and Classes in a Module
To know which functions and classes can be called in a module, we invoke the dir function. For instance, we can query all properties in the module for generating random numbers.
print(dir(torch.distributions))
['AbsTransform', 'AffineTransform', 'Bernoulli', 'Beta', 'Binomial', 'CatTransform', 'Categorical', 'Cauchy', 'Chi2', 'ComposeTransform', 'ContinuousBernoulli', 'CorrCholeskyTransform', 'CumulativeDistributionTransform', 'Dirichlet', 'Distribution', 'ExpTransform', 'Exponential', 'ExponentialFamily', 'FisherSnedecor', 'Gamma', 'Geometric', 'Gumbel', 'HalfCauchy', 'HalfNormal', 'Independent', 'IndependentTransform', 'Kumaraswamy', 'LKJCholesky', 'Laplace', 'LogNormal', 'LogisticNormal', 'LowRankMultivariateNormal', 'LowerCholeskyTransform', 'MixtureSameFamily', 'Multinomial', 'MultivariateNormal', 'NegativeBinomial', 'Normal', 'OneHotCategorical', 'OneHotCategoricalStraightThrough', 'Pareto', 'Poisson', 'PositiveDefiniteTransform', 'PowerTransform', 'RelaxedBernoulli', 'RelaxedOneHotCategorical', 'ReshapeTransform', 'SigmoidTransform', 'SoftmaxTransform', 'SoftplusTransform', 'StackTransform', 'StickBreakingTransform', 'StudentT', 'TanhTransform', 'Transform', 'TransformedDistribution', 'Uniform', 'VonMises', 'Weibull', 'Wishart', '__all__', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__', 'bernoulli', 'beta', 'biject_to', 'binomial', 'categorical', 'cauchy', 'chi2', 'constraint_registry', 'constraints', 'continuous_bernoulli', 'dirichlet', 'distribution', 'exp_family', 'exponential', 'fishersnedecor', 'gamma', 'geometric', 'gumbel', 'half_cauchy', 'half_normal', 'identity_transform', 'independent', 'kl', 'kl_divergence', 'kumaraswamy', 'laplace', 'lkj_cholesky', 'log_normal', 'logistic_normal', 'lowrank_multivariate_normal', 'mixture_same_family', 'multinomial', 'multivariate_normal', 'negative_binomial', 'normal', 'one_hot_categorical', 'pareto', 'poisson', 'register_kl', 'relaxed_bernoulli', 'relaxed_categorical', 'studentT', 'transform_to', 'transformed_distribution', 'transforms', 'uniform', 'utils', 'von_mises', 'weibull', 'wishart']
print(dir(np.random))
['__all__', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__spec__', '_mx_nd_np', 'beta', 'chisquare', 'choice', 'exponential', 'gamma', 'gumbel', 'logistic', 'lognormal', 'multinomial', 'multivariate_normal', 'normal', 'pareto', 'power', 'rand', 'randint', 'randn', 'rayleigh', 'shuffle', 'uniform', 'weibull']
print(dir(jax.random))
['KeyArray', 'PRNGKey', 'PRNGKeyArray', '_PRNGKeyArray', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__spec__', 'ball', 'bernoulli', 'beta', 'bits', 'categorical', 'cauchy', 'chisquare', 'choice', 'default_prng_impl', 'dirichlet', 'double_sided_maxwell', 'exponential', 'f', 'fold_in', 'gamma', 'generalized_normal', 'geometric', 'gumbel', 'key', 'key_data', 'laplace', 'loggamma', 'logistic', 'maxwell', 'multivariate_normal', 'normal', 'orthogonal', 'pareto', 'permutation', 'poisson', 'rademacher', 'randint', 'random_gamma_p', 'rayleigh', 'rbg_key', 'shuffle', 'split', 't', 'threefry2x32_key', 'threefry2x32_p', 'threefry_2x32', 'truncated_normal', 'typing', 'uniform', 'unsafe_rbg_key', 'wald', 'weibull_min']
print(dir(tf.random))
['Algorithm', 'Generator', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__', '_sys', 'all_candidate_sampler', 'categorical', 'create_rng_state', 'experimental', 'fixed_unigram_candidate_sampler', 'fold_in', 'gamma', 'get_global_generator', 'learned_unigram_candidate_sampler', 'log_uniform_candidate_sampler', 'normal', 'poisson', 'set_global_generator', 'set_seed', 'shuffle', 'split', 'stateless_binomial', 'stateless_categorical', 'stateless_gamma', 'stateless_normal', 'stateless_parameterized_truncated_normal', 'stateless_poisson', 'stateless_truncated_normal', 'stateless_uniform', 'truncated_normal', 'uniform', 'uniform_candidate_sampler']
Generally, we can ignore functions that start and end with __ (special objects in Python) and functions that start with a single _ (usually internal functions). Based on the remaining function or attribute names, we might hazard a guess that this module offers various methods for generating random numbers, including sampling from the uniform distribution (uniform), the normal distribution (normal), and the multinomial distribution (multinomial).
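To make such an inspection easier, we can filter the listing down to its public part. Below is a minimal sketch (an illustration added here, reusing the np.random module imported above) that simply drops every name starting with an underscore:

from mxnet import np  # same import as at the top of this section

# Keep only the public names, i.e. those that do not start with an underscore.
public_names = [name for name in dir(np.random) if not name.startswith('_')]
print(public_names)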
2.7.2. Specific Functions and Classes
For specific instructions on how to use a given function or class, we can invoke the help function. As an example, let's explore the usage instructions for the tensor ones function.
help(torch.ones)
Help on built-in function ones in module torch:

ones(...)
    ones(*size, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor

    Returns a tensor filled with the scalar value 1, with the shape defined
    by the variable argument size.

    Args:
        size (int...): a sequence of integers defining the shape of the output tensor.
            Can be a variable number of arguments or a collection like a list or tuple.

    Keyword arguments:
        out (Tensor, optional): the output tensor.
        dtype (torch.dtype, optional): the desired data type of returned tensor.
            Default: if None, uses a global default (see torch.set_default_tensor_type()).
        layout (torch.layout, optional): the desired layout of returned Tensor.
            Default: torch.strided.
        device (torch.device, optional): the desired device of returned tensor.
            Default: if None, uses the current device for the default tensor type
            (see torch.set_default_tensor_type()). device will be the CPU for CPU
            tensor types and the current CUDA device for CUDA tensor types.
        requires_grad (bool, optional): If autograd should record operations on the
            returned tensor. Default: False.

    Example::

        >>> torch.ones(2, 3)
        tensor([[ 1.,  1.,  1.],
                [ 1.,  1.,  1.]])

        >>> torch.ones(5)
        tensor([ 1.,  1.,  1.,  1.,  1.])
help(np.ones)
Help on function ones in module mxnet.numpy:

ones(shape, dtype=<class 'numpy.float32'>, order='C', ctx=None)
    Return a new array of given shape and type, filled with ones.
    This function currently only supports storing multi-dimensional data
    in row-major (C-style).

    Parameters
    ----------
    shape : int or tuple of int
        The shape of the empty array.
    dtype : str or numpy.dtype, optional
        An optional value type. Default is numpy.float32. Note that this
        behavior is different from NumPy's ones function where float64
        is the default value, because float32 is considered as the default
        data type in deep learning.
    order : {'C'}, optional, default: 'C'
        How to store multi-dimensional data in memory, currently only
        row-major (C-style) is supported.
    ctx : Context, optional
        An optional device context (default is the current default context).

    Returns
    -------
    out : ndarray
        Array of ones with the given shape, dtype, and ctx.

    Examples
    --------
    >>> np.ones(5)
    array([1., 1., 1., 1., 1.])

    >>> np.ones((5,), dtype=int)
    array([1, 1, 1, 1, 1], dtype=int64)

    >>> np.ones((2, 1))
    array([[1.],
           [1.]])

    >>> s = (2,2)
    >>> np.ones(s)
    array([[1., 1.],
           [1., 1.]])
help(jax.numpy.ones)
Help on function ones in module jax._src.numpy.lax_numpy:

ones(shape: Any, dtype: Union[Any, str, numpy.dtype, jax._src.SupportsDType, NoneType] = None) -> jax.Array
    Return a new array of given shape and type, filled with ones.

    LAX-backend implementation of numpy.ones().

    Original docstring below.

    Parameters
    ----------
    shape : int or sequence of ints
        Shape of the new array, e.g., (2, 3) or 2.
    dtype : data-type, optional
        The desired data-type for the array, e.g., numpy.int8. Default is
        numpy.float64.

    Returns
    -------
    out : ndarray
        Array of ones with the given shape, dtype, and order.
help(tf.ones)
Help on function ones in module tensorflow.python.ops.array_ops:

ones(shape, dtype=tf.float32, name=None)
    Creates a tensor with all elements set to one (1).

    See also tf.ones_like, tf.zeros, tf.fill, tf.eye.

    This operation returns a tensor of type dtype with shape shape and
    all elements set to one.

    >>> tf.ones([3, 4], tf.int32)
    <tf.Tensor: shape=(3, 4), dtype=int32, numpy=
    array([[1, 1, 1, 1],
           [1, 1, 1, 1],
           [1, 1, 1, 1]], dtype=int32)>

    Args:
      shape: A list of integers, a tuple of integers, or a 1-D Tensor of type int32.
      dtype: Optional DType of an element in the resulting Tensor. Default is tf.float32.
      name: Optional string. A name for the operation.

    Returns:
      A Tensor with all elements set to one (1).
From the documentation, we can see that the ones function creates a new tensor with the specified shape and sets all of its elements to 1. Whenever possible, you should run a quick test to confirm your interpretation.
torch.ones(4)
tensor([1., 1., 1., 1.])
np.ones(4)
[22:07:42] ../src/storage/storage.cc:196: Using Pooled (Naive) StorageManager for CPU
array([1., 1., 1., 1.])
jax.numpy.ones(4)
No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)
Array([1., 1., 1., 1.], dtype=float32)
tf.ones(4)
<tf.Tensor: shape=(4,), dtype=float32, numpy=array([1., 1., 1., 1.], dtype=float32)>
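The help output also lists keyword arguments such as dtype. As a further quick check (an added illustration, not part of the original tests), we can confirm that it overrides the default floating-point element type:

# According to the documentation, dtype selects the element type of the result.
torch.ones(2, 3, dtype=torch.int64)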
In a Jupyter notebook, we can use ? to display the documentation in another window. For example, list? will produce content that is almost identical to help(list) and display it in a new browser window. In addition, if we use two question marks, such as list??, the Python code implementing the function will also be displayed.
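Outside of a Jupyter notebook, the standard inspect module offers a comparable, purely programmatic way to look at a function's source. Below is a minimal sketch, assuming the target is implemented in Python (this is the case for mxnet.numpy.ones, but not for C++ built-ins such as torch.ones):

import inspect
from mxnet import np

# Print the Python source of np.ones, roughly what np.ones?? would show in Jupyter.
print(inspect.getsource(np.ones))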
The official documentation provides plenty of descriptions and examples that are beyond the scope of this book. Rather than aiming for comprehensive coverage, we emphasize important use cases that will get you started quickly on practical problems. We also encourage you to study the source code of these libraries to see examples of high-quality production code. By doing this you will become a better engineer in addition to becoming a better scientist.