PyTorch

PyTorch is an external Python module that provides a range of functions for machine learning and deep learning.

The PyTorch library offers the same functionality as NumPy for working with multidimensional arrays, but it is much broader and more powerful.

It is particularly useful for processing tensors with the GPU acceleration of graphics cards. I often use PyTorch when working with neural networks, and it is very fast.
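For instance, here is a minimal sketch (assuming PyTorch is already installed) that creates a tensor and moves it to the GPU when one is available:

>>> import torch
>>> x = torch.tensor([[1.0, 2.0], [3.0, 4.0]])               # a 2x2 tensor, created much like a NumPy array
>>> device = "cuda" if torch.cuda.is_available() else "cpu"  # pick the GPU if one is available
>>> y = x.to(device) * 2                                      # element-wise multiply, runs on the selected device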

How to install PyTorch

To use PyTorch, I first need to install it in Python with the pip command.

pip install torch

If that doesn't work, I can always download and install the module manually.

For example, to install PyTorch on Python 3.7:

pip3 install https://download.pytorch.org/whl/cpu/torch-1.0.1-cp37-cp37m-win_amd64.whl
pip3 install torchvision

Obviously, the wheel to download changes depending on the Python version.
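To check which Python version is in use, and therefore which wheel to pick, one quick option is to ask the interpreter itself:

>>> import sys
>>> print(sys.version)   # e.g. a 3.7.x version matches the cp37 wheels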

For up-to-date information, see the official PyTorch page.

Once installed, I can import the PyTorch module in the interpreter.

>>> import torch

If the import completes without errors, the library is ready to use.
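For example, a quick sanity check is to print the installed version and build a small tensor:

>>> import torch
>>> print(torch.__version__)   # the installed PyTorch version, e.g. 1.0.1
>>> torch.zeros(2, 3)          # a 2x3 tensor filled with zeros
tensor([[0., 0., 0.],
        [0., 0., 0.]])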

PyTorch functions and methods

Here is the list of names exposed by the top-level torch module. The exact entries depend on the installed version, and the list can be regenerated at any time, as shown in the sketch after the list.

  • Argument
  • ArgumentSpec
  • Block
  • BoolType
  • ByteStorage
  • ByteTensor
  • CharStorage
  • CharTensor
  • Code
  • CompleteArgumentSpec
  • DoubleStorage
  • DoubleTensor
  • DynamicType
  • ExecutionPlanState
  • FatalError
  • FloatStorage
  • FloatTensor
  • FloatType
  • FunctionSchema
  • Future
  • Generator
  • Gradient
  • Graph
  • GraphExecutor
  • GraphExecutorState
  • HalfStorage
  • HalfStorageBase
  • HalfTensor
  • IODescriptor
  • IS_CONDA
  • IntStorage
  • IntTensor
  • IntType
  • JITException
  • ListType
  • LongStorage
  • LongTensor
  • Node
  • NumberType
  • PyTorchFileReader
  • PyTorchFileWriter
  • ScriptMethod
  • ScriptModule
  • ShortStorage
  • ShortTensor
  • Size
  • Storage
  • Tensor
  • TracingState
  • TupleType
  • Type
  • Use
  • Value
  • _C
  • _StorageBase
  • __all__
  • __builtins__
  • __cached__
  • __doc__
  • __file__
  • __loader__
  • __name__
  • __package__
  • __path__
  • __spec__
  • __version__
  • _argmax
  • _argmin
  • _baddbmm_mkl_
  • _cast_Byte
  • _cast_Char
  • _cast_Double
  • _cast_Float
  • _cast_Half
  • _cast_Int
  • _cast_Long
  • _cast_Short
  • _convolution
  • _convolution_nogroup
  • _copy_same_type_
  • _ctc_loss
  • _cudnn_ctc_loss
  • _cudnn_init_dropout_state
  • _cudnn_rnn
  • _cudnn_rnn_flatten_weight
  • _cufft_clear_plan_cache
  • _cufft_get_plan_cache_max_size
  • _cufft_get_plan_cache_size
  • _cufft_set_plan_cache_max_size
  • _dim_arange
  • _dirichlet_grad
  • _embedding_bag
  • _fft_with_size
  • _fused_dropout
  • _import_dotted_name
  • _jit_internal
  • _log_softmax
  • _log_softmax_backward_data
  • _masked_scale
  • _np
  • _ops
  • _pack_padded_sequence
  • _pad_packed_sequence
  • _promote_types
  • _reshape_from_tensor
  • _s_copy_from
  • _s_where
  • _shape_as_tensor
  • _six
  • _softmax
  • _softmax_backward_data
  • _sparse_addmm
  • _sparse_mm
  • _sparse_sum
  • _standard_gamma
  • _standard_gamma_grad
  • _storage_classes
  • _string_classes
  • _tensor_classes
  • _tensor_str
  • _thnn
  • _trilinear
  • _unique
  • _unique_dim
  • _utils
  • _utils_internal
  • _weight_norm
  • _weight_norm_cuda_interface
  • abs
  • abs_
  • acos
  • acos_
  • adaptive_avg_pool1d
  • adaptive_max_pool1d
  • add
  • add_extra_dll_dir
  • addbmm
  • addcdiv
  • addcmul
  • addmm
  • addmv
  • addmv_
  • addr
  • affine_grid_generator
  • all
  • allclose
  • alpha_dropout
  • alpha_dropout_
  • any
  • arange
  • argmax
  • argmin
  • argsort
  • as_strided
  • as_strided_
  • as_tensor
  • asin
  • asin_
  • atan
  • atan2
  • atan_
  • autograd
  • avg_pool1d
  • backends
  • baddbmm
  • bartlett_window
  • batch_norm
  • bernoulli
  • bilinear
  • binary_cross_entropy_with_logits
  • bincount
  • blackman_window
  • bmm
  • broadcast_tensors
  • btrifact
  • btrifact_with_info
  • btrisolve
  • btriunpack
  • cat
  • ceil
  • ceil_
  • celu
  • celu_
  • chain_matmul
  • cholesky
  • chunk
  • clamp
  • clamp_
  • clamp_max
  • clamp_max_
  • clamp_min
  • clamp_min_
  • clone
  • compiled_with_cxx11_abi
  • complex128
  • complex32
  • complex64
  • constant_pad_nd
  • conv1d
  • conv2d
  • conv3d
  • conv_tbc
  • conv_transpose1d
  • conv_transpose2d
  • conv_transpose3d
  • convolution
  • cos
  • cos_
  • cosh
  • cosh_
  • cosine_embedding_loss
  • cosine_similarity
  • cross
  • ctc_loss
  • cuda
  • cudnn_affine_grid_generator
  • cudnn_batch_norm
  • cudnn_convolution
  • cudnn_convolution_transpose
  • cudnn_grid_sampler
  • cudnn_is_acceptable
  • cumprod
  • cumsum
  • default_generator
  • det
  • detach
  • detach_
  • device
  • diag
  • diag_embed
  • diagflat
  • diagonal
  • digamma
  • dist
  • distributed
  • distributions
  • div
  • dll_paths
  • dot
  • double
  • dropout
  • dropout_
  • dsmm
  • dtype
  • eig
  • einsum
  • embedding
  • embedding_bag
  • embedding_renorm_
  • empty
  • empty_like
  • empty_strided
  • enable_grad
  • eq
  • equal
  • erf
  • erf_
  • erfc
  • erfc_
  • erfinv
  • exp
  • exp_
  • expm1
  • expm1_
  • eye
  • feature_alpha_dropout
  • feature_alpha_dropout_
  • feature_dropout
  • feature_dropout_
  • fft
  • fill_
  • finfo
  • flatten
  • flip
  • float
  • float16
  • float32
  • float64
  • floor
  • floor_
  • fmod
  • fork
  • frac
  • frobenius_norm
  • from_numpy
  • full
  • full_like
  • functional
  • gather
  • ge
  • gels
  • geqrf
  • ger
  • gesv
  • get_default_dtype
  • get_device
  • get_file_path
  • get_num_threads
  • get_nvToolsExt_path
  • get_rng_state
  • grid_sampler
  • grid_sampler_2d
  • grid_sampler_3d
  • group_norm
  • gru
  • gru_cell
  • gt
  • half
  • hamming_window
  • hann_window
  • hardshrink
  • has_cudnn
  • has_lapack
  • has_mkl
  • hinge_embedding_loss
  • histc
  • hsmm
  • hspmm
  • ifft
  • iinfo
  • import_ir_module
  • import_ir_module_from_buffer
  • index_put
  • index_put_
  • index_select
  • initial_seed
  • instance_norm
  • int
  • int16
  • int32
  • int64
  • int8
  • inverse
  • irfft
  • is_anomaly_enabled
  • is_complex
  • is_distributed
  • is_floating_point
  • is_grad_enabled
  • is_nonzero
  • is_same_size
  • is_signed
  • is_storage
  • is_tensor
  • isclose
  • isfinite
  • isinf
  • isnan
  • jit
  • kl_div
  • kthvalue
  • layer_norm
  • layout
  • le
  • lerp
  • lgamma
  • linspace
  • load
  • log
  • log10
  • log10_
  • log1p
  • log1p_
  • log2
  • log2_
  • log_
  • log_softmax
  • logdet
  • logspace
  • logsumexp
  • long
  • lstm
  • lstm_cell
  • lt
  • manual_seed
  • margin_ranking_loss
  • masked_select
  • matmul
  • matrix_power
  • matrix_rank
  • max
  • max_pool1d_with_indices
  • mean
  • median
  • merge_type_from_type_comment
  • meshgrid
  • min
  • miopen_batch_norm
  • miopen_convolution
  • miopen_convolution_transpose
  • mkldnn_convolution
  • mkldnn_convolution_backward_weights
  • mm
  • mode
  • mul
  • multinomial
  • multiprocessing
  • mv
  • mvlgamma
  • name
  • narrow
  • native_batch_norm
  • native_clone
  • native_norm
  • native_pow
  • native_resize_as_
  • native_zero_
  • ne
  • neg
  • nn
  • no_grad
  • nonzero
  • norm
  • norm_except_dim
  • normal
  • nuclear_norm
  • numel
  • ones
  • ones_like
  • onnx
  • ops
  • optim
  • orgqr
  • ormqr
  • os
  • p
  • pairwise_distance
  • parse_type_comment
  • pdist
  • pin_memory
  • pinverse
  • pixel_shuffle
  • platform
  • poisson
  • polygamma
  • potrf
  • potri
  • potrs
  • pow
  • prelu
  • prepare_multiprocessing_environment
  • prod
  • pstrf
  • py_dll_path
  • qr
  • rand
  • rand_like
  • randint
  • randint_like
  • randn
  • randn_like
  • random
  • randperm
  • range
  • reciprocal
  • register_batch_operator
  • relu
  • relu_
  • remainder
  • renorm
  • reshape
  • resize_as_
  • rfft
  • rnn_relu
  • rnn_relu_cell
  • rnn_tanh
  • rnn_tanh_cell
  • roll
  • rot90
  • round
  • round_
  • rrelu
  • rrelu_
  • rsqrt
  • rsqrt_
  • rsub
  • s_copy_
  • s_native_addmm
  • s_native_addmm_
  • saddmm
  • save
  • select
  • selu
  • selu_
  • serialization
  • set_anomaly_enabled
  • set_default_dtype
  • set_default_tensor_type
  • set_flush_denormal
  • set_grad_enabled
  • set_num_threads
  • set_printoptions
  • set_rng_state
  • short
  • sigmoid
  • sigmoid_
  • sign
  • sin
  • sin_
  • sinh
  • sinh_
  • slogdet
  • smm
  • softmax
  • sort
  • sparse
  • sparse_coo
  • sparse_coo_tensor
  • split
  • split_with_sizes
  • spmm
  • sqrt
  • sqrt_
  • squeeze
  • sspaddmm
  • stack
  • std
  • stft
  • storage
  • strided
  • sub
  • sum
  • svd
  • symeig
  • sys
  • t
  • take
  • tan
  • tan_
  • tanh
  • tanh_
  • tensor
  • tensordot
  • testing
  • th_dll_path
  • threshold
  • threshold_
  • to_batch_graph
  • topk
  • torch
  • trace
  • transpose
  • tril
  • triplet_margin_loss
  • triu
  • trtrs
  • trunc
  • trunc_
  • typename
  • uint8
  • unbind
  • unique
  • unsqueeze
  • utils
  • var
  • version
  • wait
  • where
  • zero_
  • zeros
  • zeros_like
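The listing above can be reproduced on any installation, for example with a minimal sketch like this (the names printed depend on the PyTorch version in use):

>>> import torch
>>> for name in dir(torch):   # dir() returns every attribute of the torch module
...     print(name)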
