The "GEE! It s Simple" package illustrates Gaussian elimination with partial pivoting, which produces a factorization of P*A into the product L*U where P is a permutation matrix, and L and U are lower and upper triangular, respectively.
The functions in this package are accurate, but they are far s ...
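Below is a minimal sketch (not the package's own code) of Gaussian elimination with partial pivoting. It returns a unit lower triangular L, an upper triangular U and a permutation vector p such that A(p,:) = L*U, which is the same factorization P*A = L*U written with a permutation vector instead of a permutation matrix; the function name lu_pp_sketch is illustrative.

    function [L, U, p] = lu_pp_sketch(A)
        % Gaussian elimination with partial pivoting on a square matrix A.
        n = size(A, 1);
        p = (1:n)';                           % row permutation kept as a vector
        for k = 1:n-1
            % Pivot: the largest-magnitude entry in column k, rows k..n.
            [~, m] = max(abs(A(k:n, k)));
            m = m + k - 1;
            if m ~= k                         % swap rows k and m
                A([k m], :) = A([m k], :);
                p([k m])    = p([m k]);
            end
            % Eliminate below the pivot; store multipliers in place of zeros.
            rows = k+1:n;
            A(rows, k)     = A(rows, k) / A(k, k);
            A(rows, k+1:n) = A(rows, k+1:n) - A(rows, k) * A(k, k+1:n);
        end
        L = tril(A, -1) + eye(n);             % unit lower triangular factor
        U = triu(A);                          % upper triangular factor
    end

For example, with A = rand(5) and [L,U,p] = lu_pp_sketch(A), norm(A(p,:) - L*U) should be on the order of machine precision.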
The "GEE! It s Simple" package illustrates Gaussian elimination with partial pivoting, which produces a factorization of P*A into the product L*U where P is a permutation matrix, and L and U are lower and upper triangular, respectively.
The functions in this package are accurate, but they are far s ...
This module provides an interface to an alphanumeric display module.
The current version of this driver supports any alphanumeric LCD module based on the Hitachi HD44780 dot-matrix LCD controller.
This book systematically introduces methods and techniques for mixed-language programming with MATLAB 7.0. It is organized into 13 chapters. Chapters 1 and 2 cover MATLAB fundamentals; Chapter 3 gives a brief overview of MATLAB mixed-language programming; Chapters 4 through 9 each present a typical mixed-programming approach, including C-MEX, the MATLAB engine, MAT data-file sharing, Mideva, Matrix, and Add-in. Chapters 10 and 11 cover mixed programming of MATLAB with Delphi and Excel. Chapter 12 introduces MATLAB C ...
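As a quick illustration of the C-MEX route mentioned in that chapter list, the lines below show how a C source file is typically compiled into a MEX-file and then called like an ordinary MATLAB function; the file name timestwo.c is only illustrative (a classic example distributed with MATLAB), not material from the book.

    mex -setup          % select a C compiler (one-time configuration)
    mex timestwo.c      % compile the C gateway source into a MEX binary
    y = timestwo(4)     % the MEX-file is now callable like any function; y is 8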
This toolbox was designed as a teaching aid, a task for which MATLAB is
particularly well suited since its source code is relatively legible and
simple to modify. The toolbox is still reasonably fast when used with the
supplied optimiser; however, if you really want to speed things up you
should consider compiling the ...
PRINCIPLE: The UVE algorithm detects and eliminates from a PLS model (with 1 to A components) those variables that do not carry any relevant information for modelling Y. The criterion used to trace the uninformative variables is the reliability of the regression coefficients: c_j=mean(b_j)/std ...
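The sketch below (an assumption, not the package's own code) computes that reliability criterion for a matrix B of regression-coefficient estimates, where B(i,j) is the coefficient of variable j obtained from the i-th leave-one-out PLS sub-model; the variable names and random placeholder data are purely illustrative.

    % Reliability of each regression coefficient: the mean over the
    % sub-models divided by the standard deviation over the sub-models.
    nModels = 20;                      % number of leave-one-out sub-models
    nVars   = 5;                       % number of candidate variables
    B = randn(nModels, nVars);         % placeholder coefficient estimates
    c = mean(B, 1) ./ std(B, 0, 1);    % large |c_j| = stable, informative

    % Variables whose |c_j| stays below the level reached by artificial
    % noise variables would be flagged as uninformative and removed.
    disp(abs(c))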
Batch version of the back-propagation algorithm.
% Given a set of corresponding input-output pairs and an initial network,
% [W1,W2,critvec,iter]=batbp(NetDef,W1,W2,PHI,Y,trparms) trains the
% network with backpropagation.
%
% The activation functions must be either linear or tanh. The network
...
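The lines below are a minimal sketch (an assumption, not the toolbox's batbp code) of one batch back-propagation pass for a two-layer network with tanh hidden units and linear outputs; the W1/W2/PHI/Y names follow the help text above, while the toy data and step size are illustrative.

    PHI = randn(2, 50);                         % inputs: one pattern per column
    Y   = sum(PHI, 1) .^ 2;                     % toy targets
    nh  = 4;                                    % number of hidden units
    W1  = 0.1 * randn(nh, size(PHI,1) + 1);     % hidden weights (+ bias column)
    W2  = 0.1 * randn(size(Y,1), nh + 1);       % output weights (+ bias column)
    eta = 0.01;                                 % learning rate
    N   = size(PHI, 2);

    for iter = 1:500
        % Forward pass over the whole batch.
        h    = tanh(W1 * [PHI; ones(1, N)]);    % hidden-layer activations
        yhat = W2 * [h; ones(1, N)];            % linear output layer
        E    = Y - yhat;                        % prediction errors

        % Backward pass: deltas for the summed squared error criterion.
        d2 = E;                                 % output deltas (linear units)
        d1 = (W2(:, 1:nh)' * d2) .* (1 - h.^2); % hidden deltas (tanh units)

        % Batch update: gradients accumulated over all patterns at once.
        W2 = W2 + eta * d2 * [h; ones(1, N)]' / N;
        W1 = W1 + eta * d1 * [PHI; ones(1, N)]' / N;
    end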
% Train a two-layer neural network with the Levenberg-Marquardt
% method.
%
% If desired, it is possible to use regularization by
% weight decay. Also pruned (i.e., not fully connected) networks can
% be trained.
%
% Given a set of corresponding input-output pairs and an initial
% network,
% ...
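A minimal sketch of one Levenberg-Marquardt step with weight-decay regularization is given below; it operates on a generic flattened weight vector and is an assumption for illustration, not the toolbox's own routine. Here J denotes the Jacobian of the network output with respect to the weights, E the residuals, alpha the weight-decay factor and lambda the damping parameter; all values are placeholders.

    theta  = randn(6, 1);                 % current network weights (flattened)
    E      = randn(20, 1);                % residuals Y - Yhat (placeholder)
    J      = randn(20, 6);                % Jacobian dYhat/dtheta (placeholder)
    alpha  = 1e-3;                        % weight-decay coefficient
    lambda = 0.1;                         % Levenberg-Marquardt damping term

    % Regularized criterion: 0.5*(E'*E) + 0.5*alpha*(theta'*theta).
    g = J' * E - alpha * theta;                  % negative gradient
    H = J' * J + alpha * eye(numel(theta));      % Gauss-Newton Hessian approx.

    % Damped step: lambda is increased when a step fails to reduce the
    % criterion and decreased when it succeeds.
    dtheta = (H + lambda * eye(numel(theta))) \ g;
    theta  = theta + dtheta;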
Train a two-layer neural network with a recursive prediction error
% algorithm ("recursive Gauss-Newton"). Also pruned (i.e., not fully
% connected) networks can be trained.
%
% The activation functions can either be linear or tanh. The network
% architecture is defined by the matrix NetDef, w ...
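For orientation, here is a minimal sketch of a recursive Gauss-Newton (recursive prediction-error) update with a forgetting factor, written for a generic parameter vector; it is an assumption for illustration only, not the toolbox's own routine, and in the neural-network case psi would be the gradient of the network output with respect to the weights.

    n     = 4;                            % number of parameters
    theta = zeros(n, 1);                  % parameter estimates
    P     = 100 * eye(n);                 % covariance (large = low confidence)
    lam   = 0.995;                        % forgetting factor

    for t = 1:200
        psi  = randn(n, 1);               % prediction gradient (placeholder)
        y    = psi' * ones(n, 1) + 0.01 * randn;   % toy measurement
        yhat = psi' * theta;              % one-step-ahead prediction
        e    = y - yhat;                  % prediction error

        % Gain, covariance and parameter updates (recursive Gauss-Newton).
        K     = P * psi / (lam + psi' * P * psi);
        P     = (P - K * (psi' * P)) / lam;
        theta = theta + K * e;
    end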