We describe and demonstrate an algorithm that takes as input an
unorganized set of points {x1, ..., xn} ⊂ IR3 on or near an unknown
manifold M, and produces as output a simplicial surface that
approximates M. Neither the topology, the presence of boundaries,
nor the geometry ...
Problem A: Placing Apples
Time Limit: 1000MS  Memory Limit: 65536K
Total Submissions: 1094  Accepted: 441
Language: not limited
Description
Put M identical apples onto N identical plates; some plates may be left empty. How many different ways are there? (Denote the count by K.) 5,1,1 and 1,5,1 count as the same way.
Input
The first line contains the number of test cases t (0 <= t <= 20) ...
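The task is equivalent to counting the partitions of M into at most N parts. Below is a minimal Python sketch of the standard recurrence; the function and variable names are illustrative and are not part of the judge's material.

def count_ways(m, n):
    # Ways to place m identical apples onto n identical plates (plates may be empty).
    # Recurrence: f(m, n) = f(m, n-1)   # at least one plate stays empty
    #                     + f(m-n, n)   # every plate gets >= 1 apple
    if m == 0 or n == 1:
        return 1
    if n == 0:
        return 0
    if m < n:
        return count_ways(m, m)   # at most m plates can actually hold apples
    return count_ways(m, n - 1) + count_ways(m - n, n)

# Example: count_ways(7, 3) == 8 (the eight partitions of 7 into at most 3 parts).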
// Chebyshev outlier detection
// This function is used to detect abnormal values within a set of data.
// Input:
// delta: a set of data
// flag: indicates which data points are already known to be outliers
// p: restriction level
// Output:
// double[] door : thresholds beyond which a data point may be considered an outlier ...
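The header above suggests a threshold ("door") based on Chebyshev's inequality: at most a fraction p of any distribution lies more than 1/sqrt(p) standard deviations from the mean. A rough Python sketch under that reading; the function name, the use of flag as a boolean mask, and the symmetric two-sided threshold are assumptions, not the original implementation.

import math

def chebyshev_outlier_bounds(delta, flag, p):
    # Thresholds beyond which a value in delta may be treated as an outlier.
    # Points already marked in flag are excluded from the statistics.
    clean = [x for x, is_out in zip(delta, flag) if not is_out]
    mean = sum(clean) / len(clean)
    var = sum((x - mean) ** 2 for x in clean) / len(clean)
    k = 1.0 / math.sqrt(p)              # Chebyshev: P(|x - mean| >= k*sigma) <= 1/k^2 = p
    width = k * math.sqrt(var)
    return mean - width, mean + width   # the "door": values outside are suspect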
Flex chip implementation
File: UP2FLEX
JTAG jumper settings: down, down, up, up
Input:
Reset - FLEX_PB1
Input n - FLEX_SW switches 1 to 8
Output:
Countdown - two 7-segment LEDs.
Done light - decimal point on Digit1.
Operation:
Set up the binary input number n.
Press the Reset switch.
See the count ...
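As a purely behavioral illustration of the operation listed above (a hypothetical Python model, not the FLEX/UP2 hardware design itself): load n from the eight FLEX_SW switches on reset, decrement once per clock tick, and raise the done flag, modelling the decimal point on Digit1, when the count reaches zero.

def countdown_model(n):
    # Behavioral sketch: yields (count shown on the two 7-segment digits, done flag).
    count = n & 0xFF                  # n is set on the eight input switches
    while True:
        done = (count == 0)           # done light once the countdown finishes
        yield count, done
        if done:
            break
        count -= 1

# Example: list(countdown_model(3)) -> [(3, False), (2, False), (1, False), (0, True)]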
% k-step ahead predictions determined by simulation of the
% one-step ahead neural network predictor. For NNARMAX
% models the residuals are set to zero when calculating the
% predictions. The predictions are compared to the observed output.
%
% Train a two layer neural network with the Levenberg-Marquardt
% method.
%
% If desired, it is possible to use regularization by
% weight decay. Also pruned (i.e., not fully connected) networks can
% be trained.
%
% Given a set of corresponding input-output pairs and an initial
% network,
% ...
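A minimal sketch of the k-step-ahead scheme described above, assuming a generic one-step-ahead predictor one_step(regressors); the names, regressor layout, and signature are illustrative rather than the toolbox's actual API. For an NNARMAX-style model, the residual regressors would simply be passed in as zeros, as stated in the help text.

import numpy as np

def k_step_prediction(one_step, y_obs, u, k, na, nb):
    # k-step-ahead predictions obtained by iterating a one-step-ahead predictor
    # and feeding its own outputs back in place of future observations.
    N = len(y_obs)
    preds = np.full(N, np.nan)
    for t in range(max(na, nb), N - k + 1):
        past_y = list(y_obs[t - na:t])            # start from observed outputs
        for j in range(k):
            reg = np.r_[past_y[-na:], u[t + j - nb:t + j]]
            past_y.append(one_step(reg))          # recycle the prediction
        preds[t + k - 1] = past_y[-1]             # prediction of y(t + k - 1)
    return preds                                  # compare against y_obs where defined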
Train a two layer neural network with a recursive prediction error
% algorithm ("recursive Gauss-Newton"). Also pruned (i.e., not fully
% connected) networks can be trained.
%
% The activation functions can either be linear or tanh. The network
% architecture is defined by the matrix NetDef, w ...
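For reference, one recursive prediction error ("recursive Gauss-Newton") parameter update has a familiar RLS-like form. The sketch below is a generic version with a forgetting factor lam, not the toolbox's implementation; psi denotes the gradient of the network output with respect to the weights, and err is the current prediction error.

import numpy as np

def rpe_update(theta, P, psi, err, lam=0.995):
    # One recursive Gauss-Newton step: gain, parameter update, covariance update.
    psi = psi.reshape(-1, 1)                    # gradient d y_hat / d theta
    K = P @ psi / (lam + psi.T @ P @ psi)       # gain vector
    theta = theta + K.flatten() * err           # err = y - y_hat at this sample
    P = (P - K @ psi.T @ P) / lam               # covariance / inverse Hessian estimate
    return theta, P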