Kaldi TIMIT Example, Part 3

Posted by WELEN

This post, the third in a series on the Kaldi TIMIT recipe, walks through the console log of the MMI + SGMM2 training/decoding stage and the DNN hybrid stages (nnet2 and Karel's nnet1).

============================================================================
                    MMI + SGMM2 Training & Decoding                       
============================================================================
steps/align_sgmm2.sh --nj 30 --cmd run.pl --mem 4G --transform-dir exp/tri3_ali --use-graphs true --use-gselect true data/train data/lang exp/sgmm2_4 exp/sgmm2_4_ali
steps/align_sgmm2.sh: feature type is lda
steps/align_sgmm2.sh: using transforms from exp/tri3_ali
steps/align_sgmm2.sh: aligning data in data/train using model exp/sgmm2_4/final.alimdl
steps/align_sgmm2.sh: computing speaker vectors (1st pass)
steps/align_sgmm2.sh: computing speaker vectors (2nd pass)
steps/align_sgmm2.sh: doing final alignment.
steps/align_sgmm2.sh: done aligning data.
steps/diagnostic/analyze_alignments.sh --cmd run.pl --mem 4G data/lang exp/sgmm2_4_ali
steps/diagnostic/analyze_alignments.sh: see stats in exp/sgmm2_4_ali/log/analyze_alignments.log
steps/make_denlats_sgmm2.sh --nj 30 --sub-split 30 --acwt 0.2 --lattice-beam 10.0 --beam 18.0 --cmd run.pl --mem 4G --transform-dir exp/tri3_ali data/train data/lang exp/sgmm2_4_ali exp/sgmm2_4_denlats
steps/make_denlats_sgmm2.sh: Making unigram grammar FST in exp/sgmm2_4_denlats/lang
steps/make_denlats_sgmm2.sh: Compiling decoding graph in exp/sgmm2_4_denlats/dengraph
tree-info exp/sgmm2_4_ali/tree 
tree-info exp/sgmm2_4_ali/tree 
fsttablecompose exp/sgmm2_4_denlats/lang/L_disambig.fst exp/sgmm2_4_denlats/lang/G.fst 
fstdeterminizestar --use-log=true 
fstminimizeencoded 
fstpushspecial 
fstisstochastic exp/sgmm2_4_denlats/lang/tmp/LG.fst 
1.27271e-05 1.27271e-05
fstcomposecontext --context-size=3 --central-position=1 --read-disambig-syms=exp/sgmm2_4_denlats/lang/phones/disambig.int --write-disambig-syms=exp/sgmm2_4_denlats/lang/tmp/disambig_ilabels_3_1.int exp/sgmm2_4_denlats/lang/tmp/ilabels_3_1.7854 
fstisstochastic exp/sgmm2_4_denlats/lang/tmp/CLG_3_1.fst 
1.27657e-05 0
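The pair of numbers printed after each fstisstochastic call is the minimum and maximum per-state deviation from stochasticity (arc probabilities summing to one, in the log semiring); values this close to zero mean the graph is stochastic to numerical precision. A sketch of the kind of tolerance check involved (the 0.01 delta is fstisstochastic's default, assumed here):

```shell
# Interpret fstisstochastic's two numbers: min and max per-state deviation
# from sum-to-one in the log semiring. Returns success if both fall within
# the tolerance (0.01, mirroring the tool's default delta -- an assumption).
is_stochastic() {
  echo "$1" | awk '{ d = 0.01; exit !($1 > -d && $1 < d && $2 > -d && $2 < d) }'
}
is_stochastic "1.27657e-05 0" && echo "stochastic enough"
```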
make-h-transducer --disambig-syms-out=exp/sgmm2_4_denlats/dengraph/disambig_tid.int --transition-scale=1.0 exp/sgmm2_4_denlats/lang/tmp/ilabels_3_1 exp/sgmm2_4_ali/tree exp/sgmm2_4_ali/final.mdl 
fstdeterminizestar --use-log=true 
fstrmsymbols exp/sgmm2_4_denlats/dengraph/disambig_tid.int 
fsttablecompose exp/sgmm2_4_denlats/dengraph/Ha.fst exp/sgmm2_4_denlats/lang/tmp/CLG_3_1.fst 
fstrmepslocal 
fstminimizeencoded 
fstisstochastic exp/sgmm2_4_denlats/dengraph/HCLGa.fst 
0.000484977 -0.000485819
add-self-loops --self-loop-scale=0.1 --reorder=true exp/sgmm2_4_ali/final.mdl 
steps/make_denlats_sgmm2.sh: feature type is lda
steps/make_denlats_sgmm2.sh: using fMLLR transforms from exp/tri3_ali
steps/make_denlats_sgmm2.sh: Merging archives for data subset 1
steps/make_denlats_sgmm2.sh: Merging archives for data subset 2
steps/make_denlats_sgmm2.sh: Merging archives for data subset 3
steps/make_denlats_sgmm2.sh: Merging archives for data subset 4
steps/make_denlats_sgmm2.sh: Merging archives for data subset 5
steps/make_denlats_sgmm2.sh: Merging archives for data subset 6
steps/make_denlats_sgmm2.sh: Merging archives for data subset 7
steps/make_denlats_sgmm2.sh: Merging archives for data subset 8
steps/make_denlats_sgmm2.sh: Merging archives for data subset 9
steps/make_denlats_sgmm2.sh: Merging archives for data subset 10
steps/make_denlats_sgmm2.sh: Merging archives for data subset 11
steps/make_denlats_sgmm2.sh: Merging archives for data subset 12
steps/make_denlats_sgmm2.sh: Merging archives for data subset 13
steps/make_denlats_sgmm2.sh: Merging archives for data subset 14
steps/make_denlats_sgmm2.sh: Merging archives for data subset 15
steps/make_denlats_sgmm2.sh: Merging archives for data subset 16
steps/make_denlats_sgmm2.sh: Merging archives for data subset 17
steps/make_denlats_sgmm2.sh: Merging archives for data subset 18
steps/make_denlats_sgmm2.sh: Merging archives for data subset 19
steps/make_denlats_sgmm2.sh: Merging archives for data subset 20
steps/make_denlats_sgmm2.sh: Merging archives for data subset 21
steps/make_denlats_sgmm2.sh: Merging archives for data subset 22
steps/make_denlats_sgmm2.sh: Merging archives for data subset 23
steps/make_denlats_sgmm2.sh: Merging archives for data subset 24
steps/make_denlats_sgmm2.sh: Merging archives for data subset 25
steps/make_denlats_sgmm2.sh: Merging archives for data subset 26
steps/make_denlats_sgmm2.sh: Merging archives for data subset 27
steps/make_denlats_sgmm2.sh: Merging archives for data subset 28
steps/make_denlats_sgmm2.sh: Merging archives for data subset 29
steps/make_denlats_sgmm2.sh: Merging archives for data subset 30
steps/make_denlats_sgmm2.sh: done generating denominator lattices with SGMMs.
steps/train_mmi_sgmm2.sh --acwt 0.2 --cmd run.pl --mem 4G --transform-dir exp/tri3_ali --boost 0.1 --drop-frames true data/train data/lang exp/sgmm2_4_ali exp/sgmm2_4_denlats exp/sgmm2_4_mmi_b0.1
steps/train_mmi_sgmm2.sh: feature type is lda
steps/train_mmi_sgmm2.sh: using transforms from exp/tri3_ali
steps/train_mmi_sgmm2.sh: using speaker vectors from exp/sgmm2_4_ali
steps/train_mmi_sgmm2.sh: using Gaussian-selection info from exp/sgmm2_4_ali
Iteration 0 of MMI training
Iteration 0: objf was 0.501479915681143, MMI auxf change was 0.0162951148758516
Iteration 1 of MMI training
Iteration 1: objf was 0.516025726621521, MMI auxf change was 0.00230327793795113
Iteration 2 of MMI training
Iteration 2: objf was 0.518545043633159, MMI auxf change was 0.000600702510528323
Iteration 3 of MMI training
Iteration 3: objf was 0.51940583564136, MMI auxf change was 0.000356484531344043
MMI training finished
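Boosted MMI should increase the objective every iteration, and the diminishing auxiliary-function changes above are the usual sign of convergence. A quick sanity check over the four logged objective values:

```shell
# The objective values copied (rounded) from the log above should be
# non-decreasing across iterations 0..3.
printf '%s\n' 0.501480 0.516026 0.518545 0.519406 \
  | awk 'NR > 1 && $1 < prev { ok = 1 } { prev = $1 } END { print (ok ? "objf decreased" : "objf non-decreasing") }'
```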
steps/decode_sgmm2_rescore.sh --cmd run.pl --mem 4G --iter 1 --transform-dir exp/tri3/decode_dev data/lang_test_bg data/dev exp/sgmm2_4/decode_dev exp/sgmm2_4_mmi_b0.1/decode_dev_it1
steps/decode_sgmm2_rescore.sh: using speaker vectors from exp/sgmm2_4/decode_dev
steps/decode_sgmm2_rescore.sh: feature type is lda
steps/decode_sgmm2_rescore.sh: using transforms from exp/tri3/decode_dev
steps/decode_sgmm2_rescore.sh: rescoring lattices with SGMM model in exp/sgmm2_4_mmi_b0.1/1.mdl
steps/decode_sgmm2_rescore.sh --cmd run.pl --mem 4G --iter 1 --transform-dir exp/tri3/decode_test data/lang_test_bg data/test exp/sgmm2_4/decode_test exp/sgmm2_4_mmi_b0.1/decode_test_it1
steps/decode_sgmm2_rescore.sh: using speaker vectors from exp/sgmm2_4/decode_test
steps/decode_sgmm2_rescore.sh: feature type is lda
steps/decode_sgmm2_rescore.sh: using transforms from exp/tri3/decode_test
steps/decode_sgmm2_rescore.sh: rescoring lattices with SGMM model in exp/sgmm2_4_mmi_b0.1/1.mdl
steps/decode_sgmm2_rescore.sh --cmd run.pl --mem 4G --iter 2 --transform-dir exp/tri3/decode_dev data/lang_test_bg data/dev exp/sgmm2_4/decode_dev exp/sgmm2_4_mmi_b0.1/decode_dev_it2
steps/decode_sgmm2_rescore.sh: using speaker vectors from exp/sgmm2_4/decode_dev
steps/decode_sgmm2_rescore.sh: feature type is lda
steps/decode_sgmm2_rescore.sh: using transforms from exp/tri3/decode_dev
steps/decode_sgmm2_rescore.sh: rescoring lattices with SGMM model in exp/sgmm2_4_mmi_b0.1/2.mdl
steps/decode_sgmm2_rescore.sh --cmd run.pl --mem 4G --iter 2 --transform-dir exp/tri3/decode_test data/lang_test_bg data/test exp/sgmm2_4/decode_test exp/sgmm2_4_mmi_b0.1/decode_test_it2
steps/decode_sgmm2_rescore.sh: using speaker vectors from exp/sgmm2_4/decode_test
steps/decode_sgmm2_rescore.sh: feature type is lda
steps/decode_sgmm2_rescore.sh: using transforms from exp/tri3/decode_test
steps/decode_sgmm2_rescore.sh: rescoring lattices with SGMM model in exp/sgmm2_4_mmi_b0.1/2.mdl
steps/decode_sgmm2_rescore.sh --cmd run.pl --mem 4G --iter 3 --transform-dir exp/tri3/decode_dev data/lang_test_bg data/dev exp/sgmm2_4/decode_dev exp/sgmm2_4_mmi_b0.1/decode_dev_it3
steps/decode_sgmm2_rescore.sh: using speaker vectors from exp/sgmm2_4/decode_dev
steps/decode_sgmm2_rescore.sh: feature type is lda
steps/decode_sgmm2_rescore.sh: using transforms from exp/tri3/decode_dev
steps/decode_sgmm2_rescore.sh: rescoring lattices with SGMM model in exp/sgmm2_4_mmi_b0.1/3.mdl
steps/decode_sgmm2_rescore.sh --cmd run.pl --mem 4G --iter 3 --transform-dir exp/tri3/decode_test data/lang_test_bg data/test exp/sgmm2_4/decode_test exp/sgmm2_4_mmi_b0.1/decode_test_it3
steps/decode_sgmm2_rescore.sh: using speaker vectors from exp/sgmm2_4/decode_test
steps/decode_sgmm2_rescore.sh: feature type is lda
steps/decode_sgmm2_rescore.sh: using transforms from exp/tri3/decode_test
steps/decode_sgmm2_rescore.sh: rescoring lattices with SGMM model in exp/sgmm2_4_mmi_b0.1/3.mdl
steps/decode_sgmm2_rescore.sh --cmd run.pl --mem 4G --iter 4 --transform-dir exp/tri3/decode_dev data/lang_test_bg data/dev exp/sgmm2_4/decode_dev exp/sgmm2_4_mmi_b0.1/decode_dev_it4
steps/decode_sgmm2_rescore.sh: using speaker vectors from exp/sgmm2_4/decode_dev
steps/decode_sgmm2_rescore.sh: feature type is lda
steps/decode_sgmm2_rescore.sh: using transforms from exp/tri3/decode_dev
steps/decode_sgmm2_rescore.sh: rescoring lattices with SGMM model in exp/sgmm2_4_mmi_b0.1/4.mdl
steps/decode_sgmm2_rescore.sh --cmd run.pl --mem 4G --iter 4 --transform-dir exp/tri3/decode_test data/lang_test_bg data/test exp/sgmm2_4/decode_test exp/sgmm2_4_mmi_b0.1/decode_test_it4
steps/decode_sgmm2_rescore.sh: using speaker vectors from exp/sgmm2_4/decode_test
steps/decode_sgmm2_rescore.sh: feature type is lda
steps/decode_sgmm2_rescore.sh: using transforms from exp/tri3/decode_test
steps/decode_sgmm2_rescore.sh: rescoring lattices with SGMM model in exp/sgmm2_4_mmi_b0.1/4.mdl
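The eight rescoring runs above follow one pattern: each MMI iteration's model (1.mdl through 4.mdl) rescores the first-pass SGMM2 lattices for both dev and test. A dry-run sketch of the loop that likely produced them (echoing rather than executing, since the real commands need a Kaldi installation):

```shell
# Dry-run reconstruction of the rescoring loop (layout assumed from the log).
rescore_cmds() {
  for iter in 1 2 3 4; do
    for set in dev test; do
      echo "steps/decode_sgmm2_rescore.sh --cmd 'run.pl --mem 4G' --iter $iter" \
           "--transform-dir exp/tri3/decode_$set data/lang_test_bg data/$set" \
           "exp/sgmm2_4/decode_$set exp/sgmm2_4_mmi_b0.1/decode_${set}_it$iter"
    done
  done
}
rescore_cmds   # prints the eight command lines in the order seen above
```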
============================================================================
                    DNN Hybrid Training & Decoding                        
============================================================================
steps/nnet2/train_tanh.sh --mix-up 5000 --initial-learning-rate 0.015 --final-learning-rate 0.002 --num-hidden-layers 2 --num-jobs-nnet 30 --cmd run.pl --mem 4G data/train data/lang exp/tri3_ali exp/tri4_nnet
steps/nnet2/train_tanh.sh: calling get_lda.sh
steps/nnet2/get_lda.sh --transform-dir exp/tri3_ali --splice-width 4 --cmd run.pl --mem 4G data/train data/lang exp/tri3_ali exp/tri4_nnet
steps/nnet2/get_lda.sh: feature type is lda
steps/nnet2/get_lda.sh: using transforms from exp/tri3_ali
feat-to-dim ark,s,cs:utils/subset_scp.pl --quiet 333 data/train/split30/1/feats.scp | apply-cmvn  --utt2spk=ark:data/train/split30/1/utt2spk scp:data/train/split30/1/cmvn.scp scp:- ark:- | splice-feats --left-context=3 --right-context=3 ark:- ark:- | transform-feats exp/tri4_nnet/final.mat ark:- ark:- | transform-feats --utt2spk=ark:data/train/split30/1/utt2spk ark:exp/tri3_ali/trans.1 ark:- ark:- | - 
transform-feats --utt2spk=ark:data/train/split30/1/utt2spk ark:exp/tri3_ali/trans.1 ark:- ark:- 
apply-cmvn --utt2spk=ark:data/train/split30/1/utt2spk scp:data/train/split30/1/cmvn.scp scp:- ark:- 
splice-feats --left-context=3 --right-context=3 ark:- ark:- 
transform-feats exp/tri4_nnet/final.mat ark:- ark:- 
WARNING (feat-to-dim[5.2.124~1396-70748]:Close():kaldi-io.cc:501) Pipe utils/subset_scp.pl --quiet 333 data/train/split30/1/feats.scp | apply-cmvn  --utt2spk=ark:data/train/split30/1/utt2spk scp:data/train/split30/1/cmvn.scp scp:- ark:- | splice-feats --left-context=3 --right-context=3 ark:- ark:- | transform-feats exp/tri4_nnet/final.mat ark:- ark:- | transform-feats --utt2spk=ark:data/train/split30/1/utt2spk ark:exp/tri3_ali/trans.1 ark:- ark:- | had nonzero return status 36096
feat-to-dim ark,s,cs:utils/subset_scp.pl --quiet 333 data/train/split30/1/feats.scp | apply-cmvn  --utt2spk=ark:data/train/split30/1/utt2spk scp:data/train/split30/1/cmvn.scp scp:- ark:- | splice-feats --left-context=3 --right-context=3 ark:- ark:- | transform-feats exp/tri4_nnet/final.mat ark:- ark:- | transform-feats --utt2spk=ark:data/train/split30/1/utt2spk ark:exp/tri3_ali/trans.1 ark:- ark:- | splice-feats --left-context=4 --right-context=4 ark:- ark:- | - 
apply-cmvn --utt2spk=ark:data/train/split30/1/utt2spk scp:data/train/split30/1/cmvn.scp scp:- ark:- 
splice-feats --left-context=4 --right-context=4 ark:- ark:- 
transform-feats exp/tri4_nnet/final.mat ark:- ark:- 
splice-feats --left-context=3 --right-context=3 ark:- ark:- 
transform-feats --utt2spk=ark:data/train/split30/1/utt2spk ark:exp/tri3_ali/trans.1 ark:- ark:- 
WARNING (feat-to-dim[5.2.124~1396-70748]:Close():kaldi-io.cc:501) Pipe utils/subset_scp.pl --quiet 333 data/train/split30/1/feats.scp | apply-cmvn  --utt2spk=ark:data/train/split30/1/utt2spk scp:data/train/split30/1/cmvn.scp scp:- ark:- | splice-feats --left-context=3 --right-context=3 ark:- ark:- | transform-feats exp/tri4_nnet/final.mat ark:- ark:- | transform-feats --utt2spk=ark:data/train/split30/1/utt2spk ark:exp/tri3_ali/trans.1 ark:- ark:- | splice-feats --left-context=4 --right-context=4 ark:- ark:- | had nonzero return status 36096
steps/nnet2/get_lda.sh: Accumulating LDA statistics.
steps/nnet2/get_lda.sh: Finished estimating LDA
steps/nnet2/train_tanh.sh: calling get_egs.sh
steps/nnet2/get_egs.sh --transform-dir exp/tri3_ali --splice-width 4 --samples-per-iter 200000 --num-jobs-nnet 30 --stage 0 --cmd run.pl --mem 4G --io-opts --max-jobs-run 5 data/train data/lang exp/tri3_ali exp/tri4_nnet
steps/nnet2/get_egs.sh: feature type is lda
steps/nnet2/get_egs.sh: using transforms from exp/tri3_ali
steps/nnet2/get_egs.sh: working out number of frames of training data
utils/data/get_utt2dur.sh: segments file does not exist so getting durations from wave files
utils/data/get_utt2dur.sh: successfully obtained utterance lengths from sphere-file headers
utils/data/get_utt2dur.sh: computed data/train/utt2dur
feat-to-len scp:head -n 10 data/train/feats.scp| ark,t:- 
steps/nnet2/get_egs.sh: Every epoch, splitting the data up into 1 iterations,
steps/nnet2/get_egs.sh: giving samples-per-iteration of 37740 (you requested 200000).
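The 37740 figure is plain arithmetic: with 30 parallel jobs and roughly 1.13M training frames (inferred here as 37740 × 30, i.e. 3696 utterances at ~306 frames each), a single iteration per epoch already keeps each job under the requested 200000 samples. A sketch of the bookkeeping, with the frame count assumed:

```shell
# Sketch of get_egs.sh's samples-per-iteration bookkeeping.
num_frames=1132200      # assumed: inferred from the log as 37740 x 30 jobs
num_jobs_nnet=30
requested=200000
# iterations per epoch: ceil(num_frames / (jobs x requested)), at least 1
iters=$(( (num_frames + num_jobs_nnet*requested - 1) / (num_jobs_nnet*requested) ))
echo "$iters"                                      # 1
echo $(( num_frames / (num_jobs_nnet * iters) ))   # 37740, actual samples per iteration
```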
Getting validation and training subset examples.
steps/nnet2/get_egs.sh: extracting validation and training-subset alignments.
copy-int-vector ark:- ark,t:- 
LOG (copy-int-vector[5.2.124~1396-70748]:main():copy-int-vector.cc:83) Copied 3696 vectors of int32.
Getting subsets of validation examples for diagnostics and combination.
Creating training examples
Generating training examples on disk
steps/nnet2/get_egs.sh: rearranging examples into parts for different parallel jobs
steps/nnet2/get_egs.sh: Since iters-per-epoch == 1, just concatenating the data.
Shuffling the order of training examples
(in order to avoid stressing the disk, these wont all run at once).
steps/nnet2/get_egs.sh: Finished preparing training examples
steps/nnet2/train_tanh.sh: initializing neural net
Training transition probabilities and setting priors
steps/nnet2/train_tanh.sh: Will train for 15 + 5 epochs, equalling 
steps/nnet2/train_tanh.sh: 15 + 5 = 20 iterations, 
steps/nnet2/train_tanh.sh: (while reducing learning rate) + (with constant learning rate).
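The "15 + 5" schedule means the learning rate decays from --initial-learning-rate 0.015 to --final-learning-rate 0.002 over the first 15 epochs, then stays constant for 5 more. A sketch of the per-iteration rate, assuming exponential (geometric) interpolation; the exact formula inside train_tanh.sh may differ slightly:

```shell
# lrate <iter> : learning rate at a 0-based iteration, clamped to the
# constant-rate phase after iteration 15 (sketch, not the script's own code).
lrate() {
  awk -v i="$1" 'BEGIN {
    n = 15; a = 0.015; b = 0.002;
    if (i > n) i = n;
    printf "%.6f\n", a * exp((i / n) * log(b / a));
  }'
}
lrate 0    # 0.015000
lrate 15   # 0.002000
lrate 18   # 0.002000 (constant-rate phase)
```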
Training neural net (pass 0)
Training neural net (pass 1)
Training neural net (pass 2)
Training neural net (pass 3)
Training neural net (pass 4)
Training neural net (pass 5)
Training neural net (pass 6)
Training neural net (pass 7)
Training neural net (pass 8)
Training neural net (pass 9)
Training neural net (pass 10)
Training neural net (pass 11)
Training neural net (pass 12)
Mixing up from 1920 to 5000 components
Training neural net (pass 13)
Training neural net (pass 14)
Training neural net (pass 15)
Training neural net (pass 16)
Training neural net (pass 17)
Training neural net (pass 18)
Training neural net (pass 19)
Setting num_iters_final=5
Getting average posterior for purposes of adjusting the priors.
Re-adjusting priors based on computed posteriors
Done
Cleaning up data
steps/nnet2/remove_egs.sh: Finished deleting examples in exp/tri4_nnet/egs
Removing most of the models
steps/nnet2/decode.sh --cmd run.pl --mem 4G --nj 5 --num-threads 6 --transform-dir exp/tri3/decode_dev exp/tri3/graph data/dev exp/tri4_nnet/decode_dev
steps/nnet2/decode.sh: feature type is lda
steps/nnet2/decode.sh: using transforms from exp/tri3/decode_dev
steps/diagnostic/analyze_lats.sh --cmd run.pl --mem 4G --iter final exp/tri3/graph exp/tri4_nnet/decode_dev
steps/diagnostic/analyze_lats.sh: see stats in exp/tri4_nnet/decode_dev/log/analyze_alignments.log
Overall, lattice depth (10,50,90-percentile)=(7,33,166) and mean=74.3
steps/diagnostic/analyze_lats.sh: see stats in exp/tri4_nnet/decode_dev/log/analyze_lattice_depth_stats.log
score best paths
score confidence and timing with sclite
Decoding done.
steps/nnet2/decode.sh --cmd run.pl --mem 4G --nj 5 --num-threads 6 --transform-dir exp/tri3/decode_test exp/tri3/graph data/test exp/tri4_nnet/decode_test
steps/nnet2/decode.sh: feature type is lda
steps/nnet2/decode.sh: using transforms from exp/tri3/decode_test
steps/diagnostic/analyze_lats.sh --cmd run.pl --mem 4G --iter final exp/tri3/graph exp/tri4_nnet/decode_test
steps/diagnostic/analyze_lats.sh: see stats in exp/tri4_nnet/decode_test/log/analyze_alignments.log
Overall, lattice depth (10,50,90-percentile)=(7,37,190) and mean=89.1
steps/diagnostic/analyze_lats.sh: see stats in exp/tri4_nnet/decode_test/log/analyze_lattice_depth_stats.log
score best paths
score confidence and timing with sclite
Decoding done.
============================================================================
                    System Combination (DNN+SGMM)                         
============================================================================
============================================================================
                DNN Hybrid Training & Decoding (Karel's recipe)            
============================================================================
steps/nnet/make_fmllr_feats.sh --nj 10 --cmd run.pl --mem 4G --transform-dir exp/tri3/decode_test data-fmllr-tri3/test data/test exp/tri3 data-fmllr-tri3/test/log data-fmllr-tri3/test/data
steps/nnet/make_fmllr_feats.sh: feature type is lda_fmllr
utils/copy_data_dir.sh: copied data from data/test to data-fmllr-tri3/test
utils/validate_data_dir.sh: Successfully validated data-directory data-fmllr-tri3/test
steps/nnet/make_fmllr_feats.sh: Done!, type lda_fmllr, data/test --> data-fmllr-tri3/test, using : raw-trans None, gmm exp/tri3, trans exp/tri3/decode_test
steps/nnet/make_fmllr_feats.sh --nj 10 --cmd run.pl --mem 4G --transform-dir exp/tri3/decode_dev data-fmllr-tri3/dev data/dev exp/tri3 data-fmllr-tri3/dev/log data-fmllr-tri3/dev/data
steps/nnet/make_fmllr_feats.sh: feature type is lda_fmllr
utils/copy_data_dir.sh: copied data from data/dev to data-fmllr-tri3/dev
utils/validate_data_dir.sh: Successfully validated data-directory data-fmllr-tri3/dev
steps/nnet/make_fmllr_feats.sh: Done!, type lda_fmllr, data/dev --> data-fmllr-tri3/dev, using : raw-trans None, gmm exp/tri3, trans exp/tri3/decode_dev
steps/nnet/make_fmllr_feats.sh --nj 10 --cmd run.pl --mem 4G --transform-dir exp/tri3_ali data-fmllr-tri3/train data/train exp/tri3 data-fmllr-tri3/train/log data-fmllr-tri3/train/data
steps/nnet/make_fmllr_feats.sh: feature type is lda_fmllr
utils/copy_data_dir.sh: copied data from data/train to data-fmllr-tri3/train
utils/validate_data_dir.sh: Successfully validated data-directory data-fmllr-tri3/train
steps/nnet/make_fmllr_feats.sh: Done!, type lda_fmllr, data/train --> data-fmllr-tri3/train, using : raw-trans None, gmm exp/tri3, trans exp/tri3_ali
utils/subset_data_dir_tr_cv.sh data-fmllr-tri3/train data-fmllr-tri3/train_tr90 data-fmllr-tri3/train_cv10
/home/wenba/source_code/kaldi-trunk/egs/timit/s5/utils/subset_data_dir.sh: reducing #utt from 3696 to 3320
/home/wenba/source_code/kaldi-trunk/egs/timit/s5/utils/subset_data_dir.sh: reducing #utt from 3696 to 376
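The split is not exactly 90/10 of the utterances because utils/subset_data_dir_tr_cv.sh holds out whole speakers (10% of them by default), so the cross-validation speakers are unseen in training. With TIMIT's 462 training speakers at 8 utterances each, the logged counts fall out; the round-up here is an assumption that happens to match:

```shell
# Why 3696 splits into 3320 + 376: a per-speaker 90/10 split (sketch).
num_spk=462; utts_per_spk=8
cv_spk=$(( (num_spk + 9) / 10 ))               # ~10% of speakers, rounded up: 47
echo $(( cv_spk * utts_per_spk ))              # 376 cross-validation utterances
echo $(( (num_spk - cv_spk) * utts_per_spk ))  # 3320 training utterances
```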
# steps/nnet/pretrain_dbn.sh --hid-dim 1024 --rbm-iter 20 data-fmllr-tri3/train exp/dnn4_pretrain-dbn 
# Started at Fri Sep 15 16:21:28 CST 2017
#
steps/nnet/pretrain_dbn.sh --hid-dim 1024 --rbm-iter 20 data-fmllr-tri3/train exp/dnn4_pretrain-dbn
# INFO
steps/nnet/pretrain_dbn.sh : Pre-training Deep Belief Network as a stack of RBMs
     dir       : exp/dnn4_pretrain-dbn 
     Train-set : data-fmllr-tri3/train 3696

LOG ([5.2.124~1396-70748]:main():cuda-gpu-available.cc:49) 

### IS CUDA GPU AVAILABLE? 'welen-pc' ###
WARNING ([5.2.124~1396-70748]:SelectGpuId():cu-device.cc:182) Suggestion: use nvidia-smi -c 3 to set compute exclusive mode
LOG ([5.2.124~1396-70748]:SelectGpuIdAuto():cu-device.cc:300) Selecting from 1 GPUs
LOG ([5.2.124~1396-70748]:SelectGpuIdAuto():cu-device.cc:315) cudaSetDevice(0): GeForce GTX 980 Ti    free:5631M, used:444M, total:6076M, free/total:0.926827
LOG ([5.2.124~1396-70748]:SelectGpuIdAuto():cu-device.cc:364) Trying to select device: 0 (automatically), mem_ratio: 0.926827
LOG ([5.2.124~1396-70748]:SelectGpuIdAuto():cu-device.cc:383) Success selecting device 0 free mem ratio: 0.926827
LOG ([5.2.124~1396-70748]:FinalizeActiveGpu():cu-device.cc:225) The active GPU is [0]: GeForce GTX 980 Ti    free:5601M, used:474M, total:6076M, free/total:0.921889 version 5.2
### HURRAY, WE GOT A CUDA GPU FOR COMPUTATION!!! ##

### Testing CUDA setup with a small computation (setup = cuda-toolkit + gpu-driver + kaldi):
### Test OK!

# PREPARING FEATURES
copy-feats --compress=true scp:data-fmllr-tri3/train/feats.scp ark,scp:/tmp/kaldi.0Hlp/train.ark,exp/dnn4_pretrain-dbn/train_sorted.scp 
LOG (copy-feats[5.2.124~1396-70748]:main():copy-feats.cc:143) Copied 3696 feature matrices.
# 'apply-cmvn' not used,
feat-to-dim ark:copy-feats scp:exp/dnn4_pretrain-dbn/train.scp ark:- | - 
copy-feats scp:exp/dnn4_pretrain-dbn/train.scp ark:- 
WARNING (feat-to-dim[5.2.124~1396-70748]:Close():kaldi-io.cc:501) Pipe copy-feats scp:exp/dnn4_pretrain-dbn/train.scp ark:- | had nonzero return status 36096
# feature dim : 40 (input of 'feature_transform')
+ default feature_transform_proto with splice +/-5 frames
nnet-initialize --binary=false exp/dnn4_pretrain-dbn/splice5.proto exp/dnn4_pretrain-dbn/tr_splice5.nnet 
VLOG[1] (nnet-initialize[5.2.124~1396-70748]:Init():nnet-nnet.cc:314) <Splice> <InputDim> 40 <OutputDim> 440 <BuildVector> -5:5 </BuildVector>
LOG (nnet-initialize[5.2.124~1396-70748]:main():nnet-initialize.cc:63) Written initialized model to exp/dnn4_pretrain-dbn/tr_splice5.nnet
# compute normalization stats from 10k sentences
compute-cmvn-stats ark:- exp/dnn4_pretrain-dbn/cmvn-g.stats 
nnet-forward --print-args=true --use-gpu=yes exp/dnn4_pretrain-dbn/tr_splice5.nnet ark:copy-feats scp:exp/dnn4_pretrain-dbn/train.scp.10k ark:- | ark:- 
WARNING (nnet-forward[5.2.124~1396-70748]:SelectGpuId():cu-device.cc:182) Suggestion: use nvidia-smi -c 3 to set compute exclusive mode
LOG (nnet-forward[5.2.124~1396-70748]:SelectGpuIdAuto():cu-device.cc:300) Selecting from 1 GPUs
LOG (nnet-forward[5.2.124~1396-70748]:SelectGpuIdAuto():cu-device.cc:315) cudaSetDevice(0): GeForce GTX 980 Ti    free:5631M, used:444M, total:6076M, free/total:0.926827
LOG (nnet-forward[5.2.124~1396-70748]:SelectGpuIdAuto():cu-device.cc:364) Trying to select device: 0 (automatically), mem_ratio: 0.926827
LOG (nnet-forward[5.2.124~1396-70748]:SelectGpuIdAuto():cu-device.cc:383) Success selecting device 0 free mem ratio: 0.926827
LOG (nnet-forward[5.2.124~1396-70748]:FinalizeActiveGpu():cu-device.cc:225) The active GPU is [0]: GeForce GTX 980 Ti    free:5601M, used:474M, total:6076M, free/total:0.921889 version 5.2
copy-feats scp:exp/dnn4_pretrain-dbn/train.scp.10k ark:- 
LOG (copy-feats[5.2.124~1396-70748]:main():copy-feats.cc:143) Copied 3696 feature matrices.
LOG (nnet-forward[5.2.124~1396-70748]:main():nnet-forward.cc:192) Done 3696 files in 0.169012min, (fps 110921)
LOG (compute-cmvn-stats[5.2.124~1396-70748]:main():compute-cmvn-stats.cc:168) Wrote global CMVN stats to exp/dnn4_pretrain-dbn/cmvn-g.stats
LOG (compute-cmvn-stats[5.2.124~1396-70748]:main():compute-cmvn-stats.cc:171) Done accumulating CMVN stats for 3696 utterances; 0 had errors.
# + normalization of NN-input at 'exp/dnn4_pretrain-dbn/tr_splice5_cmvn-g.nnet'
LOG (nnet-concat[5.2.124~1396-70748]:main():nnet-concat.cc:53) Reading exp/dnn4_pretrain-dbn/tr_splice5.nnet
LOG (nnet-concat[5.2.124~1396-70748]:main():nnet-concat.cc:65) Concatenating cmvn-to-nnet exp/dnn4_pretrain-dbn/cmvn-g.stats -|
cmvn-to-nnet exp/dnn4_pretrain-dbn/cmvn-g.stats - 
LOG (cmvn-to-nnet[5.2.124~1396-70748]:main():cmvn-to-nnet.cc:114) Written cmvn in nnet1 model to: -
LOG (nnet-concat[5.2.124~1396-70748]:main():nnet-concat.cc:82) Written model to exp/dnn4_pretrain-dbn/tr_splice5_cmvn-g.nnet

### Showing the final 'feature_transform':
nnet-info exp/dnn4_pretrain-dbn/tr_splice5_cmvn-g.nnet 
LOG (nnet-info[5.2.124~1396-70748]:main():nnet-info.cc:57) Printed info about exp/dnn4_pretrain-dbn/tr_splice5_cmvn-g.nnet
num-components 3
input-dim 40
output-dim 440
number-of-parameters 0.00088 millions
component 1 : <Splice>, input-dim 40, output-dim 440, 
  frame_offsets [ -5 -4 -3 -2 -1 0 1 2 3 4 5 ]
component 2 : <AddShift>, input-dim 440, output-dim 440, 
  shift_data ( min -0.190971, max 0.0653268, mean -0.00386261, stddev 0.0387812, skewness -2.779, kurtosis 9.50268 ) , lr-coef 0
component 3 : <Rescale>, input-dim 440, output-dim 440, 
  scale_data ( min 0.318242, max 0.975994, mean 0.761088, stddev 0.156292, skewness -0.679222, kurtosis -0.139114 ) , lr-coef 0
###
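The 40 → 440 shape of this transform is just frame splicing: 40-dimensional fMLLR features with ±5 frames of context form an 11-frame window.

```shell
# Splice dimensionality: input dim times the window size (2*context + 1).
input_dim=40; context=5
echo $(( input_dim * (2 * context + 1) ))   # 440
```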

# PRE-TRAINING RBM LAYER 1
# initializing 'exp/dnn4_pretrain-dbn/1.rbm.init'
# pretraining 'exp/dnn4_pretrain-dbn/1.rbm' (input gauss, lrate 0.01, iters 40)
# converting RBM to exp/dnn4_pretrain-dbn/1.dbn
rbm-convert-to-nnet exp/dnn4_pretrain-dbn/1.rbm exp/dnn4_pretrain-dbn/1.dbn 
LOG (rbm-convert-to-nnet[5.2.124~1396-70748]:main():rbm-convert-to-nnet.cc:69) Written model to exp/dnn4_pretrain-dbn/1.dbn

# PRE-TRAINING RBM LAYER 2
# computing cmvn stats 'exp/dnn4_pretrain-dbn/2.cmvn' for RBM initialization
WARNING (nnet-forward[5.2.124~1396-70748]:SelectGpuId():cu-device.cc:182) Suggestion: use nvidia-smi -c 3 to set compute exclusive mode
LOG (nnet-forward[5.2.124~1396-70748]:SelectGpuIdAuto():cu-device.cc:300) Selecting from 1 GPUs
LOG (nnet-forward[5.2.124~1396-70748]:SelectGpuIdAuto():cu-device.cc:315) cudaSetDevice(0): GeForce GTX 980 Ti    free:5632M, used:443M, total:6076M, free/total:0.92697
LOG (nnet-forward[5.2.124~1396-70748]:SelectGpuIdAuto():cu-device.cc:364) Trying to select device: 0 (automatically), mem_ratio: 0.92697
LOG (nnet-forward[5.2.124~1396-70748]:SelectGpuIdAuto():cu-device.cc:383) Success selecting device 0 free mem ratio: 0.92697
LOG (nnet-forward[5.2.124~1396-70748]:FinalizeActiveGpu():cu-device.cc:225) The active GPU is [0]: GeForce GTX 980 Ti    free:5602M, used:473M, total:6076M, free/total:0.922033 version 5.2
nnet-concat exp/dnn4_pretrain-dbn/final.feature_transform exp/dnn4_pretrain-dbn/1.dbn - 
LOG (nnet-concat[5.2.124~1396-70748]:main():nnet-concat.cc:53) Reading exp/dnn4_pretrain-dbn/final.feature_transform
LOG (nnet-concat[5.2.124~1396-70748]:main():nnet-concat.cc:65) Concatenating exp/dnn4_pretrain-dbn/1.dbn
LOG (nnet-concat[5.2.124~1396-70748]:main():nnet-concat.cc:82) Written model to -
copy-feats scp:exp/dnn4_pretrain-dbn/train.scp.10k ark:- 
LOG (copy-feats[5.2.124~1396-70748]:main():copy-feats.cc:143) Copied 3696 feature matrices.
LOG (nnet-forward[5.2.124~1396-70748]:main():nnet-forward.cc:192) Done 3696 files in 0.359421min, (fps 52159)
LOG (compute-cmvn-stats[5.2.124~1396-70748]:main():compute-cmvn-stats.cc:168) Wrote global CMVN stats to standard output
LOG (compute-cmvn-stats[5.2.124~1396-70748]:main():compute-cmvn-stats.cc:171) Done accumulating CMVN stats for 3696 utterances; 0 had errors.
LOG (cmvn-to-nnet[5.2.124~1396-70748]:main():cmvn-to-nnet.cc:114) Written cmvn in nnet1 model to: exp/dnn4_pretrain-dbn/2.cmvn
# initializing 'exp/dnn4_pretrain-dbn/2.rbm.init'
# pretraining 'exp/dnn4_pretrain-dbn/2.rbm' (lrate 0.4, iters 20)
# appending RBM to exp/dnn4_pretrain-dbn/2.dbn
nnet-concat exp/dnn4_pretrain-dbn/1.dbn rbm-convert-to-nnet exp/dnn4_pretrain-dbn/2.rbm - | exp/dnn4_pretrain-dbn/2.dbn 
LOG (nnet-concat[5.2.124~1396-70748]:main():nnet-concat.cc:53) Reading exp/dnn4_pretrain-dbn/1.dbn
LOG (nnet-concat[5.2.124~1396-70748]:main():nnet-concat.cc:65) Concatenating rbm-convert-to-nnet exp/dnn4_pretrain-dbn/2.rbm - |
rbm-convert-to-nnet exp/dnn4_pretrain-dbn/2.rbm - 
LOG (rbm-convert-to-nnet[5.2.124~1396-70748]:main():rbm-convert-to-nnet.cc:69) Written model to -
LOG (nnet-concat[5.2.124~1396-70748]:main():nnet-concat.cc:82) Written model to exp/dnn4_pretrain-dbn/2.dbn

# PRE-TRAINING RBM LAYER 3
# computing cmvn stats 'exp/dnn4_pretrain-dbn/3.cmvn' for RBM initialization
WARNING (nnet-forward[5.2.124~1396-70748]:SelectGpuId():cu-device.cc:182) Suggestion: use nvidia-smi -c 3 to set compute exclusive mode
LOG (nnet-forward[5.2.124~1396-70748]:SelectGpuIdAuto():cu-device.cc:300) Selecting from 1 GPUs
LOG (nnet-forward[5.2.124~1396-70748]:SelectGpuIdAuto():cu-device.cc:315) cudaSetDevice(0): GeForce GTX 980 Ti    free:5632M, used:443M, total:6076M, free/total:0.92697
LOG (nnet-forward[5.2.124~1396-70748]:SelectGpuIdAuto():cu-device.cc:364) Trying to select device: 0 (automatically), mem_ratio: 0.92697
LOG (nnet-forward[5.2.124~1396-70748]:SelectGpuIdAuto():cu-device.cc:383) Success selecting device 0 free mem ratio: 0.92697
LOG (nnet-forward[5.2.124~1396-70748]:FinalizeActiveGpu():cu-device.cc:225) The active GPU is [0]: GeForce GTX 980 Ti    free:5602M, used:473M, total:6076M, free/total:0.922033 version 5.2
nnet-concat exp/dnn4_pretrain-dbn/final.feature_transform exp/dnn4_pretrain-dbn/2.dbn - 
LOG (nnet-concat[5.2.124~1396-70748]:main():nnet-concat.cc:53) Reading exp/dnn4_pretrain-dbn/final.feature_transform
LOG (nnet-concat[5.2.124~1396-70748]:main():nnet-concat.cc:65) Concatenating exp/dnn4_pretrain-dbn/2.dbn
LOG (nnet-concat[5.2.124~1396-70748]:main():nnet-concat.cc:82) Written model to -
copy-feats scp:exp/dnn4_pretrain-dbn/train.scp.10k ark:- 
LOG (copy-feats[5.2.124~1396-70748]:main():copy-feats.cc:143) Copied 3696 feature matrices.
LOG (nnet-forward[5.2.124~1396-70748]:main():nnet-forward.cc:192) Done 3696 files in 0.35735min, (fps 52461.3)
LOG (compute-cmvn-stats[5.2.124~1396-70748]:main():compute-cmvn-stats.cc:168) Wrote global CMVN stats to standard output
LOG (compute-cmvn-stats[5.2.124~1396-70748]:main():compute-cmvn-stats.cc:171) Done accumulating CMVN stats for 3696 utterances; 0 had errors.
LOG (cmvn-to-nnet[5.2.124~1396-70748]:main():cmvn-to-nnet.cc:114) Written cmvn in nnet1 model to: exp/dnn4_pretrain-dbn/3.cmvn
# initializing 'exp/dnn4_pretrain-dbn/3.rbm.init'
# pretraining 'exp/dnn4_pretrain-dbn/3.rbm' (lrate 0.4, iters 20)
# appending RBM to exp/dnn4_pretrain-dbn/3.dbn
nnet-concat exp/dnn4_pretrain-dbn/2.dbn rbm-convert-to-nnet exp/dnn4_pretrain-dbn/3.rbm - | exp/dnn4_pretrain-dbn/3.dbn 
LOG (nnet-concat[5.2.124~1396-70748]:main():nnet-concat.cc:53) Reading exp/dnn4_pretrain-dbn/2.dbn
LOG (nnet-concat[5.2.124~1396-70748]:main():nnet-concat.cc:65) Concatenating rbm-convert-to-nnet exp/dnn4_pretrain-dbn/3.rbm - |
rbm-convert-to-nnet exp/dnn4_pretrain-dbn/3.rbm - 
LOG (rbm-convert-to-nnet[5.2.124~1396-70748]:main():rbm-convert-to-nnet.cc:69) Written model to -
LOG (nnet-concat[5.2.124~1396-70748]:main():nnet-concat.cc:82) Written model to exp/dnn4_pretrain-dbn/3.dbn

# PRE-TRAINING RBM LAYER 4
# computing cmvn stats 'exp/dnn4_pretrain-dbn/4.cmvn' for RBM initialization
WARNING (nnet-forward[5.2.124~1396-70748]:SelectGpuId():cu-device.cc:182) Suggestion: use nvidia-smi -c 3 to set compute exclusive mode
LOG (nnet-forward[5.2.124~1396-70748]:SelectGpuIdAuto():cu-device.cc:300) Selecting from 1 GPUs
LOG (nnet-forward[5.2.124~1396-70748]:SelectGpuIdAuto():cu-device.cc:315) cudaSetDevice(0): GeForce GTX 980 Ti    free:5602M, used:474M, total:6076M, free/total:0.921992
LOG (nnet-forward[5.2.124~1396-70748]:SelectGpuIdAuto():cu-device.cc:364) Trying to select device: 0 (automatically), mem_ratio: 0.921992
LOG (nnet-forward[5.2.124~1396-70748]:SelectGpuIdAuto():cu-device.cc:383) Success selecting device 0 free mem ratio: 0.921992
LOG (nnet-forward[5.2.124~1396-70748]:FinalizeActiveGpu():cu-device.cc:225) The active GPU is [0]: GeForce GTX 980 Ti    free:5572M, used:504M, total:6076M, free/total:0.917055 version 5.2
nnet-concat exp/dnn4_pretrain-dbn/final.feature_transform exp/dnn4_pretrain-dbn/3.dbn - 
LOG (nnet-concat[5.2.124~1396-70748]:main():nnet-concat.cc:53) Reading exp/dnn4_pretrain-dbn/final.feature_transform
LOG (nnet-concat[5.2.124~1396-70748]:main():nnet-concat.cc:65) Concatenating exp/dnn4_pretrain-dbn/3.dbn
LOG (nnet-concat[5.2.124~1396-70748]:main():nnet-concat.cc:82) Written model to -
copy-feats scp:exp/dnn4_pretrain-dbn/train.scp.10k ark:- 
LOG (copy-feats[5.2.124~1396-70748]:main():copy-feats.cc:143) Copied 3696 feature matrices.
LOG (nnet-forward[5.2.124~1396-70748]:main():nnet-forward.cc:192) Done 3696 files in 0.435866min, (fps 43011)
LOG (compute-cmvn-stats[5.2.124~1396-70748]:main():compute-cmvn-stats.cc:168) Wrote global CMVN stats to standard output
LOG (compute-cmvn-stats[5.2.124~1396-70748]:main():compute-cmvn-stats.cc:171) Done accumulating CMVN stats for 3696 utterances; 0 had errors.
LOG (cmvn-to-nnet[5.2.124~1396-70748]:main():cmvn-to-nnet.cc:114) Written cmvn in nnet1 model to: exp/dnn4_pretrain-dbn/4.cmvn
initializing exp/dnn4_pretrain-dbn/4.rbm.init
pretraining exp/dnn4_pretrain-dbn/4.rbm (lrate 0.4, iters 20)
# appending RBM to exp/dnn4_pretrain-dbn/4.dbn
nnet-concat exp/dnn4_pretrain-dbn/3.dbn "rbm-convert-to-nnet exp/dnn4_pretrain-dbn/4.rbm - |" exp/dnn4_pretrain-dbn/4.dbn 
LOG (nnet-concat[5.2.124~1396-70748]:main():nnet-concat.cc:53) Reading exp/dnn4_pretrain-dbn/3.dbn
LOG (nnet-concat[5.2.124~1396-70748]:main():nnet-concat.cc:65) Concatenating rbm-convert-to-nnet exp/dnn4_pretrain-dbn/4.rbm - |
rbm-convert-to-nnet exp/dnn4_pretrain-dbn/4.rbm - 
LOG (rbm-convert-to-nnet[5.2.124~1396-70748]:main():rbm-convert-to-nnet.cc:69) Written model to -
LOG (nnet-concat[5.2.124~1396-70748]:main():nnet-concat.cc:82) Written model to exp/dnn4_pretrain-dbn/4.dbn

# PRE-TRAINING RBM LAYER 5
# computing cmvn stats 'exp/dnn4_pretrain-dbn/5.cmvn' for RBM initialization
WARNING (nnet-forward[5.2.124~1396-70748]:SelectGpuId():cu-device.cc:182) Suggestion: use nvidia-smi -c 3 to set compute exclusive mode
LOG (nnet-forward[5.2.124~1396-70748]:SelectGpuIdAuto():cu-device.cc:300) Selecting from 1 GPUs
LOG (nnet-forward[5.2.124~1396-70748]:SelectGpuIdAuto():cu-device.cc:315) cudaSetDevice(0): GeForce GTX 980 Ti    free:5578M, used:498M, total:6076M, free/total:0.918022
LOG (nnet-forward[5.2.124~1396-70748]:SelectGpuIdAuto():cu-device.cc:364) Trying to select device: 0 (automatically), mem_ratio: 0.918022
LOG (nnet-forward[5.2.124~1396-70748]:SelectGpuIdAuto():cu-device.cc:383) Success selecting device 0 free mem ratio: 0.918022
LOG (nnet-forward[5.2.124~1396-70748]:FinalizeActiveGpu():cu-device.cc:225) The active GPU is [0]: GeForce GTX 980 Ti    free:5548M, used:528M, total:6076M, free/total:0.913085 version 5.2
nnet-concat exp/dnn4_pretrain-dbn/final.feature_transform exp/dnn4_pretrain-dbn/4.dbn - 
LOG (nnet-concat[5.2.124~1396-70748]:main():nnet-concat.cc:53) Reading exp/dnn4_pretrain-dbn/final.feature_transform
LOG (nnet-concat[5.2.124~1396-70748]:main():nnet-concat.cc:65) Concatenating exp/dnn4_pretrain-dbn/4.dbn
LOG (nnet-concat[5.2.124~1396-70748]:main():nnet-concat.cc:82) Written model to -
copy-feats scp:exp/dnn4_pretrain-dbn/train.scp.10k ark:- 
LOG (copy-feats[5.2.124~1396-70748]:main():copy-feats.cc:143) Copied 3696 feature matrices.
LOG (nnet-forward[5.2.124~1396-70748]:main():nnet-forward.cc:192) Done 3696 files in 0.364739min, (fps 51398.6)
LOG (compute-cmvn-stats[5.2.124~1396-70748]:main():compute-cmvn-stats.cc:168) Wrote global CMVN stats to standard output
LOG (compute-cmvn-stats[5.2.124~1396-70748]:main():compute-cmvn-stats.cc:171) Done accumulating CMVN stats for 3696 utterances; 0 had errors.
LOG (cmvn-to-nnet[5.2.124~1396-70748]:main():cmvn-to-nnet.cc:114) Written cmvn in nnet1 model to: exp/dnn4_pretrain-dbn/5.cmvn
initializing exp/dnn4_pretrain-dbn/5.rbm.init
pretraining exp/dnn4_pretrain-dbn/5.rbm (lrate 0.4, iters 20)
# appending RBM to exp/dnn4_pretrain-dbn/5.dbn
nnet-concat exp/dnn4_pretrain-dbn/4.dbn "rbm-convert-to-nnet exp/dnn4_pretrain-dbn/5.rbm - |" exp/dnn4_pretrain-dbn/5.dbn 
LOG (nnet-concat[5.2.124~1396-70748]:main():nnet-concat.cc:53) Reading exp/dnn4_pretrain-dbn/4.dbn
LOG (nnet-concat[5.2.124~1396-70748]:main():nnet-concat.cc:65) Concatenating rbm-convert-to-nnet exp/dnn4_pretrain-dbn/5.rbm - |
rbm-convert-to-nnet exp/dnn4_pretrain-dbn/5.rbm - 
LOG (rbm-convert-to-nnet[5.2.124~1396-70748]:main():rbm-convert-to-nnet.cc:69) Written model to -
LOG (nnet-concat[5.2.124~1396-70748]:main():nnet-concat.cc:82) Written model to exp/dnn4_pretrain-dbn/5.dbn

# PRE-TRAINING RBM LAYER 6
# computing cmvn stats 'exp/dnn4_pretrain-dbn/6.cmvn' for RBM initialization
WARNING (nnet-forward[5.2.124~1396-70748]:SelectGpuId():cu-device.cc:182) Suggestion: use nvidia-smi -c 3 to set compute exclusive mode
LOG (nnet-forward[5.2.124~1396-70748]:SelectGpuIdAuto():cu-device.cc:300) Selecting from 1 GPUs
LOG (nnet-forward[5.2.124~1396-70748]:SelectGpuIdAuto():cu-device.cc:315) cudaSetDevice(0): GeForce GTX 980 Ti    free:5605M, used:470M, total:6076M, free/total:0.922527
LOG (nnet-forward[5.2.124~1396-70748]:SelectGpuIdAuto():cu-device.cc:364) Trying to select device: 0 (automatically), mem_ratio: 0.922527
LOG (nnet-forward[5.2.124~1396-70748]:SelectGpuIdAuto():cu-device.cc:383) Success selecting device 0 free mem ratio: 0.922527
LOG (nnet-forward[5.2.124~1396-70748]:FinalizeActiveGpu():cu-device.cc:225) The active GPU is [0]: GeForce GTX 980 Ti    free:5575M, used:500M, total:6076M, free/total:0.91759 version 5.2
nnet-concat exp/dnn4_pretrain-dbn/final.feature_transform exp/dnn4_pretrain-dbn/5.dbn - 
LOG (nnet-concat[5.2.124~1396-70748]:main():nnet-concat.cc:53) Reading exp/dnn4_pretrain-dbn/final.feature_transform
LOG (nnet-concat[5.2.124~1396-70748]:main():nnet-concat.cc:65) Concatenating exp/dnn4_pretrain-dbn/5.dbn
LOG (nnet-concat[5.2.124~1396-70748]:main():nnet-concat.cc:82) Written model to -
copy-feats scp:exp/dnn4_pretrain-dbn/train.scp.10k ark:- 
LOG (copy-feats[5.2.124~1396-70748]:main():copy-feats.cc:143) Copied 3696 feature matrices.
LOG (nnet-forward[5.2.124~1396-70748]:main():nnet-forward.cc:192) Done 3696 files in 0.403902min, (fps 46414.9)
LOG (compute-cmvn-stats[5.2.124~1396-70748]:main():compute-cmvn-stats.cc:168) Wrote global CMVN stats to standard output
LOG (compute-cmvn-stats[5.2.124~1396-70748]:main():compute-cmvn-stats.cc:171) Done accumulating CMVN stats for 3696 utterances; 0 had errors.
LOG (cmvn-to-nnet[5.2.124~1396-70748]:main():cmvn-to-nnet.cc:114) Written cmvn in nnet1 model to: exp/dnn4_pretrain-dbn/6.cmvn
initializing exp/dnn4_pretrain-dbn/6.rbm.init
pretraining exp/dnn4_pretrain-dbn/6.rbm (lrate 0.4, iters 20)
# appending RBM to exp/dnn4_pretrain-dbn/6.dbn
nnet-concat exp/dnn4_pretrain-dbn/5.dbn "rbm-convert-to-nnet exp/dnn4_pretrain-dbn/6.rbm - |" exp/dnn4_pretrain-dbn/6.dbn 
LOG (nnet-concat[5.2.124~1396-70748]:main():nnet-concat.cc:53) Reading exp/dnn4_pretrain-dbn/5.dbn
LOG (nnet-concat[5.2.124~1396-70748]:main():nnet-concat.cc:65) Concatenating rbm-convert-to-nnet exp/dnn4_pretrain-dbn/6.rbm - |
rbm-convert-to-nnet exp/dnn4_pretrain-dbn/6.rbm - 
LOG (rbm-convert-to-nnet[5.2.124~1396-70748]:main():rbm-convert-to-nnet.cc:69) Written model to -
LOG (nnet-concat[5.2.124~1396-70748]:main():nnet-concat.cc:82) Written model to exp/dnn4_pretrain-dbn/6.dbn
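Each pre-training round ends with nnet-concat appending the freshly converted RBM to the growing DBN, so after layer 6 the `.dbn` file holds six stacked hidden layers. A toy sketch of that greedy layer-wise stacking, treating a network as an ordered list of hypothetical layer descriptors (name, input-dim, output-dim):

```python
# A DBN here is just an ordered list of component descriptors;
# nnet-concat appends one network's components after another's.
def nnet_concat(*nets):
    out = []
    for net in nets:
        out.extend(net)
    return out

# Hypothetical descriptors standing in for RBMs converted by
# rbm-convert-to-nnet (affine weights followed by a sigmoid).
rbm1 = [("AffineTransform", 440, 1024), ("Sigmoid", 1024, 1024)]
rbm2 = [("AffineTransform", 1024, 1024), ("Sigmoid", 1024, 1024)]

dbn = nnet_concat(rbm1)        # after pre-training layer 1
dbn = nnet_concat(dbn, rbm2)   # appending layer 2, and so on up to 6

# Dimension check: each component's output feeds the next one's input.
for a, b in zip(dbn, dbn[1:]):
    assert a[2] == b[1]
print(len(dbn), dbn[0][1], dbn[-1][2])
```

The same append is what each `nnet-concat N-1.dbn "rbm-convert-to-nnet N.rbm - |" N.dbn` line above performs on disk.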

# REPORT
# RBM pre-training progress (line per-layer)
exp/dnn4_pretrain-dbn/log/rbm.1.log:progress: [69.2148 60.1847 57.6252 55.9637 54.9476 54.1765 53.5664 53.1591 52.8667 52.6254 52.3768 52.223 51.9968 51.8111 51.7657 51.5485 51.4773 51.4296 51.2532 51.2139 51.1692 51.0821 51.0227 50.9377 50.9296 50.898 50.8085 50.8238 50.7534 50.7348 50.7076 50.6632 50.6615 50.6518 50.6302 50.5964 50.5804 50.5564 50.513 50.5592 50.4797 50.4993 50.51 50.4201 50.4663 50.4791 50.4227 50.4098 50.3793 50.4227 50.389 50.3265 50.4077 50.3461 50.3283 50.3104 50.3 50.3348 50.3243 50.34 50.2997 50.3534 50.3074 50.2881 50.3791 50.3117 50.334 50.3648 50.2924 50.3353 50.3746 50.3269 50.344 50.3242 50.3575 50.3607 50.3162 50.3958 50.321 50.3486 50.3444 50.3016 50.346 50.3465 50.3456 50.3346 50.3631 50.322 50.2912 50.355 50.3015 50.3125 50.3776 50.2883 50.3309 50.366 50.3244 50.3295 50.3156 50.357 50.3315 50.2989 50.3498 50.3214 50.2986 50.326 50.2989 50.3249 50.3351 50.33 50.3258 50.3376 50.2808 50.2987 50.3308 50.2853 50.2821 50.3463 50.2576 50.3191 50.3351 50.2726 50.311 50.2955 ]
exp/dnn4_pretrain-dbn/log/rbm.2.log:progress: [9.47593 6.68198 5.9925 5.79754 5.69381 5.61983 5.57938 5.55269 5.51679 5.50825 5.48539 5.46428 5.45513 5.44116 5.42658 5.40876 5.40853 5.39329 5.37245 5.37511 5.3633 5.3483 5.34547 5.3275 5.32134 5.31471 5.29894 5.29564 5.28453 5.27386 5.25925 5.25488 5.24767 5.23009 5.22815 5.2176 5.20372 5.19856 5.19204 5.18034 5.17081 5.16935 5.15679 5.14222 5.14116 5.13264 5.11946 5.11546 5.10223 5.0995 5.09251 5.08136 5.07479 5.06829 5.05639 5.05179 5.04034 5.04904 5.04462 5.05154 5.04966 5.04821 ]
exp/dnn4_pretrain-dbn/log/rbm.3.log:progress: [8.82161 5.93724 5.17477 4.88171 4.72492 4.62911 4.57975 4.54986 4.5189 4.51286 4.49688 4.47944 4.4744 4.46463 4.4546 4.44478 4.44225 4.4341 4.42407 4.42035 4.41003 4.40472 4.39895 4.38925 4.38713 4.38257 4.37139 4.36597 4.35891 4.35509 4.34409 4.33953 4.3354 4.32333 4.32584 4.3164 4.30591 4.30421 4.29929 4.2915 4.28396 4.28321 4.27594 4.26477 4.26634 4.2582 4.25073 4.24657 4.23891 4.23711 4.23202 4.22429 4.22139 4.22028 4.21185 4.20091 4.20112 4.20474 4.19747 4.20566 4.20243 4.20412 ]
exp/dnn4_pretrain-dbn/log/rbm.4.log:progress: [6.47138 4.39349 3.95403 3.77159 3.67085 3.60859 3.57098 3.54628 3.51907 3.51778 3.50106 3.48814 3.48639 3.47567 3.46829 3.46467 3.4607 3.45052 3.4467 3.44301 3.44124 3.43569 3.42892 3.4242 3.42692 3.41555 3.4108 3.40813 3.40705 3.40038 3.39258 3.39316 3.38946 3.37972 3.3854 3.37847 3.37083 3.37235 3.36492 3.35975 3.36044 3.35709 3.34989 3.34552 3.34573 3.34155 3.34065 3.33502 3.33037 3.33208 3.32376 3.32243 3.31974 3.31912 3.31325 3.30906 3.30656 3.31356 3.30566 3.31492 3.31235 3.30959 ]
exp/dnn4_pretrain-dbn/log/rbm.5.log:progress: [6.31728 4.20671 3.63834 3.39393 3.24875 3.17134 3.13306 3.10876 3.08828 3.08771 3.07425 3.06279 3.06532 3.0552 3.04786 3.04868 3.04386 3.03695 3.03614 3.03225 3.02967 3.02828 3.01727 3.01428 3.01825 3.01191 3.0072 3.00605 3.00383 2.99691 2.99246 2.99273 2.99128 2.98259 2.98507 2.98083 2.97321 2.97814 2.97144 2.96666 2.96739 2.96531 2.95768 2.95748 2.95505 2.95458 2.95264 2.94641 2.94112 2.94598 2.94086 2.93676 2.93529 2.93577 2.92937 2.92661 2.92618 2.92859 2.92679 2.93277 2.93028 2.92924 ]
exp/dnn4_pretrain-dbn/log/rbm.6.log:progress: [4.69362 3.23743 2.88821 2.72413 2.63688 2.58968 2.55744 2.53789 2.51894 2.51731 2.50398 2.49555 2.49793 2.49135 2.48345 2.48715 2.47985 2.47587 2.47577 2.46986 2.46492 2.46922 2.46131 2.46046 2.46231 2.45514 2.45273 2.45314 2.45089 2.44601 2.44129 2.44196 2.44055 2.43424 2.44036 2.43207 2.42852 2.43271 2.42498 2.42388 2.4243 2.42063 2.41817 2.41862 2.41386 2.4136 2.4153 2.40774 2.40672 2.41052 2.40388 2.40278 2.4025 2.40106 2.39694 2.39437 2.39436 2.39827 2.39453 2.40005 2.3956 2.39306 ]

Pre-training finished.
# Removing features tmpdir /tmp/kaldi.0Hlp @ welen-pc
train.ark
# Accounting: time=2620 threads=1
# Ended (code 0) at Fri Sep 15 17:05:08 CST 2017, elapsed time 2620 seconds
# steps/nnet/train.sh --feature-transform exp/dnn4_pretrain-dbn/final.feature_transform --dbn exp/dnn4_pretrain-dbn/6.dbn --hid-layers 0 --learn-rate 0.008 data-fmllr-tri3/train_tr90 data-fmllr-tri3/train_cv10 data/lang exp/tri3_ali exp/tri3_ali exp/dnn4_pretrain-dbn_dnn 
# Started at Fri Sep 15 17:05:08 CST 2017
#
steps/nnet/train.sh --feature-transform exp/dnn4_pretrain-dbn/final.feature_transform --dbn exp/dnn4_pretrain-dbn/6.dbn --hid-layers 0 --learn-rate 0.008 data-fmllr-tri3/train_tr90 data-fmllr-tri3/train_cv10 data/lang exp/tri3_ali exp/tri3_ali exp/dnn4_pretrain-dbn_dnn

# INFO
steps/nnet/train.sh : Training Neural Network
     dir       : exp/dnn4_pretrain-dbn_dnn 
     Train-set : data-fmllr-tri3/train_tr90 3320, exp/tri3_ali 
     CV-set    : data-fmllr-tri3/train_cv10 376 exp/tri3_ali 

LOG ([5.2.124~1396-70748]:main():cuda-gpu-available.cc:49) 

### IS CUDA GPU AVAILABLE? 'welen-pc' ###
WARNING ([5.2.124~1396-70748]:SelectGpuId():cu-device.cc:182) Suggestion: use nvidia-smi -c 3 to set compute exclusive mode
LOG ([5.2.124~1396-70748]:SelectGpuIdAuto():cu-device.cc:300) Selecting from 1 GPUs
LOG ([5.2.124~1396-70748]:SelectGpuIdAuto():cu-device.cc:315) cudaSetDevice(0): GeForce GTX 980 Ti    free:5603M, used:473M, total:6076M, free/total:0.922136
LOG ([5.2.124~1396-70748]:SelectGpuIdAuto():cu-device.cc:364) Trying to select device: 0 (automatically), mem_ratio: 0.922136
LOG ([5.2.124~1396-70748]:SelectGpuIdAuto():cu-device.cc:383) Success selecting device 0 free mem ratio: 0.922136
LOG ([5.2.124~1396-70748]:FinalizeActiveGpu():cu-device.cc:225) The active GPU is [0]: GeForce GTX 980 Ti    free:5573M, used:503M, total:6076M, free/total:0.917199 version 5.2
### HURRAY, WE GOT A CUDA GPU FOR COMPUTATION!!! ##

### Testing CUDA setup with a small computation (setup = cuda-toolkit + gpu-driver + kaldi):
### Test OK!

# PREPARING ALIGNMENTS
Using PDF targets from dirs exp/tri3_ali exp/tri3_ali
hmm-info exp/tri3_ali/final.mdl 
copy-transition-model --binary=false exp/tri3_ali/final.mdl exp/dnn4_pretrain-dbn_dnn/final.mdl 
LOG (copy-transition-model[5.2.124~1396-70748]:main():copy-transition-model.cc:62) Copied transition model.

# PREPARING FEATURES
# re-saving features to local disk,
copy-feats --compress=true scp:data-fmllr-tri3/train_tr90/feats.scp ark,scp:/tmp/kaldi.B9DQ/train.ark,exp/dnn4_pretrain-dbn_dnn/train_sorted.scp 
LOG (copy-feats[5.2.124~1396-70748]:main():copy-feats.cc:143) Copied 3320 feature matrices.
copy-feats --compress=true scp:data-fmllr-tri3/train_cv10/feats.scp ark,scp:/tmp/kaldi.B9DQ/cv.ark,exp/dnn4_pretrain-dbn_dnn/cv.scp 
LOG (copy-feats[5.2.124~1396-70748]:main():copy-feats.cc:143) Copied 376 feature matrices.
# importing feature settings from dir 'exp/dnn4_pretrain-dbn'
# cmvn_opts='' delta_opts='' ivector_dim=''
# 'apply-cmvn' is not used,
feat-to-dim "ark:copy-feats scp:exp/dnn4_pretrain-dbn_dnn/train.scp ark:- |" - 
copy-feats scp:exp/dnn4_pretrain-dbn_dnn/train.scp ark:- 
WARNING (feat-to-dim[5.2.124~1396-70748]:Close():kaldi-io.cc:501) Pipe copy-feats scp:exp/dnn4_pretrain-dbn_dnn/train.scp ark:- | had nonzero return status 36096
# feature dim : 40 (input of 'feature_transform')
# importing 'feature_transform' from 'exp/dnn4_pretrain-dbn/final.feature_transform'

### Showing the final 'feature_transform':
nnet-info exp/dnn4_pretrain-dbn_dnn/imported_final.feature_transform 
LOG (nnet-info[5.2.124~1396-70748]:main():nnet-info.cc:57) Printed info about exp/dnn4_pretrain-dbn_dnn/imported_final.feature_transform
num-components 3
input-dim 40
output-dim 440
number-of-parameters 0.00088 millions
component 1 : <Splice>, input-dim 40, output-dim 440, 
  frame_offsets [ -5 -4 -3 -2 -1 0 1 2 3 4 5 ]
component 2 : <AddShift>, input-dim 440, output-dim 440, 
  shift_data ( min -0.190971, max 0.0653268, mean -0.00386261, stddev 0.0387812, skewness -2.779, kurtosis 9.50268 ) , lr-coef 0
component 3 : <Rescale>, input-dim 440, output-dim 440, 
  scale_data ( min 0.318242, max 0.975994, mean 0.761088, stddev 0.156292, skewness -0.679222, kurtosis -0.139114 ) , lr-coef 0
###
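The feature_transform printed above is three components: an 11-frame `<Splice>` (40 → 440 dims) followed by per-dimension `<AddShift>` and `<Rescale>` with lr-coef 0, i.e. frozen normalization. A small Python sketch of the same pipeline using toy sizes (offsets -1..1 instead of -5..5; out-of-range offsets are assumed to clamp to the edge frames, as Kaldi does):

```python
# <Splice>: stack neighbouring frames, clamping at utterance boundaries.
def splice(frames, offsets):
    T = len(frames)
    out = []
    for t in range(T):
        row = []
        for o in offsets:
            row.extend(frames[min(max(t + o, 0), T - 1)])
        out.append(row)
    return out

# <AddShift> and <Rescale>: fixed per-dimension normalization.
def add_shift(frames, shift):
    return [[x + s for x, s in zip(row, shift)] for row in frames]

def rescale(frames, scale):
    return [[x * s for x, s in zip(row, scale)] for row in frames]

feats = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]    # 3 frames, dim 2
spliced = splice(feats, [-1, 0, 1])              # dim 2 -> 6 (cf. 40 -> 440)
out = rescale(add_shift(spliced, [0.0] * 6), [1.0] * 6)
print(out[0])
```

With identity shift/scale the output equals the spliced features; in the real transform the shift and scale hold the CMVN statistics shown in the `shift_data`/`scale_data` summaries.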

# NN-INITIALIZATION
# getting input/output dims :
feat-to-dim "ark:copy-feats scp:exp/dnn4_pretrain-dbn_dnn/train.scp ark:- | nnet-forward \"nnet-concat exp/dnn4_pretrain-dbn_dnn/final.feature_transform 'exp/dnn4_pretrain-dbn/6.dbn' -|\" ark:- ark:- |" - 
copy-feats scp:exp/dnn4_pretrain-dbn_dnn/train.scp ark:- 
nnet-forward "nnet-concat exp/dnn4_pretrain-dbn_dnn/final.feature_transform 'exp/dnn4_pretrain-dbn/6.dbn' -|" ark:- ark:- 
LOG (nnet-forward[5.2.124~1396-70748]:SelectGpuId():cu-device.cc:110) Manually selected to compute on CPU.
nnet-concat exp/dnn4_pretrain-dbn_dnn/final.feature_transform exp/dnn4_pretrain-dbn/6.dbn - 
LOG (nnet-concat[5.2.124~1396-70748]:main():nnet-concat.cc:53) Reading exp/dnn4_pretrain-dbn_dnn/final.feature_transform
LOG (nnet-concat[5.2.124~1396-70748]:main():nnet-concat.cc:65) Concatenating exp/dnn4_pretrain-dbn/6.dbn
LOG (nnet-concat[5.2.124~1396-70748]:main():nnet-concat.cc:82) Written model to -
WARNING (feat-to-dim[5.2.124~1396-70748]:Close():kaldi-io.cc:501) Pipe copy-feats scp:exp/dnn4_pretrain-dbn_dnn/train.scp ark:- | nnet-forward "nnet-concat exp/dnn4_pretrain-dbn_dnn/final.feature_transform 'exp/dnn4_pretrain-dbn/6.dbn' -|" ark:- ark:- | had nonzero return status 36096
# generating network prototype exp/dnn4_pretrain-dbn_dnn/nnet.proto
[utils/nnet/make_nnet_proto.py, 1024, 1920, 0, 1024]
# initializing the NN 'exp/dnn4_pretrain-dbn_dnn/nnet.proto' -> 'exp/dnn4_pretrain-dbn_dnn/nnet.init'
nnet-initialize --seed=777 exp/dnn4_pretrain-dbn_dnn/nnet.proto exp/dnn4_pretrain-dbn_dnn/nnet.init 
VLOG[1] (nnet-initialize[5.2.124~1396-70748]:Init():nnet-nnet.cc:314) <AffineTransform> <InputDim> 1024 <OutputDim> 1920 <BiasMean> 0.000000 <BiasRange> 0.000000 <ParamStddev> 0.091225
VLOG[1] (nnet-initialize[5.2.124~1396-70748]:Init():nnet-nnet.cc:314) <Softmax> <InputDim> 1920 <OutputDim> 1920
VLOG[1] (nnet-initialize[5.2.124~1396-70748]:Init():nnet-nnet.cc:314) </NnetProto>
LOG (nnet-initialize[5.2.124~1396-70748]:main():nnet-initialize.cc:63) Written initialized model to exp/dnn4_pretrain-dbn_dnn/nnet.init
nnet-concat exp/dnn4_pretrain-dbn/6.dbn exp/dnn4_pretrain-dbn_dnn/nnet.init exp/dnn4_pretrain-dbn_dnn/nnet_dbn_dnn.init 
LOG (nnet-concat[5.2.124~1396-70748]:main():nnet-concat.cc:53) Reading exp/dnn4_pretrain-dbn/6.dbn
LOG (nnet-concat[5.2.124~1396-70748]:main():nnet-concat.cc:65) Concatenating exp/dnn4_pretrain-dbn_dnn/nnet.init
LOG (nnet-concat[5.2.124~1396-70748]:main():nnet-concat.cc:82) Written model to exp/dnn4_pretrain-dbn_dnn/nnet_dbn_dnn.init

# RUNNING THE NN-TRAINING SCHEDULER
steps/nnet/train_scheduler.sh --feature-transform exp/dnn4_pretrain-dbn_dnn/final.feature_transform --learn-rate 0.008 exp/dnn4_pretrain-dbn_dnn/nnet_dbn_dnn.init "ark:copy-feats scp:exp/dnn4_pretrain-dbn_dnn/train.scp ark:- |" "ark:copy-feats scp:exp/dnn4_pretrain-dbn_dnn/cv.scp ark:- |" "ark:ali-to-pdf exp/tri3_ali/final.mdl 'ark:gunzip -c exp/tri3_ali/ali.*.gz |' ark:- | ali-to-post ark:- ark:- |" "ark:ali-to-pdf exp/tri3_ali/final.mdl 'ark:gunzip -c exp/tri3_ali/ali.*.gz |' ark:- | ali-to-post ark:- ark:- |" exp/dnn4_pretrain-dbn_dnn
CROSSVAL PRERUN AVG.LOSS 7.7375 (Xent),
ITERATION 01: TRAIN AVG.LOSS 2.1049, (lrate0.008), CROSSVAL AVG.LOSS 1.9152, nnet accepted (nnet_dbn_dnn_iter01_learnrate0.008_tr2.1049_cv1.9152)
ITERATION 02: TRAIN AVG.LOSS 1.4000, (lrate0.008), CROSSVAL AVG.LOSS 1.8016, nnet accepted (nnet_dbn_dnn_iter02_learnrate0.008_tr1.4000_cv1.8016)
ITERATION 03: TRAIN AVG.LOSS 1.1952, (lrate0.008), CROSSVAL AVG.LOSS 1.7656, nnet accepted (nnet_dbn_dnn_iter03_learnrate0.008_tr1.1952_cv1.7656)
ITERATION 04: TRAIN AVG.LOSS 1.0485, (lrate0.008), CROSSVAL AVG.LOSS 1.7654, nnet accepted (nnet_dbn_dnn_iter04_learnrate0.008_tr1.0485_cv1.7654)
ITERATION 05: TRAIN AVG.LOSS 0.8894, (lrate0.004), CROSSVAL AVG.LOSS 1.6597, nnet accepted (nnet_dbn_dnn_iter05_learnrate0.004_tr0.8894_cv1.6597)
ITERATION 06: TRAIN AVG.LOSS 0.8144, (lrate0.002), CROSSVAL AVG.LOSS 1.5910, nnet accepted (nnet_dbn_dnn_iter06_learnrate0.002_tr0.8144_cv1.5910)
ITERATION 07: TRAIN AVG.LOSS 0.7825, (lrate0.001), CROSSVAL AVG.LOSS 1.5467, nnet accepted (nnet_dbn_dnn_iter07_learnrate0.001_tr0.7825_cv1.5467)
ITERATION 08: TRAIN AVG.LOSS 0.7686, (lrate0.0005), CROSSVAL AVG.LOSS 1.5175, nnet accepted (nnet_dbn_dnn_iter08_learnrate0.0005_tr0.7686_cv1.5175)
ITERATION 09: TRAIN AVG.LOSS 0.7614, (lrate0.00025), CROSSVAL AVG.LOSS 1.4999, nnet accepted (nnet_dbn_dnn_iter09_learnrate0.00025_tr0.7614_cv1.4999)
ITERATION 10: TRAIN AVG.LOSS 0.7570, (lrate0.000125), CROSSVAL AVG.LOSS 1.4902, nnet accepted (nnet_dbn_dnn_iter10_learnrate0.000125_tr0.7570_cv1.4902)
ITERATION 11: TRAIN AVG.LOSS 0.7540, (lrate6.25e-05), CROSSVAL AVG.LOSS 1.4857, nnet accepted (nnet_dbn_dnn_iter11_learnrate6.25e-05_tr0.7540_cv1.4857)
ITERATION 12: TRAIN AVG.LOSS 0.7520, (lrate3.125e-05), CROSSVAL AVG.LOSS 1.4839, nnet accepted (nnet_dbn_dnn_iter12_learnrate3.125e-05_tr0.7520_cv1.4839)
ITERATION 13: TRAIN AVG.LOSS 0.7508, (lrate1.5625e-05), CROSSVAL AVG.LOSS 1.4832, nnet accepted (nnet_dbn_dnn_iter13_learnrate1.5625e-05_tr0.7508_cv1.4832)
finished, too small rel. improvement 0.000478475
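The learning rates and the final "too small rel. improvement" message follow the scheduler's "newbob" rule. Below is a simplified Python sketch, assuming the default thresholds `start_halving_impr=0.01` and `end_halving_impr=0.001` and ignoring model rejection and min-iters handling; fed the cross-validation losses from this log, it reproduces the halving pattern 0.008 → 0.004 → ... → 1.5625e-05 seen above:

```python
# Simplified "newbob" schedule: keep the learning rate while the relative
# CV-loss improvement stays above start_halving_impr; after it drops below,
# halve the rate every iteration; stop once halving is active and the
# improvement falls below end_halving_impr.
def newbob_schedule(cv_losses, lrate=0.008,
                    start_halving_impr=0.01, end_halving_impr=0.001):
    rates = []
    halving = False
    prev = cv_losses[0]                     # CV loss of the pre-run
    for cv in cv_losses[1:]:
        if halving:
            lrate /= 2.0
        rates.append(lrate)
        rel_impr = (prev - cv) / prev
        if halving and rel_impr < end_halving_impr:
            break                           # "too small rel. improvement"
        if rel_impr < start_halving_impr:
            halving = True
        prev = cv
    return rates

# CV losses copied from the log above (pre-run value first).
cv = [7.7375, 1.9152, 1.8016, 1.7656, 1.7654, 1.6597, 1.5910,
      1.5467, 1.5175, 1.4999, 1.4902, 1.4857, 1.4839, 1.4832]
print(newbob_schedule(cv))
```

Note how the near-flat CV loss at iteration 4 (1.7656 → 1.7654) is what triggers halving from iteration 5 onward, even though the loss then improves substantially again.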
steps/nnet/train_scheduler.sh: Succeeded training the Neural Network : exp/dnn4_pretrain-dbn_dnn/final.nnet
steps/nnet/train.sh: Successfully finished. exp/dnn4_pretrain-dbn_dnn
steps/nnet/decode.sh --nj 20 --cmd run.pl --mem 4G --acwt 0.2 exp/tri3/graph data-fmllr-tri3/test exp/dnn4_pretrain-dbn_dnn/decode_test
# Removing features tmpdir /tmp/kaldi.B9DQ @ welen-pc
cv.ark
train.ark
# Accounting: time=375 threads=1
# Ended (code 0) at Fri Sep 15 17:11:23 CST 2017, elapsed time 375 seconds
steps/nnet/decode.sh --nj 20 --cmd run.pl --mem 4G --acwt 0.2 exp/tri3/graph data-fmllr-tri3/dev exp/dnn4_pretrain-dbn_dnn/decode_dev
steps/nnet/align.sh --nj 20 --cmd run.pl --mem 4G data-fmllr-tri3/train data/lang exp/dnn4_pretrain-dbn_dnn exp/dnn4_pretrain-dbn_dnn_ali
steps/nnet/align.sh: aligning data data-fmllr-tri3/train using nnet/model exp/dnn4_pretrain-dbn_dnn, putting alignments in exp/dnn4_pretrain-dbn_dnn_ali
steps/nnet/align.sh: done aligning data.
steps/nnet/make_denlats.sh --nj 20 --cmd run.pl --mem 4G --acwt 0.2 --lattice-beam 10.0 --beam 18.0 data-fmllr-tri3/train data/lang exp/dnn4_pretrain-dbn_dnn exp/dnn4_pretrain-dbn_dnn_denlats
Making unigram grammar FST in exp/dnn4_pretrain-dbn_dnn_denlats/lang
Compiling decoding graph in exp/dnn4_pretrain-dbn_dnn_denlats/dengraph
tree-info exp/dnn4_pretrain-dbn_dnn/tree 
tree-info exp/dnn4_pretrain-dbn_dnn/tree 
fsttablecompose exp/dnn4_pretrain-dbn_dnn_denlats/lang/L_disambig.fst exp/dnn4_pretrain-dbn_dnn_denlats/lang/G.fst 
fstminimizeencoded 
fstdeterminizestar --use-log=true 
fstpushspecial 
fstisstochastic exp/dnn4_pretrain-dbn_dnn_denlats/lang/tmp/LG.fst 
1.27271e-05 1.27271e-05
fstcomposecontext --context-size=3 --central-position=1 --read-disambig-syms=exp/dnn4_pretrain-dbn_dnn_denlats/lang/phones/disambig.int --write-disambig-syms=exp/dnn4_pretrain-dbn_dnn_denlats/lang/tmp/disambig_ilabels_3_1.int exp/dnn4_pretrain-dbn_dnn_denlats/lang/tmp/ilabels_3_1.19787 
fstisstochastic exp/dnn4_pretrain-dbn_dnn_denlats/lang/tmp/CLG_3_1.fst 
1.27657e-05 0
make-h-transducer --disambig-syms-out=exp/dnn4_pretrain-dbn_dnn_denlats/dengraph/disambig_tid.int --transition-scale=1.0 exp/dnn4_pretrain-dbn_dnn_denlats/lang/tmp/ilabels_3_1 exp/dnn4_pretrain-dbn_dnn/tree exp/dnn4_pretrain-dbn_dnn/final.mdl 
fstdeterminizestar --use-log=true 
fsttablecompose exp/dnn4_pretrain-dbn_dnn_denlats/dengraph/Ha.fst exp/dnn4_pretrain-dbn_dnn_denlats/lang/tmp/CLG_3_1.fst 
fstrmsymbols exp/dnn4_pretrain-dbn_dnn_denlats/dengraph/disambig_tid.int 
fstminimizeencoded 
fstrmepslocal 
fstisstochastic exp/dnn4_pretrain-dbn_dnn_denlats/dengraph/HCLGa.fst 
0.000473932 -0.000484132
add-self-loops --self-loop-scale=0.1 --reorder=true exp/dnn4_pretrain-dbn_dnn/final.mdl 
steps/nnet/make_denlats.sh: generating denlats from data data-fmllr-tri3/train, putting lattices in exp/dnn4_pretrain-dbn_dnn_denlats
steps/nnet/make_denlats.sh: done generating denominator lattices.
steps/nnet/train_mpe.sh --cmd run.pl --gpu 1 --num-iters 6 --acwt 0.2 --do-smbr true data-fmllr-tri3/train data/lang exp/dnn4_pretrain-dbn_dnn exp/dnn4_pretrain-dbn_dnn_ali exp/dnn4_pretrain-dbn_dnn_denlats exp/dnn4_pretrain-dbn_dnn_smbr
Pass 1 (learnrate 0.00001)
 TRAINING FINISHED; Time taken = 2.88075 min; processed 6507.71 frames per second.
 Done 3696 files, 0 with no reference alignments, 0 with no lattices, 0 with other errors.
 Overall average frame-accuracy is 0.869688 over 1124823 frames.
Pass 2 (learnrate 1e-05)
 TRAINING FINISHED; Time taken = 2.87964 min; processed 6510.22 frames per second.
 Done 3696 files, 0 with no reference alignments, 0 with no lattices, 0 with other errors.
 Overall average frame-accuracy is 0.876663 over 1124823 frames.
Pass 3 (learnrate 1e-05)
 TRAINING FINISHED; Time taken = 2.89735 min; processed 6470.42 frames per second.
 Done 3696 files, 0 with no reference alignments, 0 with no lattices, 0 with other errors.
 Overall average frame-accuracy is 0.880725 over 1124823 frames.
Pass 4 (learnrate 1e-05)
 TRAINING FINISHED; Time taken = 2.85592 min; processed 6564.28 frames per second.
 Done 3696 files, 0 with no reference alignments, 0 with no lattices, 0 with other errors.
 Overall average frame-accuracy is 0.883655 over 1124823 frames.
Pass 5 (learnrate 1e-05)
 TRAINING FINISHED; Time taken = 2.89002 min; processed 6486.83 frames per second.
 Done 3696 files, 0 with no reference alignments, 0 with no lattices, 0 with other errors.
 Overall average frame-accuracy is 0.886018 over 1124823 frames.
Pass 6 (learnrate 1e-05)
 TRAINING FINISHED; Time taken = 2.8828 min; processed 6503.08 frames per second.
 Done 3696 files, 0 with no reference alignments, 0 with no lattices, 0 with other errors.
 Overall average frame-accuracy is 0.888026 over 1124823 frames.
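The six sMBR passes show the typical sequence-training pattern: frame accuracy rises every pass while the per-pass gain shrinks. A quick check over the numbers from the log:

```python
# Frame accuracies from the six sMBR passes above.
acc = [0.869688, 0.876663, 0.880725, 0.883655, 0.886018, 0.888026]

# Per-pass gains: all positive (monotone improvement) and strictly
# decreasing (diminishing returns at the fixed 1e-05 learn-rate).
gains = [b - a for a, b in zip(acc, acc[1:])]
assert all(g > 0 for g in gains)
assert all(g2 < g1 for g1, g2 in zip(gains, gains[1:]))
print(gains)
```

That flattening curve is why the recipe decodes both iteration 1 and iteration 6 models below rather than training further.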
MPE/sMBR training finished
Re-estimating priors by forwarding 10k utterances from training set.
steps/nnet/make_priors.sh --cmd run.pl --mem 4G --nj 20 data-fmllr-tri3/train exp/dnn4_pretrain-dbn_dnn_smbr
Accumulating prior stats by forwarding data-fmllr-tri3/train with exp/dnn4_pretrain-dbn_dnn_smbr
Succeeded creating prior counts exp/dnn4_pretrain-dbn_dnn_smbr/prior_counts from data-fmllr-tri3/train
steps/nnet/train_mpe.sh: Done. exp/dnn4_pretrain-dbn_dnn_smbr
steps/nnet/decode.sh --nj 20 --cmd run.pl --mem 4G --nnet exp/dnn4_pretrain-dbn_dnn_smbr/1.nnet --acwt 0.2 exp/tri3/graph data-fmllr-tri3/test exp/dnn4_pretrain-dbn_dnn_smbr/decode_test_it1
steps/nnet/decode.sh --nj 20 --cmd run.pl --mem 4G --nnet exp/dnn4_pretrain-dbn_dnn_smbr/1.nnet --acwt 0.2 exp/tri3/graph data-fmllr-tri3/dev exp/dnn4_pretrain-dbn_dnn_smbr/decode_dev_it1
steps/nnet/decode.sh --nj 20 --cmd run.pl --mem 4G --nnet exp/dnn4_pretrain-dbn_dnn_smbr/6.nnet --acwt 0.2 exp/tri3/graph data-fmllr-tri3/test exp/dnn4_pretrain-dbn_dnn_smbr/decode_test_it6
steps/nnet/decode.sh --nj 20 --cmd run.pl --mem 4G --nnet exp/dnn4_pretrain-dbn_dnn_smbr/6.nnet --acwt 0.2 exp/tri3/graph data-fmllr-tri3/dev exp/dnn4_pretrain-dbn_dnn_smbr/decode_dev_it6
Success
============================================================================
                    Getting Results [see RESULTS file]                    
============================================================================
%WER 31.9 | 400 15057 | 72.1 19.9 8.0 4.0 31.9 100.0 | -0.802 | exp/mono/decode_dev/score_4/ctm_39phn.filt.sys
%WER 24.6 | 400 15057 | 79.4 15.5 5.0 4.1 24.6 99.5 | -0.142 | exp/tri1/decode_dev/score_10/ctm_39phn.filt.sys
%WER 23.0 | 400 15057 | 80.6 14.4 5.0 3.6 23.0 99.5 | -0.238 | exp/tri2/decode_dev/score_10/ctm_39phn.filt.sys
%WER 20.0 | 400 15057 | 83.0 12.5 4.5 3.0 20.0 99.3 | -0.580 | exp/tri3/decode_dev/score_10/ctm_39phn.filt.sys
%WER 23.3 | 400 15057 | 80.0 14.8 5.2 3.3 23.3 99.8 | -0.169 | exp/tri3/decode_dev.si/score_10/ctm_39phn.filt.sys
%WER 20.8 | 400 15057 | 82.1 12.6 5.3 2.9 20.8 99.8 | -0.519 | exp/tri4_nnet/decode_dev/score_5/ctm_39phn.filt.sys
%WER 18.0 | 400 15057 | 84.4 11.1 4.5 2.4 18.0 99.3 | -0.150 | exp/sgmm2_4/decode_dev/score_10/ctm_39phn.filt.sys
%WER 18.1 | 400 15057 | 85.4 11.2 3.4 3.5 18.1 98.5 | -0.389 | exp/sgmm2_4_mmi_b0.1/decode_dev_it1/score_7/ctm_39phn.filt.sys
%WER 18.2 | 400 15057 | 84.5 11.2 4.3 2.7 18.2 99.0 | -0.155 | exp/sgmm2_4_mmi_b0.1/decode_dev_it2/score_10/ctm_39phn.filt.sys
%WER 18.2 | 400 15057 | 84.9 11.2 3.9 3.1 18.2 99.0 | -0.231 | exp/sgmm2_4_mmi_b0.1/decode_dev_it3/score_9/ctm_39phn.filt.sys
%WER 18.3 | 400 15057 | 84.5 11.3 4.2 2.8 18.3 99.0 | -0.170 | exp/sgmm2_4_mmi_b0.1/decode_dev_it4/score_10/ctm_39phn.filt.sys
%WER 17.2 | 400 15057 | 85.4 10.5 4.1 2.6 17.2 99.5 | -1.107 | exp/dnn4_pretrain-dbn_dnn/decode_dev/score_4/ctm_39phn.filt.sys
%WER 17.2 | 400 15057 | 85.6 10.5 3.9 2.8 17.2 99.3 | -1.070 | exp/dnn4_pretrain-dbn_dnn_smbr/decode_dev_it1/score_4/ctm_39phn.filt.sys
%WER 17.1 | 400 15057 | 86.1 10.6 3.3 3.3 17.1 99.3 | -1.095 | exp/dnn4_pretrain-dbn_dnn_smbr/decode_dev_it6/score_4/ctm_39phn.filt.sys
%WER 16.7 | 400 15057 | 86.1 10.9 3.0 2.8 16.7 98.8 | -0.124 | exp/combine_2/decode_dev_it1/score_6/ctm_39phn.filt.sys
%WER 16.8 | 400 15057 | 85.8 11.0 3.3 2.6 16.8 99.0 | -0.025 | exp/combine_2/decode_dev_it2/score_7/ctm_39phn.filt.sys
%WER 16.9 | 400 15057 | 86.0 11.0 3.0 2.8 16.9 98.8 | -0.121 | exp/combine_2/decode_dev_it3/score_6/ctm_39phn.filt.sys
%WER 16.8 | 400 15057 | 85.8 11.0 3.2 2.7 16.8 98.8 | -0.035 | exp/combine_2/decode_dev_it4/score_7/ctm_39phn.filt.sys
%WER 32.0 | 192 7215 | 71.3 19.3 9.5 3.3 32.0 100.0 | -0.425 | exp/mono/decode_test/score_5/ctm_39phn.filt.sys
%WER 25.7 | 192 7215 | 77.9 16.8 5.4 3.5 25.7 100.0 | -0.105 | exp/tri1/decode_test/score_10/ctm_39phn.filt.sys
%WER 24.3 | 192 7215 | 79.4 15.1 5.5 3.7 24.3 100.0 | -0.252 | exp/tri2/decode_test/score_10/ctm_39phn.filt.sys
%WER 21.5 | 192 7215 | 81.6 13.5 4.9 3.1 21.5 99.5 | -0.583 | exp/tri3/decode_test/score_10/ctm_39phn.filt.sys
%WER 24.1 | 192 7215 | 79.0 15.5 5.5 3.1 24.1 99.5 | -0.193 | exp/tri3/decode_test.si/score_10/ctm_39phn.filt.sys
%WER 23.0 | 192 7215 | 80.1 13.7 6.2 3.0 23.0 99.5 | -0.481 | exp/tri4_nnet/decode_test/score_5/ctm_39phn.filt.sys
%WER 19.4 | 192 7215 | 83.5 12.2 4.4 2.9 19.4 100.0 | -0.326 | exp/sgmm2_4/decode_test/score_8/ctm_39phn.filt.sys
%WER 19.6 | 192 7215 | 84.1 12.1 3.8 3.7 19.6 100.0 | -0.485 | exp/sgmm2_4_mmi_b0.1/decode_test_it1/score_7/ctm_39phn.filt.sys
%WER 19.9 | 192 7215 | 83.5 12.4 4.1 3.4 19.9 99.5 | -0.345 | exp/sgmm2_4_mmi_b0.1/decode_test_it2/score_8/ctm_39phn.filt.sys
%WER 20.0 | 192 7215 | 82.9 12.5 4.7 2.9 20.0 99.5 | -0.198 | exp/sgmm2_4_mmi_b0.1/decode_test_it3/score_10/ctm_39phn.filt.sys
%WER 20.1 | 192 7215 | 83.5 12.5 4.0 3.5 20.1 99.5 | -0.365 | exp/sgmm2_4_mmi_b0.1/decode_test_it4/score_8/ctm_39phn.filt.sys
%WER 18.4 | 192 7215 | 83.8 11.2 5.0 2.2 18.4 99.5 | -0.607 | exp/dnn4_pretrain-dbn_dnn/decode_test/score_6/ctm_39phn.filt.sys
%WER 18.4 | 192 7215 | 84.6 11.2 4.2 3.0 18.4 99.5 | -1.156 | exp/dnn4_pretrain-dbn_dnn_smbr/decode_test_it1/score_4/ctm_39phn.filt.sys
%WER 18.4 | 192 7215 | 84.8 11.3 3.9 3.2 18.4 99.0 | -0.787 | exp/dnn4_pretrain-dbn_dnn_smbr/decode_test_it6/score_5/ctm_39phn.filt.sys
%WER 18.4 | 192 7215 | 84.5 12.0 3.5 2.9 18.4 99.5 | -0.103 | exp/combine_2/decode_test_it1/score_6/ctm_39phn.filt.sys
%WER 18.3 | 192 7215 | 84.7 11.8 3.5 3.0 18.3 99.5 | -0.116 | exp/combine_2/decode_test_it2/score_6/ctm_39phn.filt.sys
%WER 18.3 | 192 7215 | 84.8 11.8 3.4 3.1 18.3 99.5 | -0.121 | exp/combine_2/decode_test_it3/score_6/ctm_39phn.filt.sys
%WER 18.2 | 192 7215 | 85.0 11.8 3.2 3.3 18.2 99.5 | -0.287 | exp/combine_2/decode_test_it4/score_5/ctm_39phn.filt.sys
============================================================================
Finished successfully on Fri Sep 15 17:39:16 CST 2017
============================================================================
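Each result line above is a one-line summary of a sclite `.sys` report: the overall WER, then sentence and word counts, then Corr/Sub/Del/Ins/Err/S.Err percentages, a mean log-likelihood-style score, and the scored file. A small Python sketch (not Kaldi's scorer, just a parser for this summary format) that pulls the fields out and sanity-checks that WER = Sub + Del + Ins up to rounding:

```python
import re

# One of the dev-set result lines from the table above.
line = ("%WER 17.1 | 400 15057 | 86.1 10.6 3.3 3.3 17.1 99.3 | -1.095 | "
        "exp/dnn4_pretrain-dbn_dnn_smbr/decode_dev_it6/score_4/ctm_39phn.filt.sys")

m = re.match(r"%WER ([\d.]+) \| (\d+) (\d+) \| ([\d. ]+) \| ([-\d.]+) \| (\S+)",
             line)
wer = float(m.group(1))
sents, words = int(m.group(2)), int(m.group(3))
corr, sub, dele, ins, err, serr = map(float, m.group(4).split())

# The error percentages are individually rounded to one decimal,
# so allow a small tolerance.
assert abs((sub + dele + ins) - err) < 0.15
print(wer, sents, words, err)
```

Running this over every %WER line makes it easy to tabulate the systems, e.g. confirming that sMBR iteration 6 (17.1%) and the system combination (16.7%) give the best dev-set phone error rates here.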

 
