November 2014

  
Data Reading:
    
    1. Faster PFile preparation. A cross-validation set (~5%) is always selected automatically,
        eliminating the time-consuming steps of PFile concatenation and splitting.
    
    2. PDNN accepts a list of PFiles specified with wildcards, for example train.pfile.*.gz or
        train.pfile.[1-10].gz, so there is no need to concatenate PFiles beforehand.
    
    3. PFiles are always compressed. This reduces their size to roughly 1/10 of the original.
    
    4. Supports Python pickle files. Refer to examples/mnist to see how to prepare and specify pickle files.
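    The exact pickle layout PDNN expects is shown in examples/mnist; as a rough sketch only, a
    dataset of feature vectors and integer labels could be pickled and gzipped like this (the
    (features, labels) tuple layout and the file name are assumptions for illustration):

    ```python
    import gzip
    import pickle
    import numpy as np

    # Hypothetical dataset: 100 samples of 40-dim features with labels in 0..9.
    # The (features, labels) tuple layout is illustrative; see examples/mnist
    # for the layout PDNN actually expects.
    features = np.random.rand(100, 40).astype(np.float32)
    labels = np.random.randint(0, 10, size=100).astype(np.int32)

    # Write a gzip-compressed pickle file.
    with gzip.open("train.pickle.gz", "wb") as f:
        pickle.dump((features, labels), f)

    # Read it back to verify.
    with gzip.open("train.pickle.gz", "rb") as f:
        loaded_features, loaded_labels = pickle.load(f)

    print(loaded_features.shape)  # (100, 40)
    ```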

Recipes & Architecture:
    
    1. PDNN supports multi-task learning, and this enables DNN training over multiple languages, domains, dialects, etc.
    
    2. Optimized CNN recipes; the CNN architecture is modified to follow the IBM style
    
    3. Maxout network models are added
    
    4. run_rm and run_hkust are removed (not supported anymore)
    
    5. run_tedlium is added
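
    A maxout unit outputs the maximum over a group of linear activations instead of applying a
    fixed nonlinearity. A minimal numpy sketch of the activation (the function name and the
    grouping of consecutive units are illustrative, not PDNN's implementation):

    ```python
    import numpy as np

    def maxout(z, pool_size):
        """Maxout activation: max over groups of `pool_size` consecutive
        linear units. `z` has shape (batch, num_units), and num_units must
        be divisible by pool_size."""
        batch, num_units = z.shape
        assert num_units % pool_size == 0
        return z.reshape(batch, num_units // pool_size, pool_size).max(axis=2)

    z = np.array([[1.0, 3.0, -2.0, 0.5]])
    # With pool_size=2, the pairs (1.0, 3.0) and (-2.0, 0.5) reduce to 3.0 and 0.5.
    print(maxout(z, pool_size=2))
    ```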

Results:
    
    1. Updated results on TIMIT



April 2014
   
More recipes, e.g., run-dnn-fbank+pitch.sh
    
2D (time x frequency) convolution; a faster CNN implementation via the cuda-convnet wrapper
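
A 2D convolution here slides a kernel over both the time axis (frames) and the frequency axis (filterbank channels) of the input feature map. A naive numpy sketch of a "valid"-mode pass (illustrative only; PDNN's actual CNN runs on Theano or the cuda-convnet wrapper, and this version does sliding-window correlation without kernel flipping):

```python
import numpy as np

def conv2d_valid(fmap, kernel):
    """Naive 2D 'valid' sliding-window pass over a (time, frequency)
    feature map. Output shape: (t - kt + 1, f - kf + 1)."""
    t, f = fmap.shape
    kt, kf = kernel.shape
    out = np.zeros((t - kt + 1, f - kf + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(fmap[i:i + kt, j:j + kf] * kernel)
    return out

# e.g. 10 frames x 8 filterbank channels with a 3x3 kernel -> 8x6 output
fmap = np.ones((10, 8))
kernel = np.ones((3, 3))
print(conv2d_valid(fmap, kernel).shape)  # (8, 6)
```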
    
Scripts are simplified and verified, and are now more readable
    
Recipes for different datasets come with benchmark results