# Discriminative Shared Gaussian Process Latent Variable Model
**Implements the DS-GPLVM of Eleftheriadis et al. [1].**
If you use this code, please cite [1].


version: 0.9

## Author  
Stefanos Eleftheriadis

## Requirements
1. You need the GPML MATLAB package on your MATLAB path. You can download it at:
	http://gaussianprocess.org/gpml/code/matlab/release/gpml-matlab-v3.5-2014-12-08.zip
2. You also need the minFunc optimization package. It is available at:
	http://www.cs.ubc.ca/~schmidtm/Software/minFunc_2012.zip
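Once downloaded, both packages can be added to the path along these lines (a sketch only; the folder names are assumptions based on the archive names above, so adjust them to wherever you unpacked the zips):

```matlab
% Add the two dependencies to the MATLAB path.
% Folder names are assumptions -- adjust to your unpack locations.
addpath(genpath('gpml-matlab-v3.5-2014-12-08'));
addpath(genpath('minFunc_2012'));
```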

## General Instructions
The model supports three different learning/inference schemes (only one can be active at a time):

- (a) Independent Back Projections (IBP): `model.ibp = 1`
- (b) Single Back Projections (SBP): `model.sbp = 1`
- (c) Standard Back Projections (BP): `model.bp = 1` (only for a single view)
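As a sketch, a scheme is selected by setting the corresponding flag on the `model` struct before training (the struct itself is assumed to come from `initialize_dsgplvm.m`):

```matlab
% Choose exactly one learning/inference scheme -- only one flag may be set.
model.ibp = 1;   % Independent Back Projections (IBP)
model.sbp = 0;   % Single Back Projections (SBP)
model.bp  = 0;   % Standard Back Projections (single-view data only)
```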

## Load data
- `data_in`: V-dimensional cell array holding the data (NxD) from each view.
- `labels_full`: V-dimensional cell array holding the labels of the input data from each view (the labels should be the same for all views).
- `train_ind`: indices into `data_in` (per view) that form the training set.
- `val_ind`: indices into `data_in` (per view) that form the validation set (unseen during training of the model).
- `test_ind`: indices into `data_in` (per view) that form the test set (unseen during training of the model).
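A minimal sketch of the expected layout, assuming V = 2 views and random placeholder data (the sizes, class count, and splits below are illustrative only):

```matlab
% Two views with N = 100 samples each, of dimension 50 and 30 (placeholder data).
data_in = {randn(100, 50), randn(100, 30)};

% 5 classes, repeated to cover all 100 samples; identical labels for every view.
y           = repmat((1:5)', 20, 1);
labels_full = {y, y};

% Train / validation / test splits (here the same indices for both views).
train_ind = 1:60;
val_ind   = 61:80;
test_ind  = 81:100;
```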

## Parameter tuning
- `model.prior_type`: possible values: `'lda'`, `'lpp'`.
- `model.prior`: controls the strength of the prior; it has to be tuned manually.
- `model.T`: number of ADM cycles (default: 100).
- `model.max_mu`: affects the convergence of the ADM; the maximum value the penalty parameter is allowed to take (default: 1e3).
- `model.rho`: affects the convergence of the ADM; controls how the penalty parameter \mu is updated. Typical values are 1.1, 1.3 and 1.7; 1.1 works best.
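The tuning parameters above might be set as follows (the prior weight is an arbitrary placeholder, since it must be tuned per dataset; the remaining values are the defaults stated above):

```matlab
model.prior_type = 'lda';   % discriminative prior: 'lda' or 'lpp'
model.prior      = 1e-2;    % prior weight -- placeholder, tune per dataset
model.T          = 100;     % number of ADM cycles (default)
model.max_mu     = 1e3;     % cap on the ADM penalty parameter (default)
model.rho        = 1.1;     % penalty update factor; 1.1 works best
```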

## Optimization
In `ready2min.m` and `update_bp_params.m` you may want to change the number of function evaluations used during the optimization, which is controlled by:

	options.maxFunEvals = 10;  % default value is 10 in both functions

More function evaluations yield a better solution in each ADM cycle, at the cost of longer runtime.


## Outputs
- `model.out`: the predictions of the DS-GPLVM.
- `model.X`: the learned manifold.
- `model.bp_params`: the learned parameters for the back projections.
- `model.adm_params`: the learned parameters for the ADM.
- `model.hyp`: the learned hyperparameters of the generative mappings.


## Main Contents
1. `demo_ds_gplvm.m`: demo function for the DS-GPLVM.
2. `gplvm.m`: marginal likelihood and its gradients for the standard GPLVM.
3. `ds_gplvm_fun.m`: marginal likelihood and its gradients for the DS-GPLVM.
4. `loo_krls_new.m`: leave-one-out solution for the kernel ridge regression of the back mappings.
5. `ds_gplvm_adm.m`: the ADM routine for learning the DS-GPLVM.
6. `ds_gplvm_test.m`: makes predictions with a trained DS-GPLVM.
7. `initialize_dsgplvm.m`: initializes the DS-GPLVM.
8. `initialize_params.m`: initializes the back-projection and ADM parameters.
9. `update_adm_params.m`: updates the ADM parameters.
10. `update_bp_params.m`: updates the back-projection parameters.
11. `ready2min.m`: prepares the data for the optimization of the DS-GPLVM.
12. `model_classify.m`: performs the classification with the k-NN classifier.
13. `*_mod.m`: modified versions of the corresponding functions from Rasmussen's GPML package.
14. Other secondary functions.


## References
[1] Eleftheriadis, Stefanos, Ognjen Rudovic, and Maja Pantic.
"Discriminative shared Gaussian processes for multiview and view-invariant facial expression recognition."
IEEE Transactions on Image Processing, 24(1): pp. 189-204, 2015.
