======Classification======
**Classification** or **cluster analysis** groups similar objects into distinct clusters (in the EM field usually called **classes**) while minimizing the variance (of a certain parameter) within each cluster/class. During image processing, classification is used to find similar 2D images within a dataset, but also similar 3D structures within a set of structures.
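
In generic cluster-analysis terms (a standard formulation, not taken from this logic's documentation), partitioning the images $x_i$ into classes $C_1, \dots, C_K$ with class averages $\mu_k$ amounts to minimizing the within-class sum of squared deviations

$$ \sum_{k=1}^{K} \sum_{x_i \in C_k} \lVert x_i - \mu_k \rVert^2 . $$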
  
=====Usage=====
A prerequisite for a feasible **classification** is a complexity reduction of the given input dataset. This can be achieved by performing a [[:eyes:logics:pca|principal component analysis (PCA)]]. The PCA can be carried out either internally by the classification logic or beforehand by the respective PCA logic. Additionally, the user has to define the input set and the number of expected classes/clusters. The logic then splits the dataset, according to the information provided by the PCA, into the specified number of classes/clusters, aiming for an optimum of i) minimal variance within each class/cluster and ii) a maximal signal-to-noise ratio.
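
As a rough illustration of this workflow (complexity reduction with PCA, then clustering into a fixed number of classes while minimizing the within-class variance), here is a minimal sketch using NumPy and scikit-learn. It is not the implementation of this logic; the image stack, the component count, and the class count are placeholder assumptions.

<code python>
# Generic PCA + k-means sketch of the classification workflow described above.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Placeholder stack of 200 images, 64x64 pixels each (random data for the demo).
rng = np.random.default_rng(0)
images = rng.normal(size=(200, 64, 64))
flat = images.reshape(len(images), -1)       # one flattened row per image

# Complexity reduction: keep a few principal components ("Eigen images").
pca = PCA(n_components=10)                   # cf. "Number of eigen images"
coords = pca.fit_transform(flat)             # low-dimensional coordinates

# Cluster the reduced coordinates into the expected number of classes,
# minimizing the variance within each class (k-means objective).
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0)  # cf. "Number of classes"
class_ids = kmeans.fit_predict(coords)       # one class/cluster ID per image

print(class_ids[:20])
</code>

The resulting //class_ids// array plays the same role as the ClassID output listed below.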
  
===== Example =====
|< 100% 30% >|
^ Parameters ^ Description ^
| Eigen images location     | Defines whether the Eigen images used for complexity reduction are generated on the fly internally (intern) or provided as input from an external source (extern) |
| -> Number of eigen images | How many Eigen images (and therefore dimensions) should be used as components during the linear combination |
| Split up method           | Determines whether large classes should be split into smaller classes i) to obtain classes containing a similar number of images/volumes (Cluster size) or ii) to minimize the internal variance within each class, as measured by the cross-correlation coefficients (cccVariance) |
| Number of classes         | Number of resulting classes/clusters |
| Remove duplicated images  | Duplicate images identified by the classification are removed |
  
|< 100% 30% >|
^ Input   ^ Description ^
| Input   | Stack of input images |
| //Pre Eigen Images// | (Only available if //Eigen images location = extern//, i.e. Eigen images precomputed with the [[eyes:logics:PCA]] logic) Stack of Eigen images with the sum of all images and a mask as the last two images |
  
|< 100% 30% >|
^ Output ^ Description ^
| ClassID | ID of the class/cluster the image belongs to |
  
===== Concept =====
Once clusters/classes of images are found, they can be averaged (see [[eyes:logics:SumByClassNumber]]) to improve the signal-to-noise ratio substantially.((van Heel, M. (1984). Multivariate statistical classification of noisy images (randomly oriented biological macromolecules). Ultramicroscopy, 13(1-2), 165–83. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/6382731))
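
As a hedged sketch of what such averaging does (assuming roughly independent noise between class members, the signal-to-noise ratio of the average grows with the square root of the class size), class averages could be computed from a ClassID assignment as follows. The arrays are placeholders, and this is not the SumByClassNumber implementation:

<code python>
# Averaging all members of each class/cluster to improve the SNR.
import numpy as np

rng = np.random.default_rng(1)
images = rng.normal(size=(200, 64, 64))           # placeholder image stack
class_ids = rng.integers(0, 8, size=len(images))  # placeholder ClassID per image

class_averages = {
    cid: images[class_ids == cid].mean(axis=0)    # mean of all class members
    for cid in np.unique(class_ids)
}
print(len(class_averages), "class averages computed")
</code>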