I'm currently working on a project for a class, and I need to carry out a multi-temporal classification of two Landsat scenes from Central America (both the same R/P). After some pre-processing I am now ready to classify, and I would like to use the same training data for both images, as both have been corrected for atmospheric effects. Is there a way I can select training data pixel by pixel from both images? I have classified images individually in the past, but that makes change detection difficult: I cannot tell which detected changes are real and which are simply artefacts of the two classifications having been trained on different data (I will be using a nearest-means method).
I have been told that the only way to do this is to find regions in both images that are unchanged and apply the ROIs from one image to the other, but I would prefer to build a single set of training data that is then applied to both images for a simultaneous classification (see the sketch below for what I mean).
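To make concrete what I'm after, here is a rough sketch (in Python, not an ENVI workflow) of the idea: one set of ROI pixel locations, digitised on areas assumed unchanged, used to train a plain minimum-distance-to-means classifier for each date. The file names, ROI coordinates, and class labels are placeholders, not real data:

```python
# Rough sketch (not an ENVI workflow): apply ONE set of training pixel
# locations to two co-registered dates and classify each with a
# nearest-means (minimum distance) rule.  File names and the ROI
# coordinate dictionary below are placeholders.
import numpy as np
import rasterio
from scipy.spatial.distance import cdist

# ROI pixels digitised once, on areas assumed unchanged between the dates:
# class label -> list of (row, col) image coordinates
rois = {
    1: [(120, 340), (121, 341), (122, 340)],   # e.g. forest
    2: [(500, 610), (501, 611)],               # e.g. pasture
}

def classify_nearest_means(path, rois):
    with rasterio.open(path) as src:
        img = src.read().astype("float32")              # (bands, rows, cols)
    bands, rows, cols = img.shape
    labels = sorted(rois)
    # class means computed from the SAME pixel locations in every date
    means = np.array([
        img[:, [r for r, _ in rois[k]], [c for _, c in rois[k]]].mean(axis=1)
        for k in labels
    ])                                                   # (n_classes, bands)
    pixels = img.reshape(bands, -1).T                    # (n_pixels, bands)
    dist = cdist(pixels, means)                          # Euclidean distance to each class mean
    return np.array(labels)[dist.argmin(axis=1)].reshape(rows, cols)

class_t1 = classify_nearest_means("scene_date1.tif", rois)   # placeholder file name
class_t2 = classify_nearest_means("scene_date2.tif", rois)   # placeholder file name
```

For a full Landsat scene this would need tiling to keep memory down, but it shows the workflow I'd like to reproduce inside ENVI: the training pixels are defined once and drive both classifications.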
Another question: once ENVI has carried out change detection and produced contingency tables, is there a way to instruct ENVI to turn the pixels detected as changed into change classes of their own? I have been told of one method that involves some arduous band math, but is there perhaps a more streamlined approach?
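For reference, my understanding of the band-math method (and this is an assumption on my part) is that it recodes each from-to pair of classes into its own change class, something like 100*date1_class + date2_class, keeping only pixels where the two dates disagree. In Python terms, roughly:

```python
# Sketch of the from-to recoding I assume the band-math method amounts to:
# each (date-1 class, date-2 class) pair gets its own change-class code,
# and unchanged pixels are set to 0.
import numpy as np

def change_classes(class_t1, class_t2):
    """0 = unchanged; 100*c1 + c2 = changed from class c1 to class c2."""
    change = class_t1.astype(np.int32) * 100 + class_t2.astype(np.int32)
    change[class_t1 == class_t2] = 0
    return change

change_map = change_classes(class_t1, class_t2)             # e.g. class 1 -> class 2 becomes code 102
codes, counts = np.unique(change_map, return_counts=True)   # pixel count per change class
```

If ENVI can produce that kind of thematic change map directly from its change detection output, that would save the manual recoding step.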
Thank you for any assistance.