http://www.xcavator.net/
http://www.fotosearch.com/
http://www.yangsky.com/products/picseer/index.htm
Thursday, January 17, 2008
Some famous image search engines
http://www.faganfinder.com/img/
http://www.search-engine-index.co.uk/Images_Search/
http://images.google.com/
http://www.picsearch.com/
http://www.altavista.com/image/default
http://www.ask.com/?tool=img
http://www.exalead.com/image/results?q=
http://www.pixsy.com/
http://www.netvue.com/
http://www.airtightinteractive.com/projects/simple_image_search/app/
http://www.ithaki.net/images/
http://yotophoto.com/
Tuesday, January 1, 2008
Learning from Small Number of Examples
In this project, the user needs to be able to select several images as query images. I chose a discriminative model as the approach.
Based on LIBSVM, provided by Prof. Lin Chih-Jen, the algorithm is as follows:
1. Read the checked images and the un-checked images.
2. The checked images are regarded as positive examples; the un-checked images and the other images in the dataset are regarded as pseudo-negative examples.
3. If the number of positive examples is N, then 2N pseudo-negative examples are sampled.
4. For each loop:
- Construct a bag (training set) that contains the N positive examples and the 2N pseudo-negative examples.
- Use the whole image dataset as the testing set.
- Call svm_train to generate the model. Since the parameters (g, C) are adjustable, I use the tools provided by Prof. Lin Chih-Jen to find the best-performing values.
- Call svm_predict to generate the classification results, with probability estimates enabled.
- Collect the probability table.
5. Use MIN fusion to obtain the final result.
6. Rank the images by their fused probability in descending order and output the result to the system.
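The loop above can be sketched in Python. This is a minimal illustration of the bagging-plus-MIN-fusion idea, not the actual implementation: the `train_and_predict` helper is a hypothetical stand-in for the svm_train/svm_predict calls (here it just scores each image by its relative closeness to the positive centroid, so the example is self-contained); in the real system it would be replaced by LIBSVM training and probability prediction.

```python
import random

def min_fusion_retrieval(positives, unlabeled, rounds=5, seed=0):
    """Bagging with pseudo-negatives and MIN fusion.

    positives: feature vectors of the checked (query) images.
    unlabeled: feature vectors of the rest of the dataset.
    Returns indices into `unlabeled`, ranked by fused probability
    of being positive, highest first.
    """
    rng = random.Random(seed)
    n = len(positives)
    fused = [1.0] * len(unlabeled)          # MIN fusion starts at 1
    for _ in range(rounds):
        # Steps 3-4a: draw 2N pseudo-negatives from the unlabeled pool.
        negatives = rng.sample(unlabeled, min(2 * n, len(unlabeled)))
        # Steps 4b-4d: train on the bag, predict P(positive) for every image.
        # (Stand-in for svm_train / svm_predict with probability output.)
        probs = train_and_predict(positives, negatives, unlabeled)
        # Step 4e + step 5: keep the minimum probability seen per image.
        fused = [min(f, p) for f, p in zip(fused, probs)]
    # Step 6: rank by fused probability, descending.
    return sorted(range(len(unlabeled)), key=lambda i: -fused[i])

def train_and_predict(pos, neg, test):
    """Hypothetical classifier stand-in: relative closeness to the
    positive centroid, mapped to a pseudo-probability in [0, 1]."""
    def centroid(xs):
        return [sum(c) / len(xs) for c in zip(*xs)]
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
    cp, cn = centroid(pos), centroid(neg)
    eps = 1e-12
    return [dist(x, cn) / (dist(x, cp) + dist(x, cn) + eps) for x in test]
```

With two query images near (1, 1), images close to them rank ahead of images near (5, 5); MIN fusion keeps only images that every bag agrees are positive, which makes the ranking conservative.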