Private:progress-dameh

From NMSL
Revision as of 13:32, 6 May 2011

Summer 2011 (TA)

  • Research: Gram Matrix approximation

May 5

  • Recap
  • Problem: the Gram matrix in kernel methods is O(N^2), which is infeasible in both computation and memory for large-scale datasets
  • Previous Work:
    • Low rank matrix approximation:
      • Using eigendecomposition: optimal solution
        • Problem: requires O(N^3) computation to construct the low-rank matrix
        • Solution: use the Nyström method to approximate the eigendecomposition of the matrix, which requires O(m^2 N) where m << N.
          • Problem: despite reducing the computational complexity, this method still needs O(N^2) memory to store the matrix, so the next methods reduce both computation and memory.
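The Nyström step above can be sketched as follows. This is a minimal illustration, assuming an RBF kernel and uniform landmark sampling; the function names are mine, not from the original work:

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.1):
    """Gram block K[i, j] = exp(-gamma * ||A[i] - B[j]||^2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom(X, m, gamma=0.1, seed=0):
    """Rank-m Nystrom factors of the Gram matrix: K ~= C @ pinv(W) @ C.T.
    Costs O(m^2 N) instead of the O(N^3) full eigendecomposition."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)  # m landmark points
    C = rbf_kernel(X, X[idx], gamma)                 # N x m block
    W = C[idx]                                       # m x m landmark block
    return C, np.linalg.pinv(W), idx

X = np.random.default_rng(1).normal(size=(500, 10))
C, W_pinv, idx = nystrom(X, m=50)
# Materializing the full product is exactly the O(N^2) storage cost the
# note above points out; here it is done only to inspect the sketch.
K_approx = C @ W_pinv @ C.T
```

Keeping only `C` and `W_pinv` needs O(Nm) memory; the O(N^2) cost reappears only if the approximate Gram matrix is formed explicitly.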
    • Efficient implementation for computing the kernel matrix:
      • Using Spatial Indexing:

using Z-order, H-order, kd-trees, and other spatial indexing techniques to order the points in space, then using a sliding window so that the kernel function is computed only between points inside that window.
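The ordering-plus-window idea can be sketched as below. This is a toy 2-D version under my own assumptions (a Z-order/Morton key over a quantized grid, an RBF kernel, a sparse dict of pairs); a real implementation would vectorize and generalize:

```python
import numpy as np

def morton_key(ix, iy, bits=16):
    """Interleave the bits of two grid indices into one Z-order key."""
    key = 0
    for b in range(bits):
        key |= ((ix >> b) & 1) << (2 * b) | ((iy >> b) & 1) << (2 * b + 1)
    return key

def windowed_kernel(X, window, gamma=1.0, cells=1024):
    # Quantize to a grid and sort by Morton key: the O(N log N) preprocessing.
    lo, hi = X.min(0), X.max(0)
    G = ((X - lo) / (hi - lo + 1e-12) * (cells - 1)).astype(int)
    order = np.argsort([morton_key(ix, iy) for ix, iy in G])
    Xs = X[order]
    pairs = {}
    for i in range(len(Xs)):           # kernel only inside the sliding window
        for j in range(i + 1, min(i + 1 + window, len(Xs))):
            d2 = ((Xs[i] - Xs[j]) ** 2).sum()
            pairs[(order[i], order[j])] = np.exp(-gamma * d2)
    return pairs  # sparse dict of entries instead of a dense O(N^2) matrix

X = np.random.default_rng(0).random((200, 2))
K = windowed_kernel(X, window=8)
```

Only O(N * window) kernel entries are computed, which is the point of the scheme; the bit-interleaving step is also where it breaks down in very high dimensions, as noted next.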

        • Problems: requires O(N log N) preprocessing to order the points, fails for very high-dimensional data, and is hard to parallelize
        • Solution: we propose a method that works well for large-scale, high-dimensional data and overcomes the drawbacks of all previous work: it is easy to parallelize, requires linear-time preprocessing, and reduces both computation and memory from quadratic to subquadratic:
      • Using spatial hashing or locality-sensitive hashing (LSH):

where the kernel function and the clustering methods operate on each bucket individually.
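The bucket-wise scheme can be sketched as follows. The grid-cell hash here is a stand-in for any of the LSH families listed below, and the helper names are illustrative, not from the original method:

```python
import numpy as np
from collections import defaultdict

def spatial_hash(X, cell=0.25):
    """Assign each point index to a bucket keyed by its grid cell."""
    buckets = defaultdict(list)
    for i, x in enumerate(X):
        buckets[tuple((x // cell).astype(int))].append(i)
    return buckets

def bucketed_gram(X, cell=0.25, gamma=1.0):
    """One small Gram block per bucket instead of one N x N matrix."""
    blocks = {}
    for key, idx in spatial_hash(X, cell).items():
        B = X[idx]
        d2 = ((B[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        blocks[key] = (idx, np.exp(-gamma * d2))
    return blocks  # total size sum_b n_b^2 << N^2 when buckets are balanced

X = np.random.default_rng(2).random((300, 2))
blocks = bucketed_gram(X)
```

Because buckets are independent, both the hashing pass and the per-bucket kernel computations parallelize trivially, which matches the claim above.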

        • LSH families:
          • Random projection: Hamming distance, Euclidean distance, cosine distance
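The cosine-distance member of this family (often called SimHash) is simple enough to sketch in a few lines; the implementation below is my own minimal version, with one random hyperplane per hash bit:

```python
import numpy as np

def simhash(X, n_bits=16, seed=0):
    """One sign bit per random hyperplane, packed into an integer code.
    Points with similar directions tend to share codes (cosine LSH)."""
    planes = np.random.default_rng(seed).normal(size=(X.shape[1], n_bits))
    bits = (X @ planes) >= 0
    return (bits * (1 << np.arange(n_bits))).sum(1)

X = np.random.default_rng(1).normal(size=(100, 8))
codes = simhash(X)
```

Since only the sign of each projection matters, the code is invariant to positive scaling of a point, which is exactly the cosine-distance behavior.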

Problem: these families assume the data is uniformly distributed. Solution:

          • Spectral hashing: overcomes the problems of the previous families
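A heavily simplified flavor of spectral hashing, under my own assumptions: project onto principal directions and threshold the lowest-frequency sinusoidal eigenfunction along each, so the bits follow the data's actual spread. The full method additionally ranks analytical eigenvalues to choose which (direction, frequency) bits to keep:

```python
import numpy as np

def spectral_hash_bits(X, n_bits=4):
    """Toy spectral-hashing-style bits: PCA, then threshold one
    sinusoidal eigenfunction per principal direction."""
    Xc = X - X.mean(0)
    _, V = np.linalg.eigh(np.cov(Xc.T))      # PCA directions (ascending)
    P = Xc @ V[:, ::-1][:, :n_bits]          # top-n_bits projections
    lo, hi = P.min(0), P.max(0)
    # cos(pi * t) on t in [0, 1] flips sign at the midpoint of each range
    return np.cos(np.pi * (P - lo) / (hi - lo + 1e-12)) > 0

bits = spectral_hash_bits(np.random.default_rng(0).normal(size=(200, 6)))
```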

General problem with random projection: it does not take into account whether the data lie in a kernel space. Solution:

          • Kernelized LSH. Problem: more complex to compute
    • In progress: Comparing different LSH families
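The kernelized hash bits mentioned above can be sketched roughly as follows. This is simplified from the Kulis–Grauman style construction and the names are illustrative: the hyperplane lives in kernel space, so every bit needs kernel evaluations against a landmark sample plus a whitening step, which is why it is more complex to compute:

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    return np.exp(-gamma * ((A[:, None] - B[None]) ** 2).sum(-1))

def klsh_bits(X, landmarks, n_bits=8, t=5, gamma=0.5, seed=0):
    """h_b(x) = sign(sum_i W[i, b] * k(x, landmark_i)) for each bit b."""
    rng = np.random.default_rng(seed)
    Kp = rbf(landmarks, landmarks, gamma)
    # K^{-1/2} via eigendecomposition of the (regularized) landmark Gram
    vals, vecs = np.linalg.eigh(Kp + 1e-8 * np.eye(len(Kp)))
    K_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    W = np.zeros((len(landmarks), n_bits))
    for b in range(n_bits):
        subset = rng.choice(len(landmarks), size=t, replace=False)
        e = np.zeros(len(landmarks))
        e[subset] = 1.0
        W[:, b] = K_inv_sqrt @ e   # approximates a random direction in kernel space
    return rbf(X, landmarks, gamma) @ W >= 0

X = np.random.default_rng(1).normal(size=(100, 4))
L = np.random.default_rng(2).normal(size=(30, 4))
bits = klsh_bits(X, L)
```

Each query costs O(p) kernel evaluations against the p landmarks per hash code, versus a single dot product for plain random projection.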


Spring 2011 (TA/RA)

  • Courses:
    • CMPT 885: Special Topics in Computer Architecture
    • CMPT 886: Special Topics in Operating Systems


Fall 2010 (FAS-GF/RA)

  • Courses:
    • CMPT 705: Design/Analysis Algorithms
    • CMPT 820: Multimedia Systems
  • Worked on Gram matrix approximation and high-performance/large-scale application case studies


Summer 2010 (TA)

  • Worked on large-scale kernel methods


Spring 2010 (RA)

  • Courses:
    • CMPT 768: Sound Synthesis
  • Worked on locality-sensitive hashing families, their applications, and implementations