Difference between revisions of "Private:progress-neshat"

From NMSL
Revision as of 20:18, 15 January 2011

Spring 2011 (GF)

  • Courses: None

worked on: Large-scale data processing with MapReduce on GPU/CPU hybrid systems

Jan 17

  • Spent some days figuring out how to use the Mars framework and run some samples, but could not fully understand it. The following work has been done:
    • Configured the system (Windows) to run Mars, including CUDA and SDK installation as well as VS9 configuration.
    • Corrected some typos in the code (library mismatches)
    • Asked the authors about the problems, and got this answer: "I must apologize that mars_v2 is buggy and complex, and we don't maintain the code base any more, I strongly recommend you to try the latest version on linux"
    • Tried to install mars_v2 on Linux, but it is still buggy and complex. It seems this framework can run only with certain configurations, and with older versions of CUDA.
  • Explored Mars to understand its algorithm, and found that in co-processing (hybrid) mode it partitions the input data into two parts, one for CPU processing and the other for GPU processing. After the map stage, the intermediate data is merged on the CPU side, then dispatched again to CPU workers and GPU workers.
  • Looked at Phoenix, another system for MapReduce programming, from Stanford. It was the comparison baseline for Mars.
  • Spent 2 days writing a resume and preparing for a YouTube interview.
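The hybrid co-processing scheme described above (split the input between CPU and GPU, merge map output on the CPU side, then re-dispatch for reduce) can be sketched roughly as follows. This is an illustrative Python sketch, not Mars's actual code: plain Python stands in for both device kinds, and the `gpu_share` ratio and word-count workload are made up.

```python
# Hypothetical sketch of Mars-style hybrid co-processing: partition input
# between "CPU" and "GPU" workers, merge map output on the CPU side, then
# dispatch grouped data again for the reduce stage.
from collections import defaultdict

def hybrid_mapreduce(records, map_fn, reduce_fn, gpu_share=0.5):
    # 1. Partition the input data into two parts.
    split = int(len(records) * gpu_share)
    gpu_part, cpu_part = records[:split], records[split:]

    # 2. Map stage on both sides (sequential here for clarity).
    mapped = []
    for rec in gpu_part:          # would be offloaded to the GPU
        mapped.extend(map_fn(rec))
    for rec in cpu_part:          # stays on the CPU workers
        mapped.extend(map_fn(rec))

    # 3. Merge on the CPU side: group intermediate pairs by key.
    groups = defaultdict(list)
    for key, value in mapped:
        groups[key].append(value)

    # 4. Dispatch groups again to the workers for the reduce stage.
    return {key: reduce_fn(key, values) for key, values in groups.items()}

# Word count as the classic example workload.
counts = hybrid_mapreduce(
    ["a b a", "b c"],
    map_fn=lambda line: [(w, 1) for w in line.split()],
    reduce_fn=lambda key, values: sum(values),
)
print(counts)  # {'a': 2, 'b': 2, 'c': 1}
```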


Jan 10

  • Explored related work and potential ideas


Fall 2010 (TA)

  • Courses:
    • CMPT-820: Multimedia Systems
    • CMPT-825: NLP


  • worked on:
    • effective advertising in video
  • Submissions:
    • SmartAd: a smart autonomous system for effective advertising in video (ICME 11)


Summer 2010 (RA)

  • Courses:
    • Writing for publication


  • worked on:
    • Estimating the click-through rate for new ads with semantic and feature based similarity algorithms


Spring 2010 (RA)

  • Courses:
    • CMPT-886: Special topics in operating systems
  • worked on:
    • Accelerating online auctions using GPU
    • Estimating the click-through rate for new ads with semantic and feature based similarity algorithms

  • Submitted:
    • Accelerating online auctions with Optimized Parallel GPU based algorithms: Accelerating Vickrey-Clarke-Groves (VCG) Mechanism (proposal for GPU Gems book)
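For context on the mechanism named in the submission above: in its simplest case, a single-item auction, VCG reduces to a second-price (Vickrey) auction. This is a hedged sketch of that base case only, not the submission's GPU code; the bidder names and values are illustrative.

```python
# VCG for a single-item auction: the winner is the highest bidder, and the
# VCG payment equals the welfare the others would get without the winner
# (losers receive nothing either way), i.e. the second-highest bid.
def vcg_single_item(bids):
    """bids: {bidder: value}. Returns (winner, payment)."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner = ranked[0]
    payment = bids[ranked[1]] if len(ranked) > 1 else 0
    return winner, payment

winner, pay = vcg_single_item({"alice": 10, "bob": 7, "carol": 4})
print(winner, pay)  # alice 7
```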


Fall 2009 (TA)

  • Courses:
    • CMPT-705: Algorithms
    • CMPT-771: Internet Architecture and Protocols


  • worked on:
    • implementing FEC on a mobile TV testbed