<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en-CA">
	<id>https://nmsl.cs.sfu.ca/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Kianoosh</id>
	<title>NMSL - User contributions [en-ca]</title>
	<link rel="self" type="application/atom+xml" href="https://nmsl.cs.sfu.ca/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Kianoosh"/>
	<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php/Special:Contributions/Kianoosh"/>
	<updated>2026-04-07T06:28:24Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.35.1</generator>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=pCDN&amp;diff=2900</id>
		<title>pCDN</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=pCDN&amp;diff=2900"/>
		<updated>2009-08-13T05:07:37Z</updated>

		<summary type="html">&lt;p&gt;Kianoosh: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Peer-assisted Content Distribution Network''' &lt;br /&gt;
&lt;br /&gt;
This project employs the peer-to-peer (P2P) computing paradigm in designing large-scale content distribution systems. The P2P paradigm provides: (i) improved scalability by aggregating resource contributions from peers (end user machines) and reducing the reliance on centralized servers, (ii) reduced cost by utilizing already-deployed resources and eliminating the need for expensive infrastructure, and (iii) rapid deployability by performing all processing at the end systems. &lt;br /&gt;
&lt;br /&gt;
Major content distribution networks, such as Akamai, consider the P2P paradigm a real threat to their content distribution business, because it may deliver similar services at a fraction of the cost. However, several research challenges must be addressed before the P2P paradigm can achieve this potential. In this research, we tackle these challenges. Our goal is to develop a fully functional and reliable P2P content distribution system, which we call pCDN. Several steps have been taken toward that goal; in fact, we already have a beta version of pCDN 1.0.&lt;br /&gt;
&lt;br /&gt;
pCDN will provide high-quality multimedia content, support heterogeneous clients, impose minimal load on the expensive inter-ISP links, provide on-demand as well as live streaming services, ensure data integrity, implement digital rights management, among other features. All features are based on novel algorithms developed by our group. An overview of pCDN and its features can be found in this [http://www.cs.sfu.ca/~mhefeeda/Papers/pCDN07.pdf White Paper.] The white paper also summarizes the main differences between pCDN and common P2P file-sharing systems such as BitTorrent and Gnutella. &lt;br /&gt;
&lt;br /&gt;
pCDN is developed in partnership with the [http://www.cbc.ca Canadian Broadcasting Corporation] (CBC). CBC is the largest Internet content provider in Canada, with millions of online users consuming a huge amount of bandwidth, which costs CBC millions of dollars each year. The objective of pCDN is to offset some of these costs while providing better streaming services to clients. pCDN 1.0 is currently in final testing by CBC before its public release. Testing is being performed on small Internet streaming services, and the system will gradually evolve to support larger-scale services. &lt;br /&gt;
&lt;br /&gt;
Funding for the pCDN project is provided by a research grant from CBC, CRD and RTI grants from [http://www.nserc.ca NSERC], and a few [http://www.mitacs.ca MITACS] Research Internships. We appreciate their support. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== People ==&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.sfu.ca/~mhefeeda/ Mohamed Hefeeda] (Assistant Professor)&lt;br /&gt;
&lt;br /&gt;
* François Conway, (CBC, Senior Director, Technology, Strategy and Planning)&lt;br /&gt;
&lt;br /&gt;
* Bernard Jules (CBC,  Senior Project Manager, Internet and New Media Technology)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* [http://www.sfu.ca/~cha16/ ChengHsin Hsu] (PhD student)&lt;br /&gt;
&lt;br /&gt;
* [http://www.sfu.ca/~aah10/ Ahmed Hamza] (PhD student)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.sfu.ca/~kma26/personal/ Kianoosh Mokhtarian] (MSc Student)&lt;br /&gt;
&lt;br /&gt;
* Patrick Morin (CBC, Technical Support, Internet and New Media Technology)&lt;br /&gt;
&lt;br /&gt;
* Patrice Charbonneau (CBC, Technical Support, Internet and New Media Technology)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Vikas Kumar (Graduate Research Assistant/Software Engineer, May -- July 2008) [[Private:vikas_kumar|Progress Report]]&lt;br /&gt;
&lt;br /&gt;
* Nitin Chiluka (Research Assistant/Software Engineer, December 2007 -- May 2008)&lt;br /&gt;
&lt;br /&gt;
* Pouya Alagheband (NSERC Undergraduate Research Awards, Summer 2007) &lt;br /&gt;
&lt;br /&gt;
* Nicolas Gomez (NSERC Undergraduate Research Awards, Summer 2007)&lt;br /&gt;
&lt;br /&gt;
* Osama Saleh (MSc Student, Graduated Fall 2006)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== On-going Research Problems ==&lt;br /&gt;
&lt;br /&gt;
* [[Private:pCDN:Systems Issues|Systems Issues (Login Required)]]&lt;br /&gt;
&lt;br /&gt;
* [[Private:pCDN:Peer Matching| ISP-Friendly Peer Matching (Login Required)]]&lt;br /&gt;
&lt;br /&gt;
* [[Private:pCDN:NAT|Comprehensive NAT Traversal (Login Required)]]&lt;br /&gt;
&lt;br /&gt;
* [[Private:pCDN:DRM| Digital Rights Management (Login Required)]]&lt;br /&gt;
&lt;br /&gt;
* [[Private:pCDN:networkCoding| Using Network Coding in pCDN (Login Required)]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Publications ==&lt;br /&gt;
&lt;br /&gt;
* C. Hsu, N. Chiluka, and M. Hefeeda, [http://www.cs.sfu.ca/~mhefeeda/Papers/sigcomm08poster_abstract.pdf ISP-Friendly Peer Matching Algorithms], ACM SIGCOMM'08 Poster, Seattle, WA, August 2008.&lt;br /&gt;
&lt;br /&gt;
* C. Hsu and M. Hefeeda, [http://www.cs.sfu.ca/~mhefeeda/Papers/tomccap08_rd.pdf On the Accuracy and Complexity of Rate-Distortion Models for FGS-encoded Video Sequences], ACM Transactions on Multimedia Computing, Communications, and Applications, 4(2), Article 15, 22 Pages, May 2008.&lt;br /&gt;
&lt;br /&gt;
* C. Hsu and M. Hefeeda, [http://www.cs.sfu.ca/~mhefeeda/Papers/tom08b.pdf Partitioning of Multiple Fine-Grained Scalable Video Sequences Concurrently Streamed to Heterogeneous Clients], IEEE Transactions on Multimedia, 10(3), pp. 457--469, April 2008. &lt;br /&gt;
&lt;br /&gt;
* M. Hefeeda and C. Hsu, [http://www.cs.sfu.ca/~mhefeeda/Papers/tomccap08_fgs.pdf Rate-Distortion Optimized Streaming of Fine-Grained Scalable Video Sequences], ACM Transactions on Multimedia Computing, Communications, and Applications, 4(1), Article 2, 28 Pages, January 2008.   &lt;br /&gt;
&lt;br /&gt;
* C. Hsu and M. Hefeeda, [http://www.cs.sfu.ca/~mhefeeda/Papers/tom08.pdf Optimal Coding of Multi-layer and Multi-version Video Streams], IEEE Transactions on Multimedia, 10(1), pp. 121--131, January 2008. &lt;br /&gt;
&lt;br /&gt;
* B. Jules and M. Hefeeda, [http://www.cs.sfu.ca/~mhefeeda/Papers/pCDN07.pdf pCDN: Peer-assisted Content Distribution Network], CBC/Radio-Canada Technology Review Magazine, Issue 4, pp. 1--14, July 2007. (Invited, also published in French).&lt;br /&gt;
&lt;br /&gt;
* Y. Tu, J. Sun, M. Hefeeda, Y. Xia, S. Prabhakar, [http://www.cs.sfu.ca/~mhefeeda/Papers/tomccap05.pdf An Analytical Study of Peer-to-Peer Media Streaming Systems], ACM Transactions on Multimedia Computing,  Communications, and Applications, 1(4),  pp. 354--376, November 2005.&lt;br /&gt;
&lt;br /&gt;
* M. Hefeeda, A. Habib, D. Xu, B. Bhargava, B. Botev, [http://www.cs.sfu.ca/~mhefeeda/Papers/mmsj05.pdf CollectCast: A Peer-to-Peer Service for Media Streaming],  ACM/Springer Multimedia Systems Journal, 11(1), pp. 68--81, November 2005.&lt;br /&gt;
&lt;br /&gt;
* C. Hsu and M. Hefeeda, [http://www.cs.sfu.ca/~mhefeeda/Papers/iwqos07.pdf Structuring Multi-Layer Scalable Streams to Maximize Client-Perceived Quality], In Proc. of IEEE International Workshop on Quality of Service (IWQoS'07), pp. 182--187, Chicago, IL, June 2007.&lt;br /&gt;
&lt;br /&gt;
* C. Hsu and M. Hefeeda, [http://www.cs.sfu.ca/~mhefeeda/Papers/nossdav07.pdf Optimal Partitioning of Fine-Grained Scalable Video Streams], In Proc. of ACM International Workshop on Network and Operating Systems Support for Digital Audio &amp;amp; Video (NOSSDAV'07), pp. 63--68, Urbana-Champaign, IL, June 2007.&lt;br /&gt;
&lt;br /&gt;
* C. Hsu and M. Hefeeda, [http://www.cs.sfu.ca/~mhefeeda/Papers/mmcn07.pdf Optimal Bit Allocation for Fine-Grained Scalable Video Sequences in Distributed Streaming Environments], In Proc. of 14th ACM/SPIE Multimedia Computing and Networking Conference (MMCN'07), pp. 1--12, San Jose, CA, Jan 2007. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Software ==&lt;br /&gt;
&lt;br /&gt;
* [[pCDN:Release|Latest Released Version]]&lt;br /&gt;
&lt;br /&gt;
* [[pCDN:Installation|Installation Guide]]&lt;br /&gt;
&lt;br /&gt;
* [[pCDN:Feature|Features List]]&lt;br /&gt;
&lt;br /&gt;
* [https://cs-svn.cs.surrey.sfu.ca/nsl/browser/pCDN Browse Source Code (Subversion Server)]&lt;br /&gt;
&lt;br /&gt;
* [[pCDN:Faq|FAQ]]: Please check the FAQ page before submitting a bug report.&lt;br /&gt;
&lt;br /&gt;
* [[pCDN:Bugreport|How to Report a Bug]]&lt;br /&gt;
&lt;br /&gt;
* [[pCDN:QA|Quality Assurance]]&lt;br /&gt;
&lt;br /&gt;
* [[pCDN:develop|Developing new features for pCDN]]: We describe the tools and conventions used in software development. &lt;br /&gt;
&lt;br /&gt;
== Documents ==&lt;br /&gt;
&lt;br /&gt;
* [[pCDN:Minutes|Meeting Minutes]]&lt;br /&gt;
&lt;br /&gt;
* [[pCDN:Testplan|Software Test Plan]]&lt;br /&gt;
&lt;br /&gt;
* [[pCDN:Emulator|Stress-test Emulator]]&lt;br /&gt;
&lt;br /&gt;
* [[pCDN:License|Libraries Used and their Licenses]]&lt;br /&gt;
&lt;br /&gt;
* [[pCDN:Logfile|Log File]]&lt;br /&gt;
&lt;br /&gt;
* [[pCDN:Port|Port Assignment]]&lt;br /&gt;
&lt;br /&gt;
* [[pCDN:Backlog|Scrum Backlog]]&lt;br /&gt;
&lt;br /&gt;
* [[pCDN:Progress|Progress and Major Milestones]]&lt;br /&gt;
&lt;br /&gt;
* An outdated [[media:pcdn_old_design.doc|design document]]. We are revising it.&lt;/div&gt;</summary>
		<author><name>Kianoosh</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=Modeling_and_Caching_of_P2P_Traffic&amp;diff=2899</id>
		<title>Modeling and Caching of P2P Traffic</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=Modeling_and_Caching_of_P2P_Traffic&amp;diff=2899"/>
		<updated>2009-08-13T05:07:15Z</updated>

		<summary type="html">&lt;p&gt;Kianoosh: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Peer-to-peer (P2P) file sharing systems generate a major portion of the Internet traffic, and this portion is expected to increase in the future. The sheer volume and expected high growth of P2P traffic have negative consequences, including: (i) significantly increased load on the Internet backbone and hence higher chances of congestion; and (ii) increased cost for Internet Service Providers (ISPs) and hence higher service charges for all Internet users. &lt;br /&gt;
&lt;br /&gt;
A potential solution for alleviating these negative impacts is to cache a fraction of the P2P traffic such that future requests for the same objects can be served from a cache in the requester's autonomous system (AS). Caching in the Internet has mainly been considered for web and video streaming traffic, with little attention to P2P traffic. Many caching algorithms for web traffic and for video streaming systems have been proposed and analyzed. Directly applying such algorithms to cache P2P traffic may not yield the best cache performance, because of the different traffic characteristics and caching objectives. For instance, reducing user-perceived access latency is a key objective for&lt;br /&gt;
web caches. Consequently, web caching algorithms often incorporate information about the cost (latency) of a cache miss when deciding which object to cache or evict. Although latency is important to P2P users, the goal of a P2P cache is often focused on the ISP's primary concern; namely, the amount of bandwidth consumed by large P2P transfers. Consequently, the byte hit rate, i.e., the ratio of the number of bytes served from the cache to the total number of transferred bytes, is more important than&lt;br /&gt;
latency. &lt;br /&gt;
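As a concrete illustration of the byte hit rate metric defined above, the following sketch computes it from the bytes served by the cache and the total bytes transferred (the numbers are hypothetical, chosen for illustration only, not measurements from our traces):

```python
# Byte hit rate: fraction of all transferred bytes that were served
# from the cache rather than fetched over the external (expensive) links.
# All numbers below are hypothetical, for illustration only.

def byte_hit_rate(bytes_from_cache, total_bytes):
    """Return the byte hit rate as a fraction in [0, 1]."""
    if total_bytes == 0:
        return 0.0
    return bytes_from_cache / total_bytes

# A cache that served 3 GB out of 10 GB of P2P traffic:
rate = byte_hit_rate(3 * 10**9, 10 * 10**9)
print(rate)  # 0.3
```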
&lt;br /&gt;
We are developing caching algorithms that capitalize on the P2P traffic characteristics. We are also exploring the potential of cooperative caching of P2P traffic, where multiple caches deployed in different ASes (which could have a peering relationship) or within a large AS (e.g., a Tier-1 ISP) cooperate to serve traffic from each other's clients. Cooperation reduces the load on expensive inter-ISP links. Furthermore, we are implementing all of our algorithms and ideas in a prototype caching system. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== People ==&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.sfu.ca/~mhefeeda/ Mohamed Hefeeda] &lt;br /&gt;
&lt;br /&gt;
* [http://www.sfu.ca/~cha16/ ChengHsin Hsu] (PhD student)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.sfu.ca/~kma26/personal/ Kianoosh Mokhtarian] (MSc Student)&lt;br /&gt;
&lt;br /&gt;
* Behrooz Noorizadeh (MSc Student, Graduated Fall 2007)&lt;br /&gt;
&lt;br /&gt;
* Osama Saleh (MSc Student, Graduated Fall 2006)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Publications == &lt;br /&gt;
&lt;br /&gt;
* M. Hefeeda, C. Hsu, and K. Mokhtarian, [http://www.cs.sfu.ca/~mhefeeda/Papers/sigcomm08demo_abstract.pdf pCache: A Proxy Cache for Peer-to-Peer Traffic], ACM SIGCOMM'08 Technical Demonstration, Seattle, WA, August 2008. [http://www.cs.sfu.ca/~mhefeeda/Papers/sigcomm08demo.pdf Poster: pdf] [http://www.cs.sfu.ca/~mhefeeda/Papers/sigcomm08demo.ppt Poster: ppt].&lt;br /&gt;
&lt;br /&gt;
* M. Hefeeda and O. Saleh, Traffic Modeling and Proportional Partial Caching for Peer-to-Peer Systems, IEEE/ACM Transactions on Networking, Accepted October 2007.&lt;br /&gt;
&lt;br /&gt;
* M. Hefeeda and B. Noorizadeh, Cooperative Caching: The Case for P2P Traffic, In Proc. of IEEE Conference on Local Computer Networks (LCN'08), Montreal, Canada, October 2008.&lt;br /&gt;
&lt;br /&gt;
* O. Saleh and M. Hefeeda, [http://www.cs.sfu.ca/~mhefeeda/Papers/icnp06.pdf Modeling and Caching of Peer-to-Peer Traffic], In Proc. of IEEE International Conference on Network Protocols (ICNP'06), pp. 249--258, Santa Barbara, CA, November 2006. (Acceptance: 14%)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== pCache Software ==&lt;br /&gt;
[[Image:caching.jpg|frame|right|P2P Caching]]&lt;br /&gt;
We have designed and implemented a proxy cache for P2P traffic, which we call&lt;br /&gt;
pCache. pCache is to be deployed by autonomous systems (ASes) or ISPs that&lt;br /&gt;
are interested in reducing the burden of P2P traffic. pCache would be deployed&lt;br /&gt;
at or near the gateway router of an AS.  At a high-level, a client&lt;br /&gt;
participating in a particular P2P network issues a request to download an&lt;br /&gt;
object. This request is intercepted by pCache. If the requested object or parts&lt;br /&gt;
of it are stored in the cache, they are served to the requesting client. This&lt;br /&gt;
saves bandwidth on the external (expensive) links to the Internet.  If a part&lt;br /&gt;
of the requested object is not found in the cache, the request is forwarded to&lt;br /&gt;
the P2P network. When the response comes back, pCache may store a copy of the&lt;br /&gt;
object for future requests from other clients in its AS. Clients inside the AS&lt;br /&gt;
as well as external clients are not aware of pCache, i.e., pCache is&lt;br /&gt;
fully transparent in both directions.&lt;br /&gt;
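The hit/miss handling described above can be sketched as follows. This is a simplified illustration with hypothetical function names, not pCache's actual C++ interfaces: a requested byte range is split into segments already in the cache (served locally) and segments that must be forwarded to the P2P network.

```python
# Simplified sketch of the partial-hit logic (hypothetical API, not the
# actual pCache C++ code): given the byte ranges of an object already in
# the cache, split an incoming request range into cache hits (served
# locally) and misses (forwarded to the P2P network).

def split_request(cached_ranges, req_start, req_end):
    """Return (hits, misses) as lists of half-open (start, end) ranges."""
    hits, misses = [], []
    pos = req_start
    for c_start, c_end in sorted(cached_ranges):
        if c_end <= pos or c_start >= req_end:
            continue  # this cached piece does not overlap the request
        if c_start > pos:
            misses.append((pos, c_start))  # gap before this cached piece
        hits.append((max(c_start, pos), min(c_end, req_end)))
        pos = min(c_end, req_end)
    if pos < req_end:
        misses.append((pos, req_end))  # tail of the request not cached
    return hits, misses

# Object bytes [0,100) and [300,400) are cached; a client asks for [50,350):
hits, misses = split_request([(0, 100), (300, 400)], 50, 350)
print(hits)    # [(50, 100), (300, 350)]  -> served from the cache
print(misses)  # [(100, 300)]             -> forwarded to the P2P network
```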
&lt;br /&gt;
Our C++ implementation of pCache has more than 11,000 lines of code. We have&lt;br /&gt;
rigorously validated and evaluated the performance of pCache as well as its&lt;br /&gt;
impacts on ISPs and clients.  Our experimental results show that pCache&lt;br /&gt;
benefits both the clients and the ISP in which the cache is deployed, without&lt;br /&gt;
hurting the performance of the P2P networks.  Specifically, clients behind the&lt;br /&gt;
cache achieve much higher download speeds than other clients running in the&lt;br /&gt;
same conditions without the cache.  In addition, a significant portion of the&lt;br /&gt;
traffic is served from the cache, which reduces the load on the expensive WAN&lt;br /&gt;
links for the ISP.  Our results also show that the cache does not reduce the&lt;br /&gt;
connectivity of clients behind it, nor does it reduce their upload speeds.&lt;br /&gt;
This is important for the whole P2P network, because reduced connectivity could&lt;br /&gt;
lead to decreased availability of peers and the content stored on them. &lt;br /&gt;
&lt;br /&gt;
The following figure shows the main components of pCache. A brief description &lt;br /&gt;
of each component is given on the [[pCacheOverview|pCache overview page]]. &lt;br /&gt;
Detailed design and performance evaluations are presented in &lt;br /&gt;
[http://nsl.cs.sfu.ca/papers/conext08_tr.pdf  our technical report]. &lt;br /&gt;
[[Image:pCache-design.jpg|center|Components]]&lt;br /&gt;
&lt;br /&gt;
=== Browse and Download Code === &lt;br /&gt;
&lt;br /&gt;
We are continuously improving our pCache implementation. The latest development branch can be browsed at [https://cs-svn.cs.surrey.sfu.ca/nsl/browser/p2pcache our subversion server]. &lt;br /&gt;
&lt;br /&gt;
pCache code is released under [http://www.gnu.org/licenses/gpl-3.0.txt GPLv3] in two parts: the kernel source and the application. The Linux&lt;br /&gt;
kernel part consists of all patches required to support a transparent proxy, which&lt;br /&gt;
simplifies setting up the required environment. This patched kernel contains code&lt;br /&gt;
from the [http://www.kernel.org/ mainstream Linux kernel], [http://www.netfilter.org/ netfilter], &lt;br /&gt;
[http://www.balabit.com/support/community/products/tproxy/ tproxy], and [http://www.linux-l7sw.org/ layer7switch]. The pCache application implements the components described above. Moreover, a patched iptables is also provided that accepts the additional arguments supported by [http://www.balabit.com/support/community/products/tproxy/ tproxy].&lt;br /&gt;
&lt;br /&gt;
* Linux Kernel [[media:linux-2.6.23.tgz]]&lt;br /&gt;
&lt;br /&gt;
* pCache Snapshot [[media:pCache-0.0.1.tgz]]&lt;br /&gt;
&lt;br /&gt;
* iptables (patched for additional arguments) [[media:iptables-1.4.0rc1.tgz]]&lt;br /&gt;
&lt;br /&gt;
To set up your pCache system, please follow these simple steps:&lt;br /&gt;
&lt;br /&gt;
* Download the Linux kernel, then compile and install it. Note that this tar file also includes a sample .config file.&lt;br /&gt;
&lt;br /&gt;
* Download the patched iptables, then compile and install it. (Please see the INSTALL file included in the tar file for installation instructions.)&lt;br /&gt;
&lt;br /&gt;
* Download the pCache source code and compile it to produce a binary called pCache.&lt;br /&gt;
&lt;br /&gt;
To run pCache, first configure the forwarding table. For example, we use the following script to configure ours:&lt;br /&gt;
&lt;br /&gt;
 iptables -t mangle -N DIVERT&lt;br /&gt;
 # bypass low ports&lt;br /&gt;
 iptables -t mangle -A PREROUTING -p tcp --sport 1:1024 -j ACCEPT&lt;br /&gt;
 iptables -t mangle -A PREROUTING -p tcp --dport 1:1024 -j ACCEPT&lt;br /&gt;
 iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT&lt;br /&gt;
 iptables -t mangle -A PREROUTING -p tcp -j TPROXY --tproxy-mark 0x1/0x1 --on-port 7072&lt;br /&gt;
 iptables -t mangle -A DIVERT -j MARK --set-mark 1&lt;br /&gt;
 iptables -t mangle -A DIVERT -j ACCEPT&lt;br /&gt;
 ip rule add fwmark 1 lookup 100&lt;br /&gt;
 ip route add local 0.0.0.0/0 dev lo table 100&lt;br /&gt;
 # disable TCP features that are not supported by the TCP splicing module&lt;br /&gt;
 sysctl -w net.ipv4.tcp_sack=0&lt;br /&gt;
 sysctl -w net.ipv4.tcp_dsack=0&lt;br /&gt;
 sysctl -w net.ipv4.tcp_window_scaling=0&lt;br /&gt;
 #sysctl -w net.ipv4.tcp_tw_recycle=1&lt;br /&gt;
 #echo 8 &amp;gt; /proc/sys/kernel/printk&lt;br /&gt;
&lt;br /&gt;
Then, configure the conf.txt file under the pCache directory. See the comments in&lt;br /&gt;
conf.txt for the purpose of each setting. The most important ones are briefly&lt;br /&gt;
described below. Note that many settings are for experimental use and&lt;br /&gt;
are not needed in an actual deployment.&lt;br /&gt;
# BLOCK_SIZE and BLOCK_NUM determine the on-disk layout as well as the cache capacity. The resulting size must never exceed the actual disk size.&lt;br /&gt;
# ACCESOR_TYPE describes the disk storage scheme that will be used. The following schemes are supported: flat directory (1), single file on a file system (4), and single file on a raw disk (5). Other types are experimental only. &lt;br /&gt;
# ROOT_DIR or DEV_NAME defines the storage location. DEV_NAME is used for the raw-disk scheme, and ROOT_DIR for all others. Examples of DEV_NAME include /dev/hda2 and /dev/sda1. An example of ROOT_DIR is /mnt/pCache, which needs to be mounted first.&lt;br /&gt;
# SUBNET and NETMASK define the local subnet. pCache only inspects outgoing requests; incoming requests are always forwarded.&lt;br /&gt;
# MAX_FILLED_SIZE and MIN_FILLED_SIZE determine when the cache replacement routine is invoked and when it stops.&lt;br /&gt;
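The last pair of settings acts as high and low watermarks. The following sketch illustrates that behavior; the eviction policy and data structures here are hypothetical stand-ins, not pCache's actual code: once the cache fills past the high watermark, objects are evicted until usage drops below the low watermark.

```python
# Illustration of the high/low watermark behavior implied by
# MAX_FILLED_SIZE and MIN_FILLED_SIZE (hypothetical sketch, not the
# actual pCache implementation): replacement starts when usage exceeds
# the high watermark and stops once it falls below the low watermark.

MAX_FILLED_SIZE = 90  # high watermark (toy values, in bytes)
MIN_FILLED_SIZE = 60  # low watermark

def maybe_evict(cache):
    """cache: dict mapping object name -> size in bytes."""
    if sum(cache.values()) <= MAX_FILLED_SIZE:
        return  # below the high watermark; nothing to do
    while cache and sum(cache.values()) > MIN_FILLED_SIZE:
        victim = next(iter(cache))  # a real policy would pick by utility
        del cache[victim]

cache = {"a": 50, "b": 30, "c": 20}  # total 100 > MAX_FILLED_SIZE
maybe_evict(cache)
print(sum(cache.values()))  # 50: evicted down past the low watermark
```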
&lt;br /&gt;
After setting up the conf.txt file, run pCache from the command line. A log.txt file will be generated with detailed debugging messages. pCache also provides a Web-based monitoring interface, which can be accessed by connecting to ''http://&amp;lt;your-ip&amp;gt;:8000'' using any Web browser. The interface looks like:&lt;br /&gt;
[[Image:pCache_Web.jpg|center|border|576px]]&lt;br /&gt;
pCache also includes an applet that reports real-time events. This applet can be launched by clicking the ''Details'' button. The applet looks like:&lt;br /&gt;
[[Image:pCache_Applet.jpg|center|border|402px]]&lt;br /&gt;
&lt;br /&gt;
=== Future Enhancements  ===&lt;br /&gt;
&lt;br /&gt;
The following is a list of possible improvements to pCache, roughly ordered by priority. We also give a&lt;br /&gt;
person-day estimate for each item, assuming a good graduate student working 8 hours a day. The estimates include unit tests.&lt;br /&gt;
&lt;br /&gt;
#Nonvolatile storage system: To keep cached data across reboots, we need to write the in-memory metainfo to the super blocks (in pCache file systems). This requires save(...)/restore(...) functions for a few in-memory hash tables. This task needs 4~5 person-days.&lt;br /&gt;
#Proxy performance: We should integrate our code with the latest tproxy for better performance (we can ignore the TCP splicing part for now). After integration, we should quantify the performance of tproxy (by emulating a large number of P2P clients in two private subnets). If possible, we can identify the bottlenecks in tproxy and improve it; we can then contribute the code back to the community. This can be a small side research project. TProxy integration takes 1 person-day. Designing and implementing the emulation and writing up the comparison and bottleneck analysis takes 5~10 person-days.&lt;br /&gt;
#Event-driven connection manager: We should define a stateful connection class, rewrite the connection manager into an event handler, and use epoll (for network) and aio (for disk) to improve scalability. Finally, a test similar to the TProxy test should be performed. Designing it takes 4 person-days, implementing it takes 8~10 person-days, and evaluating it takes 3 person-days, assuming we have gained experience from evaluating TProxy.&lt;br /&gt;
#Simpler segment matching: For every incoming request, we either request the segment in its entirety or we do not request it at all. The current partial-request code is over-complicated. This takes 1 person-day, but may depend on (overlap with) the event-driven connection manager.&lt;br /&gt;
#Improve compatibility: Identify the unsupported BitTorrent/Gnutella clients and locate the root causes (which message types cause the problem), then fix them. This will likely take some time; we cannot give a time estimate yet.&lt;br /&gt;
#Better logging system: We currently use a homemade logging system, but in an inconsistent way: some modules log through stderr rather than the logging system. If time permits, we may switch to an open-source logging library similar to log4c. This takes 5~7 person-days, given the many logging statements in the system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Feedback and Comments ===&lt;br /&gt;
&lt;br /&gt;
We welcome all comments and suggestions. You can enter your comments [http://www.sfu.ca/~cha16/feedback.html here].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Related Caching Systems and Commercial Products ===&lt;br /&gt;
&lt;br /&gt;
* [http://www.oversi.com/index.php?option=com_content&amp;amp;task=view&amp;amp;id=38&amp;amp;Itemid=114 OverCache P2P Caching and Delivery Platform] Oversi's MSP platform provides multi-service caching for P2P and other applications. For P2P caching, MSP takes quite a different approach than pCache: an MSP device actively participates in P2P networks. That is, MSP acts as an ultra-peer that only serves peers within the ISP where it is deployed. We believe this approach negatively impacts fairness in many P2P networks, such as BitTorrent, which employ algorithms to eliminate the free-rider problem. In fact, peers in ISPs where Oversi's MSP is deployed have little incentive to upload, because they expect to get the data for free from the MSP platform. As the number of free-riders increases, the P2P network's performance degrades, which in turn affects P2P users all over the world.&lt;br /&gt;
&lt;br /&gt;
* [http://www.peerapp.com/products-ultraband.aspx PeerApp UltraBand Family] Unlike OverCache, PeerApp's products support transparent caching of P2P traffic. Supported P2P protocols are BitTorrent, Gnutella, eDonkey, and FastTrack (the latter two are no longer popular). However, like OverCache, PeerApp's products do not support cross-protocol caching: a file cached through a BitTorrent download will not be served to a Gnutella user requesting the same file (or vice versa). We have already provided basic support for cross-protocol caching, and this feature will be fully added in the next version of our prototype software.&lt;br /&gt;
&lt;br /&gt;
Moreover, our prototype software is open-source, while the above products are commercial and very expensive.&lt;br /&gt;
&lt;br /&gt;
=== On-going Research Problems===&lt;br /&gt;
&lt;br /&gt;
We list the current research problems and draft solutions in [[Private:pCache_progress|a separate document (login required)]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== P2P Traffic Traces == &lt;br /&gt;
&lt;br /&gt;
* If you are interested in the traces, please send us an email with a brief description of your research and the university/organization you are affiliated with. A brief description of our traces can be found in this [http://nsl.cs.surrey.sfu.ca/projects/p2p/traces_readme.txt readme.txt] file.&lt;/div&gt;</summary>
		<author><name>Kianoosh</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=Security_of_Scalable_Multimedia_Streams&amp;diff=2771</id>
		<title>Security of Scalable Multimedia Streams</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=Security_of_Scalable_Multimedia_Streams&amp;diff=2771"/>
		<updated>2009-07-02T00:57:04Z</updated>

		<summary type="html">&lt;p&gt;Kianoosh: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
&lt;br /&gt;
The demand for multimedia services has been rapidly increasing over the past few years. More and more users rely on multimedia services for many aspects of their daily lives, including work, education, and entertainment. Multimedia content, however, is often distributed over open and insecure networks, such as the Internet. Accordingly, secure delivery of multimedia streams is an important and critical problem. Secure delivery means authenticating multimedia streams so that all receivers can verify that the content is original and has not been tampered with by any attacker.&lt;br /&gt;
&lt;br /&gt;
Various challenges need to be addressed for this purpose. First, the authentication mechanism, which can be computationally expensive, has to keep up with the online nature of the streams. Second, media content is often distributed over unreliable channels, where packet losses are not uncommon. The authentication scheme needs to function properly even in the presence of these losses. Third, media streams can be encoded in a scalable (or layered) manner to accommodate heterogeneous clients and varying network conditions. In this case, the authentication scheme has to successfully verify any substream extracted from the original stream. Finally, the authentication information added to the streams should be minimized in order to avoid increasing the already-high storage and network bandwidth requirements of multimedia content.&lt;br /&gt;
&lt;br /&gt;
We investigate these challenges for authentication of scalable video streams in a computationally efficient manner, with low delay and communication overhead, and high resilience against packet losses. Our main focus is on scalable videos encoded using the state-of-the-art video coding standard H.264/SVC, the Scalable Video Coding (SVC) extension of H.264/AVC. H.264/SVC offers great flexibility while incurring much lower overhead compared to classic scalable coding techniques. We have designed an authentication scheme for H.264/SVC streams that supports its full flexibility: it takes into account the coding characteristics of the H.264/SVC scalability model and enables verification of all possible substreams. In addition, the proposed scheme is designed for end-to-end authentication of streams. In an end-to-end authentication procedure, a content provider prepares the authenticated video and sends it to receivers, possibly through a third-party Content Delivery Network (CDN) with proxy servers that may need to adapt the flexible video streams. These proxies, or any&lt;br /&gt;
other entity involved in the delivery process, do not need to understand our authentication scheme, which is an important advantage of the proposed scheme.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Publications ==&lt;br /&gt;
&lt;br /&gt;
* M. Hefeeda and K. Mokhtarian, Authentication Schemes for Multimedia Streams: Quantitative Analysis and Comparison, ''ACM Transactions on Multimedia Computing, Communications, and Applications'', Accepted January 2009.&lt;br /&gt;
&lt;br /&gt;
* K. Mokhtarian and M. Hefeeda,  End-to-End Secure Delivery of Scalable Video Streams, In Proc. of International workshop on Network and Operating Systems Support for Digital Audio and Video (NOSSDAV'09), pages 79-84, Williamsburg, VA, June 2009.&lt;br /&gt;
&lt;br /&gt;
* M. Hefeeda and K. Mokhtarian, [http://www.cs.sfu.ca/~mhefeeda/Papers/pv09.pdf Analysis of Authentication Schemes for Nonscalable Video Streams], In Proc. of IEEE International Packet Video Workshop (PV'09), 10 pages, Seattle, WA, May 2009.  Slides [ [http://www.cs.sfu.ca/~mhefeeda/Talks/pv09.pptx ppt] ] [ [http://www.cs.sfu.ca/~mhefeeda/Talks/pv09.pdf pdf] ] &lt;br /&gt;
&lt;br /&gt;
* M. Hefeeda and K. Mokhtarian, Authentication of Scalable Multimedia Streams, Book Chapter in Handbook on Security and Networks, World Scientific Publishing Co., To appear in Summer 2009.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Software ==&lt;br /&gt;
&lt;br /&gt;
* '''[[svcAuth]]''': A Library for Authenticating H.264/SVC Video Streams&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== People ==&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.sfu.ca/~mhefeeda/ Mohamed Hefeeda] (Assistant Professor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.sfu.ca/~kma26/personal/ Kianoosh Mokhtarian] (MSc Student)&lt;/div&gt;</summary>
		<author><name>Kianoosh</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=svcAuth&amp;diff=2756</id>
		<title>svcAuth</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=svcAuth&amp;diff=2756"/>
		<updated>2009-06-23T20:24:02Z</updated>

		<summary type="html">&lt;p&gt;Kianoosh: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Downloads ==&lt;br /&gt;
&lt;br /&gt;
svcAuth is an ongoing project and is continuously improved. The latest version of svcAuth can be accessed through [https://cs-svn.cs.surrey.sfu.ca/nsl/browser/svcAuth our subversion system]. svcAuth can also be downloaded as a single zipped file from [http://nsl.cs.sfu.ca/resources/svcAuth.tar.gz here]. svcAuth is released under [http://www.gnu.org/licenses/gpl-3.0.txt GPLv3].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Introduction to svcAuth ==&lt;br /&gt;
&lt;br /&gt;
We have designed and implemented a library for authenticating scalable video streams encoded with the state-of-the-art H.264/SVC video coding standard, the Scalable Video Coding (SVC) extension of the H.264/AVC standard; the library can thus authenticate H.264/AVC streams as well. The library, called ''svcAuth'', is implemented in Java for portability across different platforms. svcAuth supports the full flexibility of H.264/SVC and allows verification of all possible substreams. In addition, it is designed for end-to-end authentication, in which only the content provider and the receiving devices need to be aware of the authentication mechanism. Therefore, when multimedia streams are distributed at large scale over third-party Content Distribution Networks (CDNs), whose proxies may adapt scalable streams for different users, the proxies do not need to understand the authentication scheme; that is, the authentication process and the authentication information embedded in the streams are transparent, since this information is embedded in SVC streams in a format-compliant manner.&lt;br /&gt;
&lt;br /&gt;
svcAuth can be employed by any multimedia streaming application as a software add-on, without requiring any change to the encoders/decoders. Specifically, as shown in the following figure, we add an authentication module on the provider side, which post-processes the encoded stream and embeds in it the information required for verification. At the receivers, we add a verification module, which verifies the received stream using the information embedded in it and passes the verified stream to the player.&lt;br /&gt;
&lt;br /&gt;
[[Image:svcAuth_placement.png|center|border|534px]]&lt;br /&gt;
&lt;br /&gt;
Note that receivers that do not have the verification module and do not support the svcAuth authentication scheme can still receive and decode the streams, since the scheme is transparent.&lt;br /&gt;
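The add-on placement and the transparency property described above can be sketched as follows. This is an illustrative toy model, not svcAuth's actual API: the function names, the dict-based stream representation, and the use of a plain SHA-256 digest as the embedded authentication information are all assumptions for demonstration.

```python
# Hypothetical sketch: provider-side post-processing embeds auth info
# alongside the video data; a verifying receiver checks it, while a
# legacy receiver simply ignores it (transparency).
import hashlib

def authenticate(encoded_stream):
    # Provider-side module: attach auth info without altering the video data.
    auth_info = hashlib.sha256(encoded_stream).hexdigest()
    return {"video": encoded_stream, "auth": auth_info}

def verify_and_play(stream):
    # Receiver with the verification module: play only if the check passes.
    ok = hashlib.sha256(stream["video"]).hexdigest() == stream["auth"]
    return stream["video"] if ok else None

def legacy_play(stream):
    # Receiver without svcAuth: decodes the video; auth info is invisible to it.
    return stream["video"]

s = authenticate(b"encoded-bitstream")
print(verify_and_play(s) == legacy_play(s))  # True: both obtain the same video
```

Both receivers obtain the same bitstream; only the verifying one gains the authenticity guarantee.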
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Overview of svcAuth Architecture ==&lt;br /&gt;
&lt;br /&gt;
The svcAuth authentication module, which is used at the content provider side, is shown below:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:svcAuth_auth_module.png|center|border]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This module is placed after the video encoding process and before transmission, and operates as follows. The video stream is first parsed by the Stream_Parser module, which extracts NAL (H.264 Network Abstraction Layer) units from the bitstream, parses their headers, and delivers them as logical objects to the SVC_Reader module. The SVC_Reader module determines the structure of the SVC stream from the NAL units. For this purpose, as shown in the figure, it needs to buffer a number of NAL units, e.g., to determine the last NAL unit of the current video frame, which is done by detecting the first NAL unit of the next frame. The SVC_Reader module outputs a logical view of the stream as GoPs (Groups of Pictures), frames, and different types of layers. We refer to these entities as SVC Elements.&lt;br /&gt;
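The buffering step above can be sketched as follows. The `NalUnit` class and its `frame_id` field are hypothetical stand-ins for the parsed NAL unit headers; the point is only to show why the last NAL unit of a frame is recognized by seeing the first unit of the next frame.

```python
# Illustrative sketch of frame-boundary detection: NAL units are buffered
# until a unit belonging to the next frame appears, at which point the
# buffered units are known to form one complete frame.

class NalUnit:
    def __init__(self, frame_id, payload):
        self.frame_id = frame_id  # hypothetical field derived from the NAL header
        self.payload = payload

def group_into_frames(nal_units):
    frames = []
    buffered = []
    for nal in nal_units:
        if buffered and nal.frame_id != buffered[0].frame_id:
            frames.append(buffered)  # first unit of the next frame seen:
            buffered = []            # the previous frame is now complete
        buffered.append(nal)
    if buffered:
        frames.append(buffered)      # flush the final frame
    return frames

units = [NalUnit(0, "base"), NalUnit(0, "enh"), NalUnit(1, "base"), NalUnit(1, "enh")]
print(len(group_into_frames(units)))  # 2 frames, each holding two NAL units
```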
&lt;br /&gt;
Each SVC Element in the logical view returned by SVC_Reader contains an array of authentication information messages, which is initially empty. These arrays are filled by the SVC_Auth module, which takes as input a block of ''n'' GoPs, computes the required authentication information, and adds it to the SVC Elements of those ''n'' GoPs. The output of SVC_Auth, which is the same set of GoPs as the input with authentication information added, is delivered to the SVC_Writer module. The SVC_Writer module converts the logical structure back into an SVC bitstream. This is done by encapsulating the authentication information in appropriate NAL units and inserting them into the original bitstream. We use Supplemental Enhancement Information (SEI) NAL units (NAL unit type 6) of H.264/SVC for this purpose. An SEI NAL unit can contain one or more SEI messages. To attach information to a specific layer, we embed it in an Unregistered User Data SEI message, relate it to the desired temporal/spatial/quality layer by encapsulating (nesting) it in a Scalable Nesting SEI message, and finally encapsulate the result in an SEI NAL unit.&lt;br /&gt;
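The three-level nesting at the end of the paragraph can be sketched with plain dicts standing in for real bitstream syntax. The field names below are hypothetical; only the nesting order (user-data SEI inside a scalable-nesting SEI inside an SEI NAL unit of type 6) follows the description.

```python
# Illustrative sketch of embedding authentication data into an SEI NAL unit,
# related to a specific temporal/spatial/quality layer via scalable nesting.

def make_auth_sei_nal(auth_message, temporal_id, spatial_id, quality_id):
    # Step 1: wrap the raw auth bytes in an Unregistered User Data SEI message.
    user_data_sei = {"type": "user_data_unregistered", "data": auth_message}
    # Step 2: nest it in a Scalable Nesting SEI message targeting one layer.
    nesting_sei = {"type": "scalable_nesting",
                   "temporal_id": temporal_id,
                   "spatial_id": spatial_id,
                   "quality_id": quality_id,
                   "nested": user_data_sei}
    # Step 3: encapsulate the result in an SEI NAL unit (NAL unit type 6).
    return {"nal_unit_type": 6, "messages": [nesting_sei]}

nal = make_auth_sei_nal(b"digest-bytes", temporal_id=0, spatial_id=1, quality_id=0)
print(nal["nal_unit_type"])  # 6
```

Because SEI units are part of the standard syntax, a decoder that does not know svcAuth simply skips them, which is what makes the embedding format-compliant.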
&lt;br /&gt;
The svcAuth verification module operates similarly to the authentication module, with minor differences. The received substream first goes through the Stream_Parser and SVC_Reader modules and reaches a module called SVC_Verif. SVC_Verif proceeds analogously to SVC_Auth: it recomputes all required hash digests from the reconstructed video and compares them to the digests provided as authentication information. In case of a mismatch, the mismatching piece of data, such as a video frame, is marked as unauthentic and discarded. The remaining parts are deemed authentic if and only if the digital signature of the corresponding GoP block is successfully verified. The output of SVC_Verif is sent to the receiver application for playback.&lt;br /&gt;
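The verification logic above can be sketched as a hash-and-sign check. This is a simplified model, not svcAuth's implementation: frames are plain byte strings, one digest per frame stands in for the scheme's full digest hierarchy, and an HMAC is used as a stand-in for a real digital signature.

```python
# Illustrative sketch of SVC_Verif's decision rule: per-frame digests filter
# out tampered frames, and the surviving frames are accepted if and only if
# the signature over the GoP block verifies.
import hashlib
import hmac

KEY = b"demo-key"  # stand-in for the provider's signing key

def digest(data):
    return hashlib.sha256(data).digest()

def sign(block_digest):
    # HMAC stand-in for a digital signature over the GoP-block digest.
    return hmac.new(KEY, block_digest, hashlib.sha256).digest()

def verify_block(frames, frame_digests, signature):
    authentic = []
    for frame, expected in zip(frames, frame_digests):
        if digest(frame) == expected:
            authentic.append(frame)  # digest matches: tentatively keep frame
        # mismatching frames are discarded as unauthentic
    block_digest = digest(b"".join(frame_digests))
    if not hmac.compare_digest(sign(block_digest), signature):
        return []  # signature fails: nothing in the block is authentic
    return authentic

frames = [b"frame0", b"frame1"]
digests = [digest(f) for f in frames]
sig = sign(digest(b"".join(digests)))
print(len(verify_block(frames, digests, sig)))  # 2: both frames accepted
```

A tampered frame is dropped while untouched frames survive, and a bad signature rejects the whole block, matching the "if and only if" rule above.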
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Related Resources ==&lt;br /&gt;
&lt;br /&gt;
* [http://ip.hhi.de/imagecom_G1/savce/downloads/SVC-Reference-Software.htm JSVM (Joint Scalable Video Model) reference software for SVC]&lt;br /&gt;
&lt;br /&gt;
* [http://nsl.cs.sfu.ca/wiki/index.php/Video_Library_and_Tools A set of video libraries and tools on our page] (including tools for working with YUV video files, which is the format of raw input/output of the JSVM reference software)&lt;br /&gt;
&lt;br /&gt;
* [http://www.svc-analyzer.com An SVC Analyzer software]&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?&amp;amp;arnumber=4317636 Overview paper on H.264/SVC by H. Schwarz, D. Marpe, and T. Wiegand]&lt;br /&gt;
&lt;br /&gt;
* [http://ip.hhi.de/imagecom_G1/savce/index.htm An interesting presentation of H.264/SVC by HHI Institute]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== People ==&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.sfu.ca/~mhefeeda/ Mohamed Hefeeda] (Assistant Professor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.sfu.ca/~kma26/personal/ Kianoosh Mokhtarian] (MSc Student)&lt;/div&gt;</summary>
		<author><name>Kianoosh</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=svcAuth&amp;diff=2751</id>
		<title>svcAuth</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=svcAuth&amp;diff=2751"/>
		<updated>2009-06-23T19:34:37Z</updated>

		<summary type="html">&lt;p&gt;Kianoosh: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Downloads ==&lt;br /&gt;
&lt;br /&gt;
svcAuth is an ongoing project and is continuously improved. The latest version of svcAuth can be accessed through [https://cs-svn.cs.surrey.sfu.ca/nsl/browser/svcAuth our subversion system]. svcAuth can also be downloaded as a single zipped file from [http://nsl.cs.sfu.ca/resources/svcAuth.tar.gz here]. svcAuth is released under [http://www.gnu.org/licenses/gpl-3.0.txt GPLv3].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Introduction to svcAuth ==&lt;br /&gt;
&lt;br /&gt;
We have designed and implemented a library for authenticating scalable video streams encoded with the state-of-the-art H.264/SVC video coding standard, the Scalable Video Coding (SVC) extension of the H.264/AVC standard; the library can thus authenticate H.264/AVC streams as well. The library is called ''svcAuth''. svcAuth supports the full flexibility of H.264/SVC and allows verification of all possible substreams. In addition, it is designed for end-to-end authentication, in which only the content provider and the receiving devices need to be aware of the authentication mechanism. Therefore, when multimedia streams are distributed at large scale over third-party Content Distribution Networks (CDNs), whose proxies may adapt scalable streams for different users, the proxies do not need to understand the authentication scheme, i.e., the authentication process is transparent to them; the authentication information is embedded in SVC streams in a format-compliant manner.&lt;br /&gt;
&lt;br /&gt;
svcAuth can be employed by any multimedia streaming application as a software add-on, without requiring any change to the encoders/decoders. Specifically, as shown in the following figure, we add an authentication module at the provider side, which post-processes the encoded stream and embeds in it the information required for verification. At the receivers, we add a verification module, which verifies the received stream using the embedded information and passes the verified stream to the player.&lt;br /&gt;
&lt;br /&gt;
[[Image:svcAuth_placement.png|center|border|456px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note that receivers that do not have the verification module and do not support the svcAuth authentication scheme can still receive and decode the streams, since the scheme is transparent.&lt;br /&gt;
&lt;br /&gt;
svcAuth is available as an open-source library implemented in Java, to support portability across different platforms.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Overview of svcAuth Architecture ==&lt;br /&gt;
&lt;br /&gt;
The svcAuth authentication module, which is used at the content provider side, is shown below:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:svcAuth_auth_module.png|center|border|450px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This module is placed after the video encoding process and before transmission, and operates as follows. The video stream is first parsed by the Stream Parser module, which extracts NAL units from the bitstream, parses their headers, and delivers them as logical objects to the SVC Reader module. The SVC Reader module determines the structure of the SVC stream from the NAL units. For this purpose, as shown in the figure, it needs to buffer a number of NAL units, e.g., to determine the last NAL unit of the current video frame, which is done by detecting the first NAL unit of the next frame. The SVC Reader module outputs a logical view of the stream as GoPs, frames, and different types of layers. We refer to these entities as SVC Elements.&lt;br /&gt;
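The buffering behaviour described above can be illustrated with a small sketch: the last NAL unit of a frame is only known once the first unit of the next frame arrives. Here each unit is reduced to the frame number it belongs to; the real reader inspects NAL unit headers, so this class and its fields are hypothetical.&lt;br /&gt;

```java
import java.util.ArrayList;

// Illustrative sketch of frame-boundary detection by buffering: units are
// held until a unit from a different frame arrives, at which point the
// buffered units form a complete frame. Names are illustrative only.
public class FrameAssembler {
    private final ArrayList buffer = new ArrayList();   // buffered units (frame numbers)
    private int current = -1;                           // frame being assembled

    // Feed one NAL unit, identified by its frame number. Returns the size
    // of a completed frame when a boundary is detected, or -1 while the
    // current frame is still open.
    public int push(int frameNumber) {
        if (current != -1) {
            if (frameNumber != current) {
                int done = buffer.size();               // previous frame is complete
                buffer.clear();
                buffer.add(frameNumber);                // new frame starts here
                current = frameNumber;
                return done;
            }
        } else {
            current = frameNumber;                      // very first unit
        }
        buffer.add(frameNumber);
        return -1;
    }
}
```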
&lt;br /&gt;
Each SVC Element in the logical view returned by the SVC Reader contains an array of authentication information messages, which is initially empty. These arrays are filled by the SVC Auth module. The SVC Auth module takes as input a block of ''n'' GoPs, computes the required authentication information, and adds it to the SVC Elements of those ''n'' GoPs. The output of SVC Auth, the same set of GoPs with authentication information added, is delivered to the SVC Writer module. The SVC Writer module converts the logical structure back into a raw bitstream. This is done by encapsulating the authentication information in appropriate NAL units and inserting them into the original bitstream. We use SEI NAL units (NAL unit type 6) of H.264/SVC for this purpose. An SEI NAL unit can contain one or more SEI messages. To attach information to a specific layer, we embed it in an Unregistered User Data SEI message, relate it to the desired temporal/spatial/quality layer by encapsulating (nesting) it in a Scalable Nesting SEI message, and finally encapsulate the result in an SEI NAL unit.&lt;br /&gt;
&lt;br /&gt;
The svcAuth verification module operates similarly to the authentication module, with minor differences. The received substream first goes through the Stream Parser and SVC Reader modules and reaches a module called SVC Verif. SVC Verif proceeds in a similar way to SVC Auth: it recomputes the spatial layer, frame, GoP, and block digests from the reconstructed video and compares them to the digests provided as authentication information. In case of any mismatch, the mismatching part of the data, such as a video frame, is marked as unauthentic and discarded. The remaining parts are accepted as authentic if and only if the digital signature of the corresponding GoP block is successfully verified. The output of SVC Verif is sent to the receiver application for playback.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== People ==&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.sfu.ca/~mhefeeda/ Mohamed Hefeeda] (Assistant Professor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.sfu.ca/~kma26/personal/ Kianoosh Mokhtarian] (MSc Student)&lt;/div&gt;</summary>
		<author><name>Kianoosh</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=svcAuth&amp;diff=2750</id>
		<title>svcAuth</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=svcAuth&amp;diff=2750"/>
		<updated>2009-06-23T19:33:27Z</updated>

		<summary type="html">&lt;p&gt;Kianoosh: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Downloads ==&lt;br /&gt;
&lt;br /&gt;
svcAuth is an ongoing project and is continuously improved. The latest version of svcAuth can be accessed through [https://cs-svn.cs.surrey.sfu.ca/nsl/browser/svcAuth our subversion system]. svcAuth can also be downloaded as a single zipped file from [http://nsl.cs.sfu.ca/resources/svcAuth.tar.gz here]. svcAuth is released under [http://www.gnu.org/licenses/gpl-3.0.txt GPLv3].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Introduction to svcAuth ==&lt;br /&gt;
&lt;br /&gt;
We have designed and implemented a library for authenticating scalable video streams encoded with the state-of-the-art H.264/SVC video coding standard, the Scalable Video Coding (SVC) extension of the H.264/AVC standard; the library can thus authenticate H.264/AVC streams as well. The library is called ''svcAuth''. svcAuth supports the full flexibility of H.264/SVC and allows verification of all possible substreams. In addition, it is designed for end-to-end authentication, in which only the content provider and the receiving devices need to be aware of the authentication mechanism. Therefore, when multimedia streams are distributed at large scale over third-party Content Distribution Networks (CDNs), whose proxies may adapt scalable streams for different users, the proxies do not need to understand the authentication scheme, i.e., the authentication process is transparent to them; the authentication information is embedded in SVC streams in a format-compliant manner.&lt;br /&gt;
&lt;br /&gt;
svcAuth can be employed by any multimedia streaming application as a software/hardware add-on, without requiring any change to the encoders/decoders. Specifically, as shown in the following figure, we add an authentication module at the provider side, which post-processes the encoded stream and embeds in it the information required for verification. At the receivers, we add a verification module, which verifies the received stream using the embedded information and passes the verified stream to the player.&lt;br /&gt;
&lt;br /&gt;
[[Image:svcAuth_placement.png|center|border|456px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note that receivers that do not have the verification module and do not support the svcAuth authentication scheme can still receive and decode the streams, since the scheme is transparent.&lt;br /&gt;
&lt;br /&gt;
svcAuth is available as an open-source library implemented in Java, to support portability across different platforms.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Overview of svcAuth Architecture ==&lt;br /&gt;
&lt;br /&gt;
The svcAuth authentication module, which is used at the content provider side, is shown below:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:svcAuth_auth_module.png|center|border|450px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This module is placed after the video encoding process and before transmission, and operates as follows. The video stream is first parsed by the Stream Parser module, which extracts NAL units from the bitstream, parses their headers, and delivers them as logical objects to the SVC Reader module. The SVC Reader module determines the structure of the SVC stream from the NAL units. For this purpose, as shown in the figure, it needs to buffer a number of NAL units, e.g., to determine the last NAL unit of the current video frame, which is done by detecting the first NAL unit of the next frame. The SVC Reader module outputs a logical view of the stream as GoPs, frames, and different types of layers. We refer to these entities as SVC Elements.&lt;br /&gt;
&lt;br /&gt;
Each SVC Element in the logical view returned by the SVC Reader contains an array of authentication information messages, which is initially empty. These arrays are filled by the SVC Auth module. The SVC Auth module takes as input a block of ''n'' GoPs, computes the required authentication information, and adds it to the SVC Elements of those ''n'' GoPs. The output of SVC Auth, the same set of GoPs with authentication information added, is delivered to the SVC Writer module. The SVC Writer module converts the logical structure back into a raw bitstream. This is done by encapsulating the authentication information in appropriate NAL units and inserting them into the original bitstream. We use SEI NAL units (NAL unit type 6) of H.264/SVC for this purpose. An SEI NAL unit can contain one or more SEI messages. To attach information to a specific layer, we embed it in an Unregistered User Data SEI message, relate it to the desired temporal/spatial/quality layer by encapsulating (nesting) it in a Scalable Nesting SEI message, and finally encapsulate the result in an SEI NAL unit.&lt;br /&gt;
&lt;br /&gt;
The svcAuth verification module operates similarly to the authentication module, with minor differences. The received substream first goes through the Stream Parser and SVC Reader modules and reaches a module called SVC Verif. SVC Verif proceeds in a similar way to SVC Auth: it recomputes the spatial layer, frame, GoP, and block digests from the reconstructed video and compares them to the digests provided as authentication information. In case of any mismatch, the mismatching part of the data, such as a video frame, is marked as unauthentic and discarded. The remaining parts are accepted as authentic if and only if the digital signature of the corresponding GoP block is successfully verified. The output of SVC Verif is sent to the receiver application for playback.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== People ==&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.sfu.ca/~mhefeeda/ Mohamed Hefeeda] (Assistant Professor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.sfu.ca/~kma26/personal/ Kianoosh Mokhtarian] (MSc Student)&lt;/div&gt;</summary>
		<author><name>Kianoosh</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=svcAuth&amp;diff=2749</id>
		<title>svcAuth</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=svcAuth&amp;diff=2749"/>
		<updated>2009-06-23T19:12:53Z</updated>

		<summary type="html">&lt;p&gt;Kianoosh: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Downloads ==&lt;br /&gt;
&lt;br /&gt;
svcAuth is an ongoing project and is continuously improved. The latest version of svcAuth can be accessed through [https://cs-svn.cs.surrey.sfu.ca/nsl/browser/svcAuth our subversion system]. svcAuth can also be downloaded as a single zipped file from [http://nsl.cs.sfu.ca/resources/svcAuth.tar.gz here]. svcAuth is released under [http://www.gnu.org/licenses/gpl-3.0.txt GPLv3].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Introduction to svcAuth ==&lt;br /&gt;
&lt;br /&gt;
We have designed and implemented an authentication scheme for H.264/SVC streams, called ''svcAuth''. svcAuth supports the full flexibility of SVC and allows verification of all possible substreams. In addition, it is designed for end-to-end authentication, in which only the content provider and the receiving devices need to be aware of the authentication mechanism. Therefore, when multimedia streams are distributed at large scale over third-party Content Distribution Networks (CDNs), whose proxies may adapt scalable streams for different users, the proxies do not need to understand the authentication scheme, i.e., the authentication process is transparent to them; the authentication information is embedded in SVC streams in a format-compliant manner.&lt;br /&gt;
&lt;br /&gt;
svcAuth can be employed by any multimedia streaming application as a software/hardware add-on, without requiring any change to the encoders/decoders. Specifically, as shown in the following figure, we add an authentication module at the provider side, which post-processes the encoded stream and embeds in it the information required for verification. At the receivers, we add a verification module, which verifies the received stream using the embedded information and passes the verified stream to the player.&lt;br /&gt;
&lt;br /&gt;
[[Image:svcAuth_placement.png|center|border|456px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note that receivers that do not have the verification module and do not support the svcAuth authentication scheme can still receive and decode the streams, since the scheme is transparent.&lt;br /&gt;
&lt;br /&gt;
svcAuth is available as an open-source library implemented in Java, to support portability across different platforms.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Overview of svcAuth Architecture ==&lt;br /&gt;
&lt;br /&gt;
The svcAuth authentication module, which is used at the content provider side, is shown below:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:svcAuth_auth_module.png|center|border|450px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This module is placed after the video encoding process and before transmission, and operates as follows. The video stream is first parsed by the Stream Parser module, which extracts NAL units from the bitstream, parses their headers, and delivers them as logical objects to the SVC Reader module. The SVC Reader module determines the structure of the SVC stream from the NAL units. For this purpose, as shown in the figure, it needs to buffer a number of NAL units, e.g., to determine the last NAL unit of the current video frame, which is done by detecting the first NAL unit of the next frame. The SVC Reader module outputs a logical view of the stream as GoPs, frames, and different types of layers. We refer to these entities as SVC Elements.&lt;br /&gt;
&lt;br /&gt;
Each SVC Element in the logical view returned by the SVC Reader contains an array of authentication information messages, which is initially empty. These arrays are filled by the SVC Auth module. The SVC Auth module takes as input a block of ''n'' GoPs, computes the required authentication information, and adds it to the SVC Elements of those ''n'' GoPs. The output of SVC Auth, the same set of GoPs with authentication information added, is delivered to the SVC Writer module. The SVC Writer module converts the logical structure back into a raw bitstream. This is done by encapsulating the authentication information in appropriate NAL units and inserting them into the original bitstream. We use SEI NAL units (NAL unit type 6) of H.264/SVC for this purpose. An SEI NAL unit can contain one or more SEI messages. To attach information to a specific layer, we embed it in an Unregistered User Data SEI message, relate it to the desired temporal/spatial/quality layer by encapsulating (nesting) it in a Scalable Nesting SEI message, and finally encapsulate the result in an SEI NAL unit.&lt;br /&gt;
&lt;br /&gt;
The svcAuth verification module operates similarly to the authentication module, with minor differences. The received substream first goes through the Stream Parser and SVC Reader modules and reaches a module called SVC Verif. SVC Verif proceeds in a similar way to SVC Auth: it recomputes the spatial layer, frame, GoP, and block digests from the reconstructed video and compares them to the digests provided as authentication information. In case of any mismatch, the mismatching part of the data, such as a video frame, is marked as unauthentic and discarded. The remaining parts are accepted as authentic if and only if the digital signature of the corresponding GoP block is successfully verified. The output of SVC Verif is sent to the receiver application for playback.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== People ==&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.sfu.ca/~mhefeeda/ Mohamed Hefeeda] (Assistant Professor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.sfu.ca/~kma26/personal/ Kianoosh Mokhtarian] (MSc Student)&lt;/div&gt;</summary>
		<author><name>Kianoosh</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=svcAuth&amp;diff=2748</id>
		<title>svcAuth</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=svcAuth&amp;diff=2748"/>
		<updated>2009-06-23T19:11:06Z</updated>

		<summary type="html">&lt;p&gt;Kianoosh: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Downloads ==&lt;br /&gt;
&lt;br /&gt;
svcAuth is an ongoing project and is continuously improved. The latest release of svcAuth can be accessed through [https://cs-svn.cs.surrey.sfu.ca/nsl/browser/svcAuth our subversion system].&lt;br /&gt;
&lt;br /&gt;
The latest release of svcAuth can also be downloaded as a single zipped file from [http://nsl.cs.sfu.ca/resources/svcAuth.tar.gz here]. &lt;br /&gt;
&lt;br /&gt;
svcAuth is released under [http://www.gnu.org/licenses/gpl-3.0.txt GPLv3].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Introduction to svcAuth ==&lt;br /&gt;
&lt;br /&gt;
We have designed and implemented an authentication scheme for H.264/SVC streams, called ''svcAuth''. svcAuth supports the full flexibility of SVC and allows verification of all possible substreams. In addition, it is designed for end-to-end authentication, in which only the content provider and the receiving devices need to be aware of the authentication mechanism. Therefore, when multimedia streams are distributed at large scale over third-party Content Distribution Networks (CDNs), whose proxies may adapt scalable streams for different users, the proxies do not need to understand the authentication scheme, i.e., the authentication process is transparent to them; the authentication information is embedded in SVC streams in a format-compliant manner.&lt;br /&gt;
&lt;br /&gt;
svcAuth can be employed by any multimedia streaming application as a software/hardware add-on, without requiring any change to the encoders/decoders. Specifically, as shown in the following figure, we add an authentication module at the provider side, which post-processes the encoded stream and embeds in it the information required for verification. At the receivers, we add a verification module, which verifies the received stream using the embedded information and passes the verified stream to the player.&lt;br /&gt;
&lt;br /&gt;
[[Image:svcAuth_placement.png|center|border|456px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note that receivers that do not have the verification module and do not support the svcAuth authentication scheme can still receive and decode the streams, since the scheme is transparent.&lt;br /&gt;
&lt;br /&gt;
svcAuth is available as an open-source library implemented in Java, to support portability across different platforms.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Overview of svcAuth Architecture ==&lt;br /&gt;
&lt;br /&gt;
The svcAuth authentication module, which is used at the content provider side, is shown below:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:svcAuth_auth_module.png|center|border|450px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This module is placed after the video encoding process and before transmission, and operates as follows. The video stream is first parsed by the Stream Parser module, which extracts NAL units from the bitstream, parses their headers, and delivers them as logical objects to the SVC Reader module. The SVC Reader module determines the structure of the SVC stream from the NAL units. For this purpose, as shown in the figure, it needs to buffer a number of NAL units, e.g., to determine the last NAL unit of the current video frame, which is done by detecting the first NAL unit of the next frame. The SVC Reader module outputs a logical view of the stream as GoPs, frames, and different types of layers. We refer to these entities as SVC Elements.&lt;br /&gt;
&lt;br /&gt;
Each SVC Element in the logical view returned by the SVC Reader contains an array of authentication information messages, which is initially empty. These arrays are filled by the SVC Auth module. The SVC Auth module takes as input a block of ''n'' GoPs, computes the required authentication information, and adds it to the SVC Elements of those ''n'' GoPs. The output of SVC Auth, the same set of GoPs with authentication information added, is delivered to the SVC Writer module. The SVC Writer module converts the logical structure back into a raw bitstream. This is done by encapsulating the authentication information in appropriate NAL units and inserting them into the original bitstream. We use SEI NAL units (NAL unit type 6) of H.264/SVC for this purpose. An SEI NAL unit can contain one or more SEI messages. To attach information to a specific layer, we embed it in an Unregistered User Data SEI message, relate it to the desired temporal/spatial/quality layer by encapsulating (nesting) it in a Scalable Nesting SEI message, and finally encapsulate the result in an SEI NAL unit.&lt;br /&gt;
&lt;br /&gt;
The svcAuth verification module operates similarly to the authentication module, with minor differences. The received substream first goes through the Stream Parser and SVC Reader modules and reaches a module called SVC Verif. SVC Verif proceeds in a similar way to SVC Auth: it recomputes the spatial layer, frame, GoP, and block digests from the reconstructed video and compares them to the digests provided as authentication information. In case of any mismatch, the mismatching part of the data, such as a video frame, is marked as unauthentic and discarded. The remaining parts are accepted as authentic if and only if the digital signature of the corresponding GoP block is successfully verified. The output of SVC Verif is sent to the receiver application for playback.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== People ==&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.sfu.ca/~mhefeeda/ Mohamed Hefeeda] (Assistant Professor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.sfu.ca/~kma26/personal/ Kianoosh Mokhtarian] (MSc Student)&lt;/div&gt;</summary>
		<author><name>Kianoosh</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=svcAuth&amp;diff=2747</id>
		<title>svcAuth</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=svcAuth&amp;diff=2747"/>
		<updated>2009-06-23T19:04:37Z</updated>

		<summary type="html">&lt;p&gt;Kianoosh: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Downloads ==&lt;br /&gt;
&lt;br /&gt;
Download svcAuth as a single zipped file from [http://nsl.cs.sfu.ca/resources/svcAuth.tar.gz here]. svcAuth is an ongoing project and its software is continuously improved. The latest version of svcAuth can also be accessed through [https://cs-svn.cs.surrey.sfu.ca/nsl/browser/svcAuth our subversion system]. svcAuth is released under [http://www.gnu.org/licenses/gpl-3.0.txt GPLv3].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Introduction to svcAuth ==&lt;br /&gt;
&lt;br /&gt;
We have designed and implemented an authentication scheme for H.264/SVC streams, called ''svcAuth''. svcAuth supports the full flexibility of SVC and allows verification of all possible substreams. In addition, it is designed for end-to-end authentication, in which only the content provider and the receiving devices need to be aware of the authentication mechanism. Therefore, when multimedia streams are distributed at large scale over third-party Content Distribution Networks (CDNs), whose proxies may adapt scalable streams for different users, the proxies do not need to understand the authentication scheme, i.e., the authentication process is transparent to them; the authentication information is embedded in SVC streams in a format-compliant manner.&lt;br /&gt;
&lt;br /&gt;
svcAuth can be employed by any multimedia streaming application as a software/hardware add-on, without requiring any change to the encoders/decoders. Specifically, as shown in the following figure, we add an authentication module at the provider side, which post-processes the encoded stream and embeds in it the information required for verification. At the receivers, we add a verification module, which verifies the received stream using the embedded information and passes the verified stream to the player.&lt;br /&gt;
&lt;br /&gt;
[[Image:svcAuth_placement.png|center|border|456px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note that receivers that do not have the verification module and do not support the svcAuth authentication scheme can still receive and decode the streams, since the scheme is transparent.&lt;br /&gt;
&lt;br /&gt;
svcAuth is available as an open-source library implemented in Java, to support portability across different platforms.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Overview of svcAuth Architecture ==&lt;br /&gt;
&lt;br /&gt;
The svcAuth authentication module, which is used at the content provider side, is shown below:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:svcAuth_auth_module.png|center|border|450px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This module is placed after the video encoding process and before transmission, and operates as follows. The video stream is first parsed by the Stream Parser module, which extracts NAL units from the bitstream, parses their headers, and delivers them as logical objects to the SVC Reader module. The SVC Reader module determines the structure of the SVC stream from the NAL units. For this purpose, as shown in the figure, it needs to buffer a number of NAL units, e.g., to determine the last NAL unit of the current video frame, which is done by detecting the first NAL unit of the next frame. The SVC Reader module outputs a logical view of the stream as GoPs, frames, and different types of layers. We refer to these entities as SVC Elements.&lt;br /&gt;
&lt;br /&gt;
Each SVC Element in the logical view returned by the SVC Reader contains an array of authentication information messages, which is initially empty. These arrays are filled by the SVC Auth module. The SVC Auth module takes as input a block of ''n'' GoPs, computes the required authentication information, and adds it to the SVC Elements of those ''n'' GoPs. The output of SVC Auth, the same set of GoPs with authentication information added, is delivered to the SVC Writer module. The SVC Writer module converts the logical structure back into a raw bitstream. This is done by encapsulating the authentication information in appropriate NAL units and inserting them into the original bitstream. We use SEI NAL units (NAL unit type 6) of H.264/SVC for this purpose. An SEI NAL unit can contain one or more SEI messages. To attach information to a specific layer, we embed it in an Unregistered User Data SEI message, relate it to the desired temporal/spatial/quality layer by encapsulating (nesting) it in a Scalable Nesting SEI message, and finally encapsulate the result in an SEI NAL unit.&lt;br /&gt;
&lt;br /&gt;
The svcAuth verification module operates similarly to the authentication module, with minor differences. The received substream first goes through the Stream Parser and SVC Reader modules and reaches a module called SVC Verif. SVC Verif proceeds in a similar way to SVC Auth: it recomputes the spatial layer, frame, GoP, and block digests from the reconstructed video and compares them to the digests provided as authentication information. In case of any mismatch, the mismatching part of the data, such as a video frame, is marked as unauthentic and discarded. The remaining parts are accepted as authentic if and only if the digital signature of the corresponding GoP block is successfully verified. The output of SVC Verif is sent to the receiver application for playback.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== People ==&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.sfu.ca/~mhefeeda/ Mohamed Hefeeda] (Assistant Professor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.sfu.ca/~kma26/personal/ Kianoosh Mokhtarian] (MSc Student)&lt;/div&gt;</summary>
		<author><name>Kianoosh</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=svcAuth&amp;diff=2746</id>
		<title>svcAuth</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=svcAuth&amp;diff=2746"/>
		<updated>2009-06-23T07:33:25Z</updated>

		<summary type="html">&lt;p&gt;Kianoosh: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Downloads ==&lt;br /&gt;
&lt;br /&gt;
svcAuth is an ongoing project and its software is continuously improved. The latest version of svcAuth can be accessed through [https://cs-svn.cs.surrey.sfu.ca/nsl/browser/svcAuth our subversion server]. svcAuth can also be downloaded as a single zipped file from [http://nsl.cs.sfu.ca/resources/svcAuth.tar.gz here]. svcAuth is released under [http://www.gnu.org/licenses/gpl-3.0.txt GPLv3].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Introduction to svcAuth ==&lt;br /&gt;
&lt;br /&gt;
We have designed and implemented an authentication scheme for H.264/SVC streams, called ''svcAuth''. svcAuth supports the full flexibility of SVC and allows verification of all possible substreams. In addition, it is designed for end-to-end authentication, in which only the content provider and the receiving devices need to be aware of the authentication mechanism. Therefore, when multimedia streams are distributed at large scale over third-party Content Distribution Networks (CDNs), whose proxies may adapt scalable streams for different users, the proxies do not need to understand the authentication scheme, i.e., the authentication process is transparent to them; the authentication information is embedded in SVC streams in a format-compliant manner.&lt;br /&gt;
&lt;br /&gt;
svcAuth can be employed by any multimedia streaming application as a software/hardware add-on, without requiring any change to the encoders/decoders. Specifically, as shown in the following figure, we add an authentication module to the provider side, which post-processes the encoded stream and embeds in it the information required for verification. At the receivers, we add a verification module, which verifies the received stream using the information embedded in it and passes the verified stream to the player.&lt;br /&gt;
&lt;br /&gt;
[[Image:svcAuth_placement.png|center|border|456px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note that receivers who do not have the verification module and do not support the svcAuth authentication scheme can still receive and decode the streams, since the scheme is transparent.&lt;br /&gt;
&lt;br /&gt;
svcAuth is available as an open-source library implemented in Java, to support portability across different platforms.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Overview of svcAuth Architecture ==&lt;br /&gt;
&lt;br /&gt;
The svcAuth authentication module, which is used at the content provider side, is shown below:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:svcAuth_auth_module.png|center|border|450px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This module is placed after the video encoding process and before transmission, and operates as follows. The video stream is first parsed by the Stream Parser module, which extracts NAL units from the bitstream, parses their headers, and delivers them as logical objects to the SVC Reader module. The SVC Reader module determines the structure of the SVC stream from the NAL units. For this purpose, as shown in the figure, it needs to buffer a number of NAL units, e.g., to determine the last NAL unit of the current video frame, which is done by detecting the first NAL unit of the next frame. The SVC Reader module outputs a logical view of the stream as GoPs, frames, and different types of layers. We refer to these entities as SVC Elements.&lt;br /&gt;
&lt;br /&gt;
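As a concrete illustration of the buffering step described above, frame-boundary detection by lookahead can be sketched as follows. This is a hypothetical Python sketch (svcAuth itself is a Java library); the NalUnit class and its frame_num field are illustrative assumptions, not the actual svcAuth API.

```python
# Hypothetical sketch of the SVC Reader buffering step: a frame is known
# to be complete only when the first NAL unit of the next frame arrives.
class NalUnit:
    def __init__(self, frame_num, payload):
        self.frame_num = frame_num    # frame this NAL unit belongs to
        self.payload = payload        # raw NAL unit bytes

def group_into_frames(nal_units):
    """Buffer NAL units; close the current frame when a NAL unit with a
    different frame_num (the first NAL of the next frame) is seen."""
    frames, current = [], []
    for nal in nal_units:
        if current and nal.frame_num != current[0].frame_num:
            frames.append(current)    # previous frame is now complete
            current = []
        current.append(nal)
    if current:
        frames.append(current)        # flush the last buffered frame
    return frames
```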
Each SVC Element in the logical view returned by the SVC Reader contains an array of authentication information messages, which is initially empty. These arrays are filled by the SVC Auth module. The SVC Auth module takes as input a block of ''n'' GoPs, computes the required authentication information, and adds it to the SVC Elements of those ''n'' GoPs. The output of SVC Auth, which is the same set of GoPs as the input with authentication information added, is delivered to the SVC Writer module. The SVC Writer module converts the logical structure back into a raw bitstream. This is done by encapsulating the authentication information in appropriate NAL units and inserting them into the original bitstream. We use SEI NAL units (NAL unit type 6) of H.264/SVC for this purpose. An SEI NAL unit can contain one or more SEI messages. To attach information to a specific layer, we embed it in an Unregistered User Data SEI message, relate it to the desired temporal/spatial/quality layer by encapsulating (nesting) it in a Scalable Nesting SEI message, and finally encapsulate the result in an SEI NAL unit.&lt;br /&gt;
&lt;br /&gt;
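The hierarchical digest computation performed over a block of ''n'' GoPs can be sketched as follows: frame digests roll up into a GoP digest, and GoP digests into one block digest that is signed once, amortizing the signature cost. This is a hypothetical Python illustration (svcAuth itself is in Java), and SHA-256 is an assumed hash choice, not necessarily the one svcAuth uses.

```python
import hashlib

# Hypothetical sketch of the hierarchical digests described above.
# A frame is a list of NAL unit payloads; a GoP is a list of frames.
def frame_digest(frame):
    h = hashlib.sha256()
    for payload in frame:
        h.update(payload)
    return h.digest()

def gop_digest(frames):
    h = hashlib.sha256()
    for frame in frames:
        h.update(frame_digest(frame))
    return h.digest()

def block_digest(gops):
    # One digital signature over this digest covers all n GoPs in the
    # block, so the expensive signing operation is amortized.
    h = hashlib.sha256()
    for gop in gops:
        h.update(gop_digest(gop))
    return h.digest()
```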
The svcAuth verification module operates similarly to the authentication module, with minor differences. The received substream first goes through the Stream Parser and SVC Reader modules and reaches a module called SVC Verif. SVC Verif proceeds in a similar way to SVC Auth: it recomputes spatial layer, frame, GoP, and block digests from the reconstructed video, and compares them to the digests provided as the authentication information. In case of any mismatch, the mismatching part of the data, such as a video frame, is marked as unauthentic and discarded. The remaining parts are accepted as authentic if and only if the digital signature of the corresponding GoP block is successfully verified. The output of SVC Verif is sent to the receiver application for playback.&lt;br /&gt;
&lt;br /&gt;
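The digest-comparison step of the verification module can be sketched as follows. Again, this is a hypothetical Python illustration: the function name and the per-frame granularity are chosen for the sketch rather than taken from the svcAuth code, and final trust still requires the GoP-block signature to verify.

```python
import hashlib
import hmac

# Hypothetical sketch: recompute digests from the received substream and
# compare them to the embedded authentication info, discarding any
# mismatching frame, as the verification description above outlines.
def verify_frames(received_frames, embedded_digests):
    """Return only the frames whose recomputed digest matches the
    digest embedded as authentication information."""
    authentic = []
    for frame, expected in zip(received_frames, embedded_digests):
        recomputed = hashlib.sha256(frame).digest()
        if hmac.compare_digest(recomputed, expected):
            authentic.append(frame)   # keep; mismatches are dropped
    return authentic
```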
&lt;br /&gt;
&lt;br /&gt;
== People ==&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.sfu.ca/~mhefeeda/ Mohamed Hefeeda] (Assistant Professor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.sfu.ca/~kma26/personal/ Kianoosh Mokhtarian] (MSc Student)&lt;/div&gt;</summary>
		<author><name>Kianoosh</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=File:svcAuth_auth_module.png&amp;diff=2745</id>
		<title>File:svcAuth auth module.png</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=File:svcAuth_auth_module.png&amp;diff=2745"/>
		<updated>2009-06-23T07:29:23Z</updated>

		<summary type="html">&lt;p&gt;Kianoosh: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Kianoosh</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=File:svcAuth_placement.png&amp;diff=2744</id>
		<title>File:svcAuth placement.png</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=File:svcAuth_placement.png&amp;diff=2744"/>
		<updated>2009-06-23T07:00:24Z</updated>

		<summary type="html">&lt;p&gt;Kianoosh: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Kianoosh</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=File:svcAuth_placement.jpg&amp;diff=2743</id>
		<title>File:svcAuth placement.jpg</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=File:svcAuth_placement.jpg&amp;diff=2743"/>
		<updated>2009-06-23T06:58:52Z</updated>

		<summary type="html">&lt;p&gt;Kianoosh: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Kianoosh</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=Security_of_Scalable_Multimedia_Streams&amp;diff=2742</id>
		<title>Security of Scalable Multimedia Streams</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=Security_of_Scalable_Multimedia_Streams&amp;diff=2742"/>
		<updated>2009-06-23T06:05:30Z</updated>

		<summary type="html">&lt;p&gt;Kianoosh: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The demand for multimedia services has been rapidly increasing over the past few years. More and more users rely on multimedia services for many aspects of their daily lives, including work, education, and entertainment. Multimedia content, however, is often distributed over open and insecure networks, such as the Internet. Accordingly, the secure delivery of multimedia streams is a critical problem. Secure delivery means authenticating multimedia streams so that all receivers can ensure that the content is original and has not been tampered with by any attacker.&lt;br /&gt;
&lt;br /&gt;
Various challenges need to be addressed for this purpose. First, the authentication mechanism, which can be computationally expensive, has to keep up with the online nature of the streams. Second, media content is often distributed over unreliable channels, where packet losses are not uncommon. The authentication scheme needs to function properly even in the presence of these losses. Third, media streams can be encoded in a scalable (or layered) manner to accommodate heterogeneous clients and varying network conditions. In this case, the authentication scheme has to successfully verify any substream extracted from the original stream. Finally, the authentication information added to the streams should be minimized in order to avoid increasing the already-high storage and network bandwidth requirements of multimedia content.&lt;br /&gt;
&lt;br /&gt;
We investigate these challenges for the authentication of scalable video streams in a computationally efficient manner, with low delay and communication overhead, and with high resilience against packet losses. Our main focus is on scalable videos encoded using the state-of-the-art video coding standard H.264/SVC, the Scalable Video Coding (SVC) extension of H.264/AVC. H.264/SVC offers great flexibility while incurring much lower overheads compared to classic scalable coding techniques. We have designed an authentication scheme for H.264/SVC streams that supports its full flexibility: it takes into account the coding characteristics of the H.264/SVC scalability model and enables verification of all possible substreams. In addition, the proposed scheme is designed for end-to-end authentication of streams. In an end-to-end authentication procedure, a content provider prepares the authenticated video and sends it to receivers, possibly through a third-party Content Delivery Network (CDN) with proxy servers that may need to adapt the flexible video streams. These proxies, or any other entities involved in the delivery process, do not have to understand our authentication scheme, which is an important advantage of the proposed scheme.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== People ==&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.sfu.ca/~mhefeeda/ Mohamed Hefeeda] (Assistant Professor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.sfu.ca/~kma26/personal/ Kianoosh Mokhtarian] (MSc Student)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Publications ==&lt;br /&gt;
&lt;br /&gt;
* M. Hefeeda and K. Mokhtarian, Authentication Schemes for Multimedia Streams: Quantitative Analysis and Comparison, ''ACM Transactions on Multimedia Computing, Communications, and Applications'', Accepted January 2009.&lt;br /&gt;
&lt;br /&gt;
* K. Mokhtarian and M. Hefeeda,  End-to-End Secure Delivery of Scalable Video Streams, In Proc. of International Workshop on Network and Operating Systems Support for Digital Audio and Video (NOSSDAV'09), 6 pages, Williamsburg, VA, June 2009.&lt;br /&gt;
&lt;br /&gt;
* M. Hefeeda and K. Mokhtarian, [http://www.cs.sfu.ca/~mhefeeda/Papers/pv09.pdf Analysis of Authentication Schemes for Nonscalable Video Streams], In Proc. of IEEE International Packet Video Workshop (PV'09), 10 pages, Seattle, WA, May 2009.  Slides [ [http://www.cs.sfu.ca/~mhefeeda/Talks/pv09.pptx ppt] ] [ [http://www.cs.sfu.ca/~mhefeeda/Talks/pv09.pdf pdf] ] &lt;br /&gt;
&lt;br /&gt;
* M. Hefeeda and K. Mokhtarian, Authentication of Scalable Multimedia Streams, Book Chapter in Handbook on Security and Networks, World Scientific Publishing Co., To appear in Summer 2009.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Software ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* '''[[svcAuth]]''': A Library for Authenticating H.264/SVC Video Streams&lt;/div&gt;</summary>
		<author><name>Kianoosh</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=svcAuth&amp;diff=2741</id>
		<title>svcAuth</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=svcAuth&amp;diff=2741"/>
		<updated>2009-06-22T12:40:31Z</updated>

		<summary type="html">&lt;p&gt;Kianoosh: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Downloads ==&lt;br /&gt;
* [https://cs-svn.cs.surrey.sfu.ca/nsl/browser/svcAuth Browse svcAuth source code through our subversion system]&lt;br /&gt;
** [http://nsl.cs.sfu.ca/resources/svcAuth.tar.gz Download svcAuth library]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
We have designed and implemented an authentication schemes for H.264/SVC streams, called ''svcAuth''. svcAuth supports the full flexibility of SVC and allows verification of all possible substreams. In addition, it is designed for end-to-end authentication, in which only the content provider and the receiving devices need to be aware of the authentication mechanism. Therefore, when distributing multimedia streams in large scale over third-party Content Distribution Networks (CDNs), which contain proxies that may adapt scalable streams for different users, the proxies do not need to understand the authentication scheme, i.e., the authentication process and the authentication information embedded in streams are transparent; these information are embedded in SVC streams in a format-compliant manner.&lt;br /&gt;
&lt;br /&gt;
svcAuth can be employed by any multimedia streaming application as a software/hardware add-on, without requiring any change to the encoders/decoders. Specifically, we add an authentication module to the provider side, which post-processes the encoded stream and embeds in it the information required for verification. At the receivers, we add a verification module, which verifies the received stream using the information embedded in it and passes the verified stream to the player. Note that receivers who do not have the verification module and do not support the svcAuth authentication scheme can still receive and decode the streams, since the scheme is transparent. Our current implementation of these modules is available as an open-source library, implemented in Java for easy portability to various platforms. By using the svcAuth library, users who demand multimedia streams anytime, anywhere can always ensure that the content they watch is original and has not undergone any malicious manipulation.&lt;br /&gt;
&lt;br /&gt;
svcAuth is an ongoing project and its software is continuously improved. The latest version of the svcAuth software can be accessed through our subversion system at [https://cs-svn.cs.surrey.sfu.ca/nsl/browser/svcAuth our subversion server]. It is released under [http://www.gnu.org/licenses/gpl-3.0.txt GPLv3].&lt;/div&gt;</summary>
		<author><name>Kianoosh</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=svcAuth&amp;diff=2740</id>
		<title>svcAuth</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=svcAuth&amp;diff=2740"/>
		<updated>2009-06-20T06:31:57Z</updated>

		<summary type="html">&lt;p&gt;Kianoosh: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* [https://cs-svn.cs.surrey.sfu.ca/nsl/browser/svcAuth Browse source code through our subversion system]&lt;br /&gt;
** [http://nsl.cs.sfu.ca/resources/svcAuth.zip Download the library]&lt;br /&gt;
** Note: version 1.0 (the first release) is now available, and the next version with major improvements is going to be released on June 21st.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We have designed and implemented an authentication scheme for H.264/SVC streams, called ''svcAuth''. svcAuth supports the full flexibility of SVC and allows verification of all possible substreams. In addition, it is designed for end-to-end authentication, in which only the content provider and the receiving devices need to be aware of the authentication mechanism. Therefore, when distributing multimedia streams at large scale over third-party Content Distribution Networks (CDNs), whose proxies may adapt scalable streams for different users, the proxies do not need to understand the authentication scheme; the authentication process and the embedded authentication information are transparent, and this information is embedded in SVC streams in a format-compliant manner.&lt;br /&gt;
&lt;br /&gt;
svcAuth can be employed by any multimedia streaming application as a software/hardware add-on, without requiring any change to the encoders/decoders. Specifically, we add an authentication module to the provider side, which post-processes the encoded stream and embeds in it the information required for verification. At the receivers, we add a verification module, which verifies the received stream using the information embedded in it and passes the verified stream to the player. Note that receivers who do not have the verification module and do not support the svcAuth authentication scheme can still receive and decode the streams, since the scheme is transparent. Our current implementation of these modules is available as an open-source library, implemented in Java for easy portability to various platforms. By using the svcAuth library, users who demand multimedia streams anytime, anywhere can always ensure that the content they watch is original and has not undergone any malicious manipulation.&lt;br /&gt;
&lt;br /&gt;
svcAuth is an ongoing project and its software is continuously improved. The latest version of the svcAuth software can be accessed through our subversion system at [https://cs-svn.cs.surrey.sfu.ca/nsl/browser/svcAuth our subversion server]. It is released under [http://www.gnu.org/licenses/gpl-3.0.txt GPLv3].&lt;/div&gt;</summary>
		<author><name>Kianoosh</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=svcAuth&amp;diff=2738</id>
		<title>svcAuth</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=svcAuth&amp;diff=2738"/>
		<updated>2009-06-16T22:30:39Z</updated>

		<summary type="html">&lt;p&gt;Kianoosh: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* [https://cs-svn.cs.surrey.sfu.ca/nsl/browser/svcAuth Browse source code through our subversion system]&lt;br /&gt;
** [http://nsl.cs.sfu.ca/resources/svcAuth.zip Download the library]&lt;br /&gt;
** Note: version 1.0 (the first release) is now available, and the next version with major improvements is to be released by June 20th.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We have designed and implemented an authentication scheme for H.264/SVC streams, called ''svcAuth''. svcAuth supports the full flexibility of SVC and allows verification of all possible substreams. In addition, it is designed for end-to-end authentication, in which only the content provider and the receiving devices need to be aware of the authentication mechanism. Therefore, when distributing multimedia streams at large scale over third-party Content Distribution Networks (CDNs), whose proxies may adapt scalable streams for different users, the proxies do not need to understand the authentication scheme; the authentication process and the embedded authentication information are transparent, and this information is embedded in SVC streams in a format-compliant manner.&lt;br /&gt;
&lt;br /&gt;
svcAuth can be employed by any multimedia streaming application as a software/hardware add-on, without requiring any change to the encoders/decoders. Specifically, we add an authentication module to the provider side, which post-processes the encoded stream and embeds in it the information required for verification. At the receivers, we add a verification module, which verifies the received stream using the information embedded in it and passes the verified stream to the player. Note that receivers who do not have the verification module and do not support the svcAuth authentication scheme can still receive and decode the streams, since the scheme is transparent. Our current implementation of these modules is available as an open-source library, implemented in Java for easy portability to various platforms. By using the svcAuth library, users who demand multimedia streams anytime, anywhere can always ensure that the content they watch is original and has not undergone any malicious manipulation.&lt;br /&gt;
&lt;br /&gt;
svcAuth is an ongoing project and its software is continuously improved. The latest version of the svcAuth software can be accessed through our subversion system at [https://cs-svn.cs.surrey.sfu.ca/nsl/browser/svcAuth our subversion server]. It is released under [http://www.gnu.org/licenses/gpl-3.0.txt GPLv3].&lt;/div&gt;</summary>
		<author><name>Kianoosh</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=svcAuth&amp;diff=2733</id>
		<title>svcAuth</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=svcAuth&amp;diff=2733"/>
		<updated>2009-06-10T01:46:11Z</updated>

		<summary type="html">&lt;p&gt;Kianoosh: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We have designed and implemented an authentication scheme for H.264/SVC streams, called ''svcAuth''. svcAuth supports the full flexibility of SVC and allows verification of all possible substreams. In addition, it is designed for end-to-end authentication, in which only the content provider and the receiving devices need to be aware of the authentication mechanism. Therefore, when distributing multimedia streams at large scale over third-party Content Distribution Networks (CDNs), whose proxies may adapt scalable streams for different users, the proxies do not need to understand the authentication scheme; the authentication process and the embedded authentication information are transparent, and this information is embedded in SVC streams in a format-compliant manner.&lt;br /&gt;
&lt;br /&gt;
svcAuth can be employed by any multimedia streaming application as a software/hardware add-on, without requiring any change to the encoders/decoders. Specifically, we add an authentication module to the provider side, which post-processes the encoded stream and embeds in it the information required for verification. At the receivers, we add a verification module, which verifies the received stream using the information embedded in it and passes the verified stream to the player. Note that receivers who do not have the verification module and do not support the svcAuth authentication scheme can still receive and decode the streams, since the scheme is transparent. Our current implementation of these modules is available as an open-source library, implemented in Java for easy portability to various platforms. By using the svcAuth library, users who demand multimedia streams anytime, anywhere can always ensure that the content they watch is original and has not undergone any malicious manipulation.&lt;br /&gt;
&lt;br /&gt;
svcAuth is an ongoing project and its software is continuously improved. The latest version of the svcAuth software can be accessed through our subversion system at [https://cs-svn.cs.surrey.sfu.ca/nsl/browser/svcAuth our subversion server]. It is released under [http://www.gnu.org/licenses/gpl-3.0.txt GPLv3].&lt;/div&gt;</summary>
		<author><name>Kianoosh</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=svcAuth&amp;diff=2732</id>
		<title>svcAuth</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=svcAuth&amp;diff=2732"/>
		<updated>2009-06-10T01:42:13Z</updated>

		<summary type="html">&lt;p&gt;Kianoosh: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We have designed and implemented an authentication scheme for H.264/SVC streams, called ''svcAuth''. svcAuth supports the full flexibility of SVC and allows verification of all possible substreams. In addition, it is designed for end-to-end authentication, in which only the content provider and the receiving devices need to be aware of the authentication mechanism. Therefore, when distributing multimedia streams at large scale over third-party Content Distribution Networks (CDNs), whose proxies may adapt scalable streams for different users, the proxies do not need to understand the authentication scheme; the authentication process and the embedded authentication information are transparent, and this information is embedded in SVC streams in a format-compliant manner.&lt;br /&gt;
&lt;br /&gt;
svcAuth can be employed by any multimedia streaming application as a software/hardware add-on, without requiring any change to the encoders/decoders. Specifically, we add an authentication module to the provider side, which post-processes the encoded stream and embeds in it the information required for verification. At the receivers, we add a verification module, which verifies the received stream using the information embedded in it and passes the verified stream to the player. Note that receivers who do not have the verification module and do not support the svcAuth authentication scheme can still receive and decode the streams, since the scheme is transparent. Our current implementation of these modules is available as an open-source library, implemented in Java for easy portability to various platforms. By using the svcAuth library, users who demand multimedia streams anytime, anywhere can always ensure that the content they watch is original and has not undergone any malicious manipulation.&lt;br /&gt;
&lt;br /&gt;
svcAuth is an ongoing project and its software is continuously improved. The latest version of the svcAuth software can be accessed through our subversion system at [https://cs-svn.cs.surrey.sfu.ca/nsl/browser/svcAuth our subversion server].&lt;/div&gt;</summary>
		<author><name>Kianoosh</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=svcAuth&amp;diff=2731</id>
		<title>svcAuth</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=svcAuth&amp;diff=2731"/>
		<updated>2009-06-10T01:03:46Z</updated>

		<summary type="html">&lt;p&gt;Kianoosh: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We have designed and implemented an authentication scheme for H.264/SVC streams, called svcAuth. svcAuth supports the full flexibility of SVC and allows verification of all possible substreams. In addition, it is designed for end-to-end authentication, in which only the content provider and the receiving devices need to be aware of the authentication mechanism. Therefore, when distributing multimedia streams at large scale over third-party Content Distribution Networks (CDNs), whose proxies may adapt scalable streams for different users, the proxies do not need to understand the authentication scheme; the authentication process and the embedded authentication information are transparent, and this information is embedded in SVC streams in a format-compliant manner.&lt;br /&gt;
&lt;br /&gt;
svcAuth can be employed by any multimedia streaming application as a software/hardware add-on, without requiring any change to the encoders/decoders. Specifically, we add an authentication module to the provider side, which post-processes the encoded stream and embeds in it the information required for verification. At the receivers, we add a verification module, which verifies the received stream using the information embedded in it and passes the verified stream to the player. Note that receivers who do not have the verification module and do not support the svcAuth authentication scheme can still receive and decode the streams, since the scheme is transparent. Our current implementation of these modules is available as an open-source library called ''svcAuth'', which is implemented in Java for easy portability to various platforms. By using the svcAuth library, users who demand multimedia streams anytime, anywhere can always ensure that the content they watch is original and has not undergone any malicious manipulation.&lt;/div&gt;</summary>
		<author><name>Kianoosh</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=svcAuth&amp;diff=2730</id>
		<title>svcAuth</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=svcAuth&amp;diff=2730"/>
		<updated>2009-06-10T00:41:54Z</updated>

		<summary type="html">&lt;p&gt;Kianoosh: New page: hello world!&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;hello world!&lt;/div&gt;</summary>
		<author><name>Kianoosh</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=Security_of_Scalable_Multimedia_Streams&amp;diff=2729</id>
		<title>Security of Scalable Multimedia Streams</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=Security_of_Scalable_Multimedia_Streams&amp;diff=2729"/>
		<updated>2009-06-10T00:41:24Z</updated>

		<summary type="html">&lt;p&gt;Kianoosh: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The demand for multimedia services has been rapidly increasing over the past few years. More and more users rely on multimedia services for many aspects of their daily lives, including work, education, and entertainment. This makes the security of delivering multimedia content of great importance. Therefore, we focus on providing source authentication and data integrity services for media streams, i.e., ensuring that the streams being played by receivers are original and have not been tampered with by malicious attackers. Our special focus is on scalable video streams, which are becoming very popular owing to recent advances in scalable coding and the increasing heterogeneity of receiver devices.&lt;br /&gt;
&lt;br /&gt;
A number of major challenges arise when considering the authentication of scalable video streams. First, digital signature operations, which are the foundation of authentication processes, are too computationally expensive to be performed frequently in real time, especially by limited-capability devices such as cell phones and PDAs. Second, the flexibility of scalable videos needs to be supported by the authentication scheme: a scalable video is encoded and signed once, and many valid substreams can be extracted from one bitstream, each of which needs to be verifiable. Third, packet losses frequently occur in transmission networks, especially in wireless channels. Several techniques, such as Forward Error Correction (FEC) and interleaved packetization, counteract the impact of loss on video transmission; this impact is even more pronounced for &amp;quot;authenticated&amp;quot; video. Due to the dependencies that the authentication mechanism may impose on video packets, it may amplify the loss ratio of the network: a video packet and its authentication information must both be successfully received, or the video packet is unusable even though it was not lost. The authentication scheme should introduce zero or negligible amplification of this kind. Fourth, some additional information needs to be attached to the stream by the authentication mechanism so that receivers can verify the stream. The amount of this information needs to be carefully controlled, since bandwidth is a limited resource and should not be occupied to a non-negligible extent by authentication information.&lt;br /&gt;
&lt;br /&gt;
We are investigating the above four challenges, and several other subtle issues, for the authentication of scalable video streams in a computationally efficient manner, with low delay and communication overhead and high resilience to packet losses. We have also performed systematic tampering with scalable videos, even under very limited manipulation capabilities, to highlight the importance of our approaches. Our main focus is on recent scalable video structures, such as the state-of-the-art H.264/SVC standard, which follow a more complex structure than traditional simple scalable videos and can flexibly support different types of scalability using various coding tools.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== People ==&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.sfu.ca/~mhefeeda/ Mohamed Hefeeda] (Assistant Professor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.sfu.ca/~kma26/personal/ Kianoosh Mokhtarian] (MSc Student)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Publications ==&lt;br /&gt;
&lt;br /&gt;
* M. Hefeeda and K. Mokhtarian, “Authentication Schemes for Multimedia Streams: Quantitative Analysis and Comparison,” To appear in ACM Transactions on Multimedia Computing, Communications and Applications.&lt;br /&gt;
&lt;br /&gt;
* M. Hefeeda and K. Mokhtarian, “Security of Scalable Multimedia Streams,” Handbook on Security and Networks, World Scientific Publishing Co., To appear in summer 2009.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Software ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* '''[[svcAuth]]''': An Authentication Scheme for Securing the Delivery of H.264/SVC Video Streams&lt;/div&gt;</summary>
		<author><name>Kianoosh</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=Security_of_Scalable_Multimedia_Streams&amp;diff=2633</id>
		<title>Security of Scalable Multimedia Streams</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=Security_of_Scalable_Multimedia_Streams&amp;diff=2633"/>
		<updated>2009-03-11T20:04:17Z</updated>

		<summary type="html">&lt;p&gt;Kianoosh: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The demand for multimedia services has been rapidly increasing over the past few years. More and more users rely on multimedia services for many aspects of their daily lives, including work, education, and entertainment. This makes securing the delivery of multimedia content highly important. We therefore focus on providing source authentication and data integrity services for media streams, i.e., ensuring that the streams played by receivers are original and have not been tampered with by malicious attackers. Our special focus is on scalable video streams, which are becoming very popular given recent advances in scalable coding and the increasing heterogeneity of receiver devices.&lt;br /&gt;
&lt;br /&gt;
A number of major challenges arise when considering authentication of scalable video streams. First, digital signature operations, the foundation of any authentication process, are too computationally expensive to perform frequently in real time, especially on limited-capability devices such as cell phones and PDAs. Second, the authentication scheme must support the flexibility of scalable video: the video is encoded and signed once, yet many valid substreams can be extracted from one bitstream, and each of them needs to be verifiable. Third, packet losses occur frequently in transmission networks, especially in wireless channels. Techniques such as Forward Error Correction (FEC) and interleaved packetization counteract the impact of loss, but this impact is even more pronounced for authenticated video: because the authentication mechanism may introduce dependencies among video packets, it can amplify the effective loss ratio of the network, since a video packet and its authentication information must both be successfully received or the video packet is unusable even though it was not itself lost. The authentication scheme should add zero or negligible amplification of this kind. Fourth, the authentication mechanism must attach additional information to the stream so that receivers can verify it. The amount of this information must be carefully controlled, since bandwidth is a limited resource and should not be occupied by a non-negligible authentication overhead.&lt;br /&gt;
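To make the loss amplification concrete, here is a small back-of-the-envelope sketch in Python. It assumes independent packet losses and a hypothetical scheme in which each video packet additionally depends on a given number of authentication packets; both assumptions are simplifications for illustration only:

```python
def effective_loss(p, auth_packets=1):
    # p: network packet-loss ratio.
    # auth_packets: number of separate authentication packets that must
    # also arrive for one video packet to be verifiable (a hypothetical
    # dependency count, for illustration).
    # A video packet is usable only if it and all of its authentication
    # packets survive, each with probability 1 - p, assuming independent
    # losses.
    survive = (1.0 - p) ** (1 + auth_packets)
    return 1.0 - survive

# Example: a 10% network loss ratio becomes a 19% usable-packet loss
# ratio when each packet depends on one authentication packet.
```

A well-designed scheme keeps the usable-packet loss ratio as close as possible to the raw network loss ratio p.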
&lt;br /&gt;
We are investigating the above four challenges, along with several other subtle issues, to authenticate scalable video streams in a computationally efficient manner, with low delay, low communication overhead, and high resilience to packet losses. We have also performed systematic tampering attacks on scalable videos, even under very limited manipulation possibilities, to highlight the importance of our approaches. Our main focus is on recent scalable video structures, such as the state-of-the-art H.264/SVC standard, which are more complex than traditional scalable videos and can flexibly support different types of scalability using various coding tools.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== People ==&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.sfu.ca/~mhefeeda/ Mohamed Hefeeda] (Assistant Professor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.sfu.ca/~kma26/personal/ Kianoosh Mokhtarian] (MSc Student)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Publications ==&lt;br /&gt;
&lt;br /&gt;
* M. Hefeeda and K. Mokhtarian, “Authentication Schemes for Multimedia Streams: Quantitative Analysis and Comparison,” To appear in ACM Transactions on Multimedia Computing, Communications and Applications.&lt;br /&gt;
&lt;br /&gt;
* M. Hefeeda and K. Mokhtarian, “Security of Scalable Multimedia Streams,” Handbook on Security and Networks, World Scientific Publishing Co., To appear in summer 2009.&lt;/div&gt;</summary>
		<author><name>Kianoosh</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=Writing_Guidelines&amp;diff=2600</id>
		<title>Writing Guidelines</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=Writing_Guidelines&amp;diff=2600"/>
		<updated>2009-03-09T22:08:42Z</updated>

		<summary type="html">&lt;p&gt;Kianoosh: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Notes on Latex ==&lt;br /&gt;
&lt;br /&gt;
IEEE, ACM, and many other publishers require that all fonts be embedded in the document. Embedding fonts ensures that the document looks and prints the way the authors intended across different systems.&lt;br /&gt;
&lt;br /&gt;
* Easiest Way (if you are using TeXnicCenter): download [http://nsl.cs.sfu.ca/resources/latexPDF_profile.tco this Output Profile] and import it into your TeXnicCenter system (Build --&amp;gt; Define Output Profiles --&amp;gt; Import ...). Always compile your document with that profile.&lt;br /&gt;
* Alternatively, you can use the following argument line for the ps2pdf post-processor (in the profile editor of TeXnicCenter):&lt;br /&gt;
 -sDEVICE=pdfwrite -q -dBATCH -dNOPAUSE -dSAFER -dPDFX -dPDFSETTINGS=/prepress&lt;br /&gt;
 -dAutoFilterColorImages=false -dColorImageFilter=/FlateEncode&lt;br /&gt;
 -dAutoFilterGrayImages=false -dGrayImageFilter=/FlateEncode -sOutputFile=&amp;quot;%bm.pdf&amp;quot;&lt;br /&gt;
 -c save pop -f &amp;quot;%bm.ps&amp;quot;      &lt;br /&gt;
* You can use the following script with Ghostscript (the original source of the script is [http://www.daniel-lemire.com/blog/archives/2005/08/29/getting-pdflatex-to-embed-all-fonts/ here]). Remember to change &amp;quot;outFile.pdf&amp;quot; and &amp;quot;inputFile.pdf&amp;quot; to your file names.&lt;br /&gt;
 gs -sDEVICE=pdfwrite -q -dBATCH -dNOPAUSE -dSAFER \&lt;br /&gt;
 -dPDFX \&lt;br /&gt;
 -dPDFSETTINGS=/prepress \&lt;br /&gt;
 -dAutoFilterColorImages=false -dColorImageFilter=/FlateEncode \&lt;br /&gt;
 -dAutoFilterGrayImages=false -dGrayImageFilter=/FlateEncode \&lt;br /&gt;
 -sOutputFile=outFile.pdf \&lt;br /&gt;
 -c `&amp;gt; setdistillerparams` \&lt;br /&gt;
 -f inputFile.pdf \&lt;br /&gt;
 -c quit &lt;br /&gt;
&lt;br /&gt;
== Notes on xfig ==&lt;br /&gt;
&lt;br /&gt;
* If we use 'Pattern' to fill an object, the resulting EPS file will contain Type 3 fonts, which cause problems in the PDF file and will not be accepted by many publishers. Use 'Filled' with 100% instead.&lt;br /&gt;
* Alternatively, a patch for xfig/transfig can be found [http://konstantin.shemyak.com/sw/ here]. With a patched xfig, we can generate EPS files with patterns that are free of Type 3 fonts. It is, however, reported that these EPS files do not look good in Ghostscript-based PS readers.&lt;br /&gt;
* There are a couple of other ways to typeset mathematical formulas in figures generated by xfig. We present one straightforward approach. Please read [http://web.engr.oregonstate.edu/~kaj/ltxtips.html#Figures the original page], where steps 1, 2, 3, 5, and 6 are mandatory and the others are optional. We summarize it as follows.&lt;br /&gt;
&lt;br /&gt;
# In xfig, write the equations or characters in the drawing. Enclose them in $ on both sides, e.g., $\phi$.&lt;br /&gt;
# Press the &amp;quot;Edit&amp;quot; icon and turn on the Special flag.&lt;br /&gt;
# Export the figure as Combined PostScript/LaTeX instead of the usual Encapsulated PostScript (EPS).&lt;br /&gt;
# Add \usepackage{epsfig} to your preamble.&lt;br /&gt;
# You might need to add \usepackage{color} to your preamble.&lt;br /&gt;
# When storing fig files in a different directory from the tex files, you need to manually modify the pstex_t files whenever you export them, e.g., change \epsfig{file=coverage.pstex}% into \epsfig{file=../fig/coverage.pstex}%.&lt;br /&gt;
# Write the following command to insert your figure in TeX file:&lt;br /&gt;
 \begin{figure}[ht]&lt;br /&gt;
   \centering{&lt;br /&gt;
     \resizebox{.48\textwidth}{!} {&lt;br /&gt;
       \input{../fig/coverage.pstex_t}&lt;br /&gt;
     }&lt;br /&gt;
   }&lt;br /&gt;
   \caption{Test.} \label{fig:test}&lt;br /&gt;
 \end{figure}&lt;br /&gt;
&lt;br /&gt;
== Notes on Matlab ==&lt;br /&gt;
&lt;br /&gt;
* Creating EPS figures in Matlab (you may use the standard setting with font 20 instead):&lt;br /&gt;
: First, open the figure you would like to export and set line properties and fonts as follows:&lt;br /&gt;
 FontSize = 16.0&lt;br /&gt;
 FontName = Times New Roman&lt;br /&gt;
 LineWidth = 2.0&lt;br /&gt;
 MarkerSize = 4.0&lt;br /&gt;
: Next, run the attached script (&amp;quot;fix_export.m&amp;quot;) to fix the figure size. Note that you need to provide a parameter 'x' to the function fix_export; it sets the horizontal margin to the left of the y-axis. A typical value for 'x' is 0.75 inches, and the maximum is 1.5 inches.&lt;br /&gt;
 % fix_export.m&lt;br /&gt;
 function fix_export(x)&lt;br /&gt;
 set(gcf, 'WindowStyle', 'normal');&lt;br /&gt;
 set(gca, 'Unit', 'inches');&lt;br /&gt;
 set(gca, 'Position', [x 0.6 4.5 3.375]);&lt;br /&gt;
 set(gcf, 'Unit', 'inches');&lt;br /&gt;
 set(gcf, 'Position', [0.25 2.5 6.0 4.075]);&lt;br /&gt;
: After running the script, you can export the figure using &amp;quot;Save As ...&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
* A script (based on fix_export.m) that sets the labels and legend to LaTeX mode and fixes the figure size:&lt;br /&gt;
 set(gca,'FontSize',16)&lt;br /&gt;
 set(gca, 'FontName', 'Times New Roman');&lt;br /&gt;
 set(get(gca, 'xlabel'), 'interpreter', 'latex');&lt;br /&gt;
 set(get(gca, 'xlabel'), 'FontName', 'Times New Roman');&lt;br /&gt;
 set(get(gca, 'xlabel'), 'FontSize', 16);&lt;br /&gt;
 set(get(gca, 'ylabel'), 'interpreter', 'latex');&lt;br /&gt;
 set(get(gca, 'ylabel'), 'FontName', 'Times New Roman');&lt;br /&gt;
 set(get(gca, 'ylabel'), 'FontSize', 16);&lt;br /&gt;
 set(legend(), 'interpreter', 'latex');&lt;br /&gt;
 set(legend(), 'FontName', 'Times New Roman');&lt;br /&gt;
 set(legend(), 'FontSize', 16);&lt;br /&gt;
 set(gcf, 'WindowStyle', 'normal');&lt;br /&gt;
 set(gca, 'Unit', 'inches');&lt;br /&gt;
 set(gca, 'Position', [.6 .6 4.6 3.125]);&lt;br /&gt;
 set(gcf, 'Unit', 'inches');&lt;br /&gt;
 set(gcf, 'Position', [0.25 2.5 5.5 4.05]);&lt;br /&gt;
&lt;br /&gt;
* To extract data points from an existing Matlab fig file:&lt;br /&gt;
 hl = findobj(gca,'type','line');&lt;br /&gt;
 xx = get(hl,'xdata');&lt;br /&gt;
 yy = get(hl,'ydata');&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Notes on Exporting Microsoft Visio Drawings to EPS ==&lt;br /&gt;
&lt;br /&gt;
To convert Visio-drawn figures to EPS files while preserving their vector format, the following options are suggested. The first, and recommended, option is to print the Visio image to a PDF file using a PDF printer such as [http://sourceforge.net/projects/pdfcreator/ PDFCreator] and then convert the PDF file to EPS; [http://www.imagemagick.org/download/binaries/ ImageMagick] is recommended for the PDF-to-EPS conversion. The second option is to save the Visio image as a WMF or EMF file (Windows-specific vector image formats) and then convert it to EPS using a tool called [http://www.projectory.de/emftoeps/index.html EMF2EPS]. This tool requires a PostScript printer driver, such as [http://www.adobe.com/support/downloads/product.jsp?product=pdrv&amp;amp;platform=win Adobe's]. Note that printing the Visio file directly to EPS using this printer does not preserve the vector format of the images. All of the software mentioned is free.&lt;/div&gt;</summary>
		<author><name>Kianoosh</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=Writing_Guidelines&amp;diff=2599</id>
		<title>Writing Guidelines</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=Writing_Guidelines&amp;diff=2599"/>
		<updated>2009-03-09T22:05:36Z</updated>

		<summary type="html">&lt;p&gt;Kianoosh: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Notes on Latex ==&lt;br /&gt;
&lt;br /&gt;
IEEE, ACM, and many other publishers require that all fonts be embedded in the document. Embedding fonts ensures that the document looks and prints the way the authors intended across different systems.&lt;br /&gt;
&lt;br /&gt;
* Easiest Way (if you are using TeXnicCenter): download [http://nsl.cs.sfu.ca/resources/latexPDF_profile.tco this Output Profile] and import it into your TeXnicCenter system (Build --&amp;gt; Define Output Profiles --&amp;gt; Import ...). Always compile your document with that profile.&lt;br /&gt;
* Alternatively, you can use the following argument line for the ps2pdf post-processor (in the profile editor of TeXnicCenter):&lt;br /&gt;
 -sDEVICE=pdfwrite -q -dBATCH -dNOPAUSE -dSAFER -dPDFX -dPDFSETTINGS=/prepress&lt;br /&gt;
 -dAutoFilterColorImages=false -dColorImageFilter=/FlateEncode&lt;br /&gt;
 -dAutoFilterGrayImages=false -dGrayImageFilter=/FlateEncode -sOutputFile=&amp;quot;%bm.pdf&amp;quot;&lt;br /&gt;
 -c save pop -f &amp;quot;%bm.ps&amp;quot;      &lt;br /&gt;
* You can use the following script with Ghostscript (the original source of the script is [http://www.daniel-lemire.com/blog/archives/2005/08/29/getting-pdflatex-to-embed-all-fonts/ here]). Remember to change &amp;quot;outFile.pdf&amp;quot; and &amp;quot;inputFile.pdf&amp;quot; to your file names.&lt;br /&gt;
 gs -sDEVICE=pdfwrite -q -dBATCH -dNOPAUSE -dSAFER \&lt;br /&gt;
 -dPDFX \&lt;br /&gt;
 -dPDFSETTINGS=/prepress \&lt;br /&gt;
 -dAutoFilterColorImages=false -dColorImageFilter=/FlateEncode \&lt;br /&gt;
 -dAutoFilterGrayImages=false -dGrayImageFilter=/FlateEncode \&lt;br /&gt;
 -sOutputFile=outFile.pdf \&lt;br /&gt;
 -c `&amp;gt; setdistillerparams` \&lt;br /&gt;
 -f inputFile.pdf \&lt;br /&gt;
 -c quit &lt;br /&gt;
&lt;br /&gt;
== Notes on xfig ==&lt;br /&gt;
&lt;br /&gt;
* If we use 'Pattern' to fill an object, the resulting EPS file will contain Type 3 fonts, which cause problems in the PDF file and will not be accepted by many publishers. Use 'Filled' with 100% instead.&lt;br /&gt;
* Alternatively, a patch for xfig/transfig can be found [http://konstantin.shemyak.com/sw/ here]. With a patched xfig, we can generate EPS files with patterns that are free of Type 3 fonts. It is, however, reported that these EPS files do not look good in Ghostscript-based PS readers.&lt;br /&gt;
* There are a couple of other ways to typeset mathematical formulas in figures generated by xfig. We present one straightforward approach. Please read [http://web.engr.oregonstate.edu/~kaj/ltxtips.html#Figures the original page], where steps 1, 2, 3, 5, and 6 are mandatory and the others are optional. We summarize it as follows.&lt;br /&gt;
&lt;br /&gt;
# In xfig, write the equations or characters in the drawing. Enclose them in $ on both sides, e.g., $\phi$.&lt;br /&gt;
# Press the &amp;quot;Edit&amp;quot; icon and turn on the Special flag.&lt;br /&gt;
# Export the figure as Combined PostScript/LaTeX instead of the usual Encapsulated PostScript (EPS).&lt;br /&gt;
# Add \usepackage{epsfig} to your preamble.&lt;br /&gt;
# You might need to add \usepackage{color} to your preamble.&lt;br /&gt;
# When storing fig files in a different directory from the tex files, you need to manually modify the pstex_t files whenever you export them, e.g., change \epsfig{file=coverage.pstex}% into \epsfig{file=../fig/coverage.pstex}%.&lt;br /&gt;
# Write the following command to insert your figure in TeX file:&lt;br /&gt;
 \begin{figure}[ht]&lt;br /&gt;
   \centering{&lt;br /&gt;
     \resizebox{.48\textwidth}{!} {&lt;br /&gt;
       \input{../fig/coverage.pstex_t}&lt;br /&gt;
     }&lt;br /&gt;
   }&lt;br /&gt;
   \caption{Test.} \label{fig:test}&lt;br /&gt;
 \end{figure}&lt;br /&gt;
&lt;br /&gt;
== Notes on Matlab ==&lt;br /&gt;
&lt;br /&gt;
* Creating EPS figures in Matlab (you may use the standard setting with font 20 instead):&lt;br /&gt;
: First, open the figure you would like to export and set line properties and fonts as follows:&lt;br /&gt;
 FontSize = 16.0&lt;br /&gt;
 FontName = Times New Roman&lt;br /&gt;
 LineWidth = 2.0&lt;br /&gt;
 MarkerSize = 4.0&lt;br /&gt;
: Next, run the attached script (&amp;quot;fix_export.m&amp;quot;) to fix the figure size. Note that you need to provide a parameter 'x' to the function fix_export; it sets the horizontal margin to the left of the y-axis. A typical value for 'x' is 0.75 inches, and the maximum is 1.5 inches.&lt;br /&gt;
 % fix_export.m&lt;br /&gt;
 function fix_export(x)&lt;br /&gt;
 set(gcf, 'WindowStyle', 'normal');&lt;br /&gt;
 set(gca, 'Unit', 'inches');&lt;br /&gt;
 set(gca, 'Position', [x 0.6 4.5 3.375]);&lt;br /&gt;
 set(gcf, 'Unit', 'inches');&lt;br /&gt;
 set(gcf, 'Position', [0.25 2.5 6.0 4.075]);&lt;br /&gt;
: After running the script, you can export the figure using &amp;quot;Save As ...&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
* A script (based on fix_export.m) that sets the labels and legend to LaTeX mode and fixes the figure size:&lt;br /&gt;
 set(gca,'FontSize',16)&lt;br /&gt;
 set(gca, 'FontName', 'Times New Roman');&lt;br /&gt;
 set(get(gca, 'xlabel'), 'interpreter', 'latex');&lt;br /&gt;
 set(get(gca, 'xlabel'), 'FontName', 'Times New Roman');&lt;br /&gt;
 set(get(gca, 'xlabel'), 'FontSize', 16);&lt;br /&gt;
 set(get(gca, 'ylabel'), 'interpreter', 'latex');&lt;br /&gt;
 set(get(gca, 'ylabel'), 'FontName', 'Times New Roman');&lt;br /&gt;
 set(get(gca, 'ylabel'), 'FontSize', 16);&lt;br /&gt;
 set(legend(), 'interpreter', 'latex');&lt;br /&gt;
 set(legend(), 'FontName', 'Times New Roman');&lt;br /&gt;
 set(legend(), 'FontSize', 16);&lt;br /&gt;
 set(gcf, 'WindowStyle', 'normal');&lt;br /&gt;
 set(gca, 'Unit', 'inches');&lt;br /&gt;
 set(gca, 'Position', [.6 .6 4.6 3.125]);&lt;br /&gt;
 set(gcf, 'Unit', 'inches');&lt;br /&gt;
 set(gcf, 'Position', [0.25 2.5 5.5 4.05]);&lt;br /&gt;
&lt;br /&gt;
* To extract data points from an existing Matlab fig file:&lt;br /&gt;
 hl = findobj(gca,'type','line');&lt;br /&gt;
 xx = get(hl,'xdata');&lt;br /&gt;
 yy = get(hl,'ydata');&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Notes on Exporting Microsoft Visio Drawings to EPS ==&lt;br /&gt;
&lt;br /&gt;
To convert Visio-drawn figures to EPS files while preserving their vector format, the following options are suggested. The first, and recommended, option is to print the Visio image to a PDF file using a PDF printer such as [http://sourceforge.net/projects/pdfcreator/ PDFCreator] and then convert the PDF file to EPS; [http://www.imagemagick.org/download/binaries/ ImageMagick] is recommended for the PDF-to-EPS conversion. The second option is to save the Visio image as a WMF or EMF file (Windows-specific vector image formats) and then convert it to EPS using a tool called [http://www.projectory.de/emftoeps/index.html EMF2EPS]. This tool requires a PostScript printer driver, such as [http://www.adobe.com/support/downloads/product.jsp?product=pdrv&amp;amp;platform=win Adobe's]. All of the software mentioned is free.&lt;/div&gt;</summary>
		<author><name>Kianoosh</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=Security_of_Scalable_Multimedia_Streams&amp;diff=2372</id>
		<title>Security of Scalable Multimedia Streams</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=Security_of_Scalable_Multimedia_Streams&amp;diff=2372"/>
		<updated>2008-09-30T20:26:54Z</updated>

		<summary type="html">&lt;p&gt;Kianoosh: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The demand for multimedia services has been rapidly increasing over the past few years. More and more users rely on multimedia services for many aspects of their daily lives, including work, education, and entertainment. This makes securing the delivery of multimedia content highly important. We therefore focus on providing source authentication and data integrity services for media streams, i.e., ensuring that the streams played by receivers are original and have not been tampered with by malicious attackers. Our special focus is on scalable video streams, which are becoming very popular given recent advances in scalable coding and the increasing heterogeneity of receiver devices.&lt;br /&gt;
&lt;br /&gt;
A number of major challenges arise when considering authentication of scalable video streams. First, digital signature operations, the foundation of any authentication process, are too computationally expensive to perform frequently in real time, especially on limited-capability devices such as cell phones and PDAs. Second, the authentication scheme must support the flexibility of scalable video: the video is encoded and signed once, yet many valid substreams can be extracted from one bitstream, and each of them needs to be verifiable. Third, packet losses occur frequently in transmission networks, especially in wireless channels. Techniques such as Forward Error Correction (FEC) and interleaved packetization counteract the impact of loss, but this impact is even more pronounced for authenticated video: because the authentication mechanism may introduce dependencies among video packets, it can amplify the effective loss ratio of the network, since a video packet and its authentication information must both be successfully received or the video packet is unusable even though it was not itself lost. The authentication scheme should add zero or negligible amplification of this kind.&lt;br /&gt;
&lt;br /&gt;
We are investigating the above three challenges and several other subtle issues to authenticate scalable video streams in a computationally efficient manner, with low communication overhead and high resilience to packet losses, and without limiting the flexibility of the stream. We have also performed systematic tampering attacks on scalable videos to justify our approaches. Our main focus is on the recent and widely favored scalable video coding standard H.264/SVC, which was finalized in 2007 and has shown promising improvements over previous scalable coding techniques.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== People ==&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.sfu.ca/~mhefeeda/ Mohamed Hefeeda] (Assistant Professor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.sfu.ca/~kma26/personal/ Kianoosh Mokhtarian] (MSc Student)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&lt;br /&gt;
* Sample code and tampering results will be released soon.&lt;/div&gt;</summary>
		<author><name>Kianoosh</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=Security_of_Scalable_Multimedia_Streams&amp;diff=2314</id>
		<title>Security of Scalable Multimedia Streams</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=Security_of_Scalable_Multimedia_Streams&amp;diff=2314"/>
		<updated>2008-08-27T19:28:36Z</updated>

		<summary type="html">&lt;p&gt;Kianoosh: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The demand for multimedia services has been rapidly increasing over the past few years. More and more users rely on multimedia services for many aspects of their daily lives, including work, education, and entertainment. This makes securing the delivery of multimedia content highly important. We therefore focus on providing source authentication and data integrity services for media streams, i.e., ensuring that the streams played by receivers are original and have not been tampered with by malicious attackers. Our special focus is on scalable video streams, which are becoming very popular given recent advances in scalable coding and the increasing heterogeneity of receiver devices.&lt;br /&gt;
&lt;br /&gt;
The typical approach to message authentication is the use of digital signatures. Accordingly, a naive solution for authenticating a stream would be to sign every packet. This clearly does not work in practice, because the computational cost is too high, especially for receiver devices with limited capabilities. Beyond the real-time nature of the streams, a major challenge for authentication of scalable video streams is their flexibility: the video is encoded and signed once, yet many valid substreams can be extracted from one bitstream, and each of them needs to be authenticated. A third important issue is tolerating the losses that frequently occur in transmission, especially over wireless channels. Techniques such as Forward Error Correction (FEC) and interleaved packetization counteract the impact of loss, but this impact is even more pronounced for authenticated video because of the dependencies the authentication scheme imposes on video packets, i.e., a video packet and its authentication information must both be successfully received or the video packet is unusable.&lt;br /&gt;
&lt;br /&gt;
We are investigating the above three challenges and several other subtle issues to authenticate scalable video streams in an efficient manner, with low communication overhead, and without limiting the flexibility of the stream. We have also performed systematic tampering attacks on scalable videos to justify our approaches. Our main focus is on the recent and widely favored scalable video coding standard H.264/SVC.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== People ==&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.sfu.ca/~mhefeeda/ Mohamed Hefeeda] (Assistant Professor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.sfu.ca/~kma26/personal/ Kianoosh Mokhtarian] (MSc Student)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&lt;br /&gt;
* Sample code and tampering results will be released soon.&lt;/div&gt;</summary>
		<author><name>Kianoosh</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=Security_of_Scalable_Multimedia_Streams&amp;diff=2313</id>
		<title>Security of Scalable Multimedia Streams</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=Security_of_Scalable_Multimedia_Streams&amp;diff=2313"/>
		<updated>2008-08-27T19:25:45Z</updated>

		<summary type="html">&lt;p&gt;Kianoosh: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The demand for multimedia services has been rapidly increasing over the past few years. More and more users rely on multimedia services for many aspects of their daily lives, including work, education, and entertainment. This makes securing the delivery of multimedia content highly important. We therefore focus on providing source authentication and data integrity services for media streams, i.e., ensuring that the streams played by receivers are original and have not been tampered with by malicious attackers. Our special focus is on scalable video streams, which are becoming very popular given recent advances in scalable coding and the increasing heterogeneity of receiver devices.&lt;br /&gt;
&lt;br /&gt;
The typical approach to message authentication is the use of digital signatures. Accordingly, a naive solution for authenticating a stream would be to sign every packet. This clearly does not work in practice, because the computational cost is too high, especially for receiver devices with limited capabilities. Beyond the real-time nature of the streams, a major challenge for authentication of scalable video streams is their flexibility: the video is encoded and signed once, yet many valid substreams can be extracted from one bitstream, and each of them needs to be authenticated. A third important issue is tolerating the losses that frequently occur in transmission, especially over wireless channels. Techniques such as Forward Error Correction (FEC) and interleaved packetization counteract the impact of loss, but this impact is even more pronounced for authenticated video because of the dependencies the authentication scheme imposes on video packets, i.e., a video packet and its authentication information must both be successfully received or the video packet is unusable.&lt;br /&gt;
&lt;br /&gt;
We are investigating the above three challenges and several other subtle issues to authenticate scalable video streams in an efficient manner, with low communication overhead, and without limiting the flexibility of the stream. We have also performed systematic tampering attacks on scalable videos to justify our approaches. Our main focus is on the recent and widely favored scalable video coding standard H.264/SVC.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== People ==&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.sfu.ca/~mhefeeda/ Mohamed Hefeeda] (Assistant Professor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.sfu.ca/~kma26/personal/ Kianoosh Mokhtarian] (MSc Student)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Code ==&lt;br /&gt;
&lt;br /&gt;
* Will be released soon.&lt;/div&gt;</summary>
		<author><name>Kianoosh</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=Security_of_Scalable_Multimedia_Streams&amp;diff=2312</id>
		<title>Security of Scalable Multimedia Streams</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=Security_of_Scalable_Multimedia_Streams&amp;diff=2312"/>
		<updated>2008-08-27T19:23:24Z</updated>

		<summary type="html">&lt;p&gt;Kianoosh: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The demand for multimedia services has been rapidly increasing over the past few years. More and more users rely on multimedia services for many aspects of their daily lives, including work, education, and entertainment. This makes securing the delivery of multimedia content highly important. We therefore focus on providing source authentication and data integrity services for media streams, i.e., ensuring that the streams played by receivers are original and have not been tampered with by malicious attackers. Our special focus is on scalable video streams, which are becoming very popular given recent advances in scalable coding and the increasing heterogeneity of receiver devices.&lt;br /&gt;
&lt;br /&gt;
The typical approach to message authentication is the use of digital signatures. Accordingly, a naive solution for authenticating a stream would be to sign every packet. This clearly does not work in practice, because the computational cost is too high, especially for receiver devices with limited capabilities. Beyond the real-time nature of the streams, a major challenge for authentication of scalable video streams is their flexibility: the video is encoded and signed once, yet many valid substreams can be extracted from one bitstream, and each of them needs to be authenticated. A third important issue is tolerating the losses that frequently occur in transmission, especially over wireless channels. Techniques such as Forward Error Correction (FEC) and interleaved packetization counteract the impact of loss, but this impact is even more pronounced for authenticated video because of the dependencies the authentication scheme imposes on video packets, i.e., a video packet and its authentication information must both be successfully received or the video packet is unusable.&lt;br /&gt;
&lt;br /&gt;
We are investigating the above three challenges and several other subtle issues to authenticate scalable video streams in an efficient manner, with low communication overhead, and without limiting the flexibility of the stream. We have also performed systematic tampering attacks on scalable videos to justify our approaches. Our main focus is on the recent and widely favored scalable video coding standard H.264/SVC.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== People ==&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.sfu.ca/~mhefeeda/ Mohamed Hefeeda] (Assistant Professor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.sfu.ca/~kma26/personal/ Kianoosh Mokhtarian] (MSc Student)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Code ==&lt;br /&gt;
&lt;br /&gt;
Will be released soon.&lt;/div&gt;</summary>
		<author><name>Kianoosh</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=Security_of_Scalable_Multimedia_Streams&amp;diff=2311</id>
		<title>Security of Scalable Multimedia Streams</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=Security_of_Scalable_Multimedia_Streams&amp;diff=2311"/>
		<updated>2008-08-27T19:20:40Z</updated>

		<summary type="html">&lt;p&gt;Kianoosh: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The demand for multimedia services has been rapidly increasing over the past few years. More and more users rely on multimedia services for many aspects of their daily lives, including work, education, and entertainment. This makes the security of delivering multimedia content of great importance. Therefore, we focus on providing source authentication and data integrity services for media streams, i.e., ensuring that streams played by receivers are original and have not been tampered with by malicious attackers. Our special focus is on scalable video streams, which are becoming very popular owing to recent advances in scalable coding and the increasing heterogeneity of receiver devices.&lt;br /&gt;
&lt;br /&gt;
The typical approach for authenticating messages is the use of digital signatures. Accordingly, a naive solution for authenticating a stream would be to sign every packet. This clearly does not work in practice because of its high computational cost, which is especially unaffordable for receiver devices with limited capabilities. Beyond the real-time nature of the streams, the second major challenge in authenticating scalable video streams is their flexibility: the video is encoded and signed once, yet many valid substreams can be extracted from one bitstream, each of which must be authenticated. The third important issue is tolerating the losses that frequently occur in transmission, especially over wireless channels. The impact of loss in video transmission is usually counteracted by techniques such as Forward Error Correction (FEC) and interleaved packetization. This impact is amplified for &amp;quot;authenticated&amp;quot; video because of the dependencies the authentication scheme imposes on video packets, i.e., a video packet and its authentication information must both be successfully received, or the video packet is unusable.&lt;br /&gt;
&lt;br /&gt;
We are investigating the above three challenges, along with several other subtle issues, to authenticate scalable video streams efficiently, with low communication overhead, and without limiting the flexibility of the stream. We have also performed systematic tampering experiments on scalable videos to validate our approaches. Our main focus is on the recent and widely adopted scalable video standard H.264/SVC.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== People ==&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.sfu.ca/~mhefeeda/ Mohamed Hefeeda] (Assistant Professor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.sfu.ca/~kma26/personal/ Kianoosh Mokhtarian] (MSc Student)&lt;/div&gt;</summary>
		<author><name>Kianoosh</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=Modeling_and_Caching_of_P2P_Traffic&amp;diff=2203</id>
		<title>Modeling and Caching of P2P Traffic</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=Modeling_and_Caching_of_P2P_Traffic&amp;diff=2203"/>
		<updated>2008-07-07T20:22:22Z</updated>

		<summary type="html">&lt;p&gt;Kianoosh: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Peer-to-peer (P2P) file sharing systems generate a major portion of the Internet traffic, and this portion is expected to increase in the future. The sheer volume and expected high growth of P2P traffic have negative consequences, including: (i) significantly increased load on the Internet backbone, and hence higher chances of congestion; and (ii) increased costs for Internet Service Providers (ISPs), and hence higher service charges for all Internet users. &lt;br /&gt;
&lt;br /&gt;
A potential solution for alleviating those negative impacts is to cache a fraction of the P2P traffic such that future requests for the same objects can be served from a cache in the requester's autonomous system (AS). Caching in the Internet has mainly been considered for web and video streaming traffic, with little attention paid to P2P traffic. Many caching algorithms for web traffic and for video streaming systems have been proposed and analyzed. Directly applying such algorithms to cache P2P traffic may not yield the best cache performance, because of the different traffic characteristics and caching objectives. For instance, reducing user-perceived access latency is a key objective for&lt;br /&gt;
web caches. Consequently, web caching algorithms often incorporate information about the cost (latency) of a cache miss when deciding which object to cache/evict. Although latency is important to P2P users, the goal of a P2P cache is often focused on the ISP's primary concern; namely, the amount of bandwidth consumed by large P2P transfers. Consequently, the byte hit rate, i.e., the ratio of the number of bytes served from the cache to the total number of transferred bytes, is more important than&lt;br /&gt;
latency. &lt;br /&gt;
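As a toy illustration of why byte hit rate, rather than request hit rate, is the operative metric for P2P caches, the following sketch computes both over a hypothetical request trace (the trace values and helper names are illustrative, not measurements):&lt;br /&gt;

```python
# Hypothetical trace: (bytes_requested, bytes_served_from_cache) per request.
trace = [
    (700_000_000, 0),            # large object, full miss
    (700_000_000, 630_000_000),  # large object, mostly served from cache
    (2_000_000, 2_000_000),      # small object, full hit
    (2_000_000, 0),              # small object, full miss
]

def request_hit_rate(trace):
    # A request counts as a hit only if served entirely from the cache.
    hits = sum(1 for req, cached in trace if cached == req and req > 0)
    return hits / len(trace)

def byte_hit_rate(trace):
    # Ratio of bytes served from the cache to total transferred bytes.
    total = sum(req for req, _ in trace)
    cached = sum(c for _, c in trace)
    return cached / total
```

On this trace only one of four requests is a full hit (request hit rate 0.25), yet about 45% of all bytes are served locally, which is what matters for the ISP's WAN links.&lt;br /&gt;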
&lt;br /&gt;
We are developing caching algorithms that capitalize on the P2P traffic characteristics. We are also exploring the potential of cooperative caching of P2P traffic, where multiple caches deployed in different ASes (which could have a peering relationship) or within a large AS (e.g., a Tier-1 ISP) cooperate to serve traffic from each other's clients. Cooperation reduces the load on expensive inter-ISP links. Furthermore, we are implementing all of our algorithms and ideas in a prototype caching system.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== People ==&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.sfu.ca/~mhefeeda/ Mohamed Hefeeda] &lt;br /&gt;
&lt;br /&gt;
* Cheng-Hsin Hsu (PhD Student)&lt;br /&gt;
&lt;br /&gt;
* Kianoosh Mokhtarian (MSc Student)&lt;br /&gt;
&lt;br /&gt;
* Behrooz Noorizadeh (MSc Student, Graduated Fall 2007)&lt;br /&gt;
&lt;br /&gt;
* Osama Saleh (MSc Student, Graduated Fall 2006)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Publications == &lt;br /&gt;
&lt;br /&gt;
* M. Hefeeda and O. Saleh, Traffic Modeling and Proportional Partial Caching for Peer-to-Peer Systems, IEEE/ACM Transactions on Networking, Accepted October 2007.&lt;br /&gt;
&lt;br /&gt;
* M. Hefeeda and B. Noorizadeh, Cooperative Caching: The Case for P2P Traffic, In Proc. of IEEE Conference on Local Computer Networks (LCN'08), Montreal, Canada, October 2008.&lt;br /&gt;
&lt;br /&gt;
* O. Saleh and M. Hefeeda, [http://www.cs.sfu.ca/~mhefeeda/Papers/icnp06.pdf Modeling and Caching of Peer-to-Peer Traffic] , In Proc. of IEEE International Conference on Network Protocols (ICNP'06), pp. 249--258, Santa Barbara, CA, November 2006.   (Acceptance: 14%)  &lt;br /&gt;
&lt;br /&gt;
== pCache Software ==&lt;br /&gt;
&lt;br /&gt;
=== Overview ===&lt;br /&gt;
We have designed and implemented a proxy cache for P2P traffic, which we call&lt;br /&gt;
pCache. pCache is intended for autonomous systems (ASes) or ISPs that&lt;br /&gt;
are interested in reducing the burden of P2P traffic, and it would be deployed&lt;br /&gt;
at or near the gateway router of an AS.  At a high level, a client&lt;br /&gt;
participating in a particular P2P network issues a request to download an&lt;br /&gt;
object. This request is intercepted by pCache. If the requested object or parts&lt;br /&gt;
of it are stored in the cache, they are served to the requesting client. This&lt;br /&gt;
saves bandwidth on the external (expensive) links to the Internet.  If a part&lt;br /&gt;
of the requested object is not found in the cache, the request is forwarded to&lt;br /&gt;
the P2P network. When the response comes back, pCache may store a copy of the&lt;br /&gt;
object for future requests from other clients in its AS. Clients inside the AS&lt;br /&gt;
as well as external clients are not aware of pCache, i.e., pCache is&lt;br /&gt;
fully-transparent in both directions.&lt;br /&gt;
&lt;br /&gt;
Our C++ implementation of pCache has more than 11,000 lines of code. We have&lt;br /&gt;
rigorously validated and evaluated the performance of pCache as well as its&lt;br /&gt;
impacts on ISPs and clients.  Our experimental results show that pCache&lt;br /&gt;
benefits both the clients and the ISP in which the cache is deployed, without&lt;br /&gt;
hurting the performance of the P2P networks.  Specifically, clients behind the&lt;br /&gt;
cache achieve much higher download speeds than other clients running in the&lt;br /&gt;
same conditions without the cache.  In addition, a significant portion of the&lt;br /&gt;
traffic is served from the cache, which reduces the load on the expensive WAN&lt;br /&gt;
links for the ISP.  Our results also show that the cache does not reduce the&lt;br /&gt;
connectivity of clients behind it, nor does it reduce their upload speeds.&lt;br /&gt;
This is important for the whole P2P network, because reduced connectivity could&lt;br /&gt;
lead to decreased availability of peers and the content stored on them. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Detailed Design === &lt;br /&gt;
&lt;br /&gt;
==== Transparent Proxy and P2P Traffic Identifier ====&lt;br /&gt;
These two components reside on the gateway router. They transparently inspect&lt;br /&gt;
traffic going through the router and forward only P2P connections to pCache.&lt;br /&gt;
Traffic that does not belong to any P2P system is processed by the router in&lt;br /&gt;
the regular way and is not affected by the presence of pCache. This is done&lt;br /&gt;
using the [http://www.netfilter.org/ Netfilter] framework for custom packet handling. &lt;br /&gt;
&lt;br /&gt;
[http://www.netfilter.org/ Netfilter] defines hook points at various packet processing stages, such as&lt;br /&gt;
PREROUTING, LOCAL_INPUT, LOCAL_OUT, FORWARD, and POSTROUTING. [http://www.netfilter.org/ Netfilter] allows&lt;br /&gt;
us to register callback functions at any of these hook points to be invoked&lt;br /&gt;
when packets reach those hook points. [http://www.netfilter.org/ Netfilter] is commonly used with [http://www.netfilter.org/projects/iptables/index.html iptables],&lt;br /&gt;
which provides an interface to define rulesets to be applied on packets. Each&lt;br /&gt;
ruleset has a number of classifiers (fields to be matched) and an action. To&lt;br /&gt;
support transparent web proxy, a callback function is registered at the&lt;br /&gt;
PREROUTING hook point to intercept packets with the destination port number set&lt;br /&gt;
to 80 on TCP. Once intercepted, the destination IP address and port number of&lt;br /&gt;
each packet will be changed to those of the process running the proxy cache.&lt;br /&gt;
Thus, HTTP packets will be redirected to the web proxy cache for further&lt;br /&gt;
processing. Although the destination IP address is lost during this&lt;br /&gt;
redirection, the web proxy cache can know the address of the web server because&lt;br /&gt;
HTTP 1.1 requests include the server location in the header. This simple&lt;br /&gt;
redirection, however, does not work for proxy caches for P2P traffic, because&lt;br /&gt;
the address of the remote peer is not included in the request messages, and the&lt;br /&gt;
proxy server cannot find the remote peer. Hence, packets need to be redirected&lt;br /&gt;
to the proxy process without changing their destination IP and port numbers.&lt;br /&gt;
We note that [http://www.netfilter.org/ Netfilter] supports very flexible and complex forwarding&lt;br /&gt;
rulesets. This allows us to run the gateway router and pCache as two processes&lt;br /&gt;
on the same machine, or to run them on two separate machines.&lt;br /&gt;
&lt;br /&gt;
We implement our transparent proxy based on the [http://www.balabit.com/support/community/products/tproxy/ tproxy] project. In our&lt;br /&gt;
implementation, the proxy process creates a listening socket. A callback&lt;br /&gt;
function is registered at the PREROUTING hook point to intercept packets that&lt;br /&gt;
might be of interest to the proxy process. This function sets a pointer to the&lt;br /&gt;
listening socket in the structure containing the packet itself. It also sets a&lt;br /&gt;
flag in the packet. The route lookup procedure is modified to check the flag&lt;br /&gt;
bit. If it is set, the packet is sent to the local IP stack, even though its&lt;br /&gt;
destination is an external IP address. Using the pointer in the packet&lt;br /&gt;
structure, the packet is then redirected to the listening socket of the proxy.&lt;br /&gt;
A new (connected) socket is created between the proxy and the internal host.&lt;br /&gt;
This new socket uses the IP address and port number of the external host, not&lt;br /&gt;
of the proxy process. Another socket is created between the proxy and the&lt;br /&gt;
external host; it uses the IP address and port number of the internal host. Two&lt;br /&gt;
new entries are added to the socket table for these two sockets. Traffic&lt;br /&gt;
packets passing through the gateway router are checked at the PREROUTING hook&lt;br /&gt;
point to see whether they match any of these sockets. &lt;br /&gt;
&lt;br /&gt;
The P2P Traffic Identifier determines whether a connection belongs to any P2P&lt;br /&gt;
system known to the cache. This is done by comparing a number of bytes from the&lt;br /&gt;
connection stream against known P2P application signatures. We have implemented&lt;br /&gt;
identifiers for BitTorrent and Gnutella, which are the most common P2P systems&lt;br /&gt;
nowadays. We can readily support traffic identification for other protocols by&lt;br /&gt;
adding new identifiers to the cache. &lt;br /&gt;
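The signature comparison the identifier performs can be sketched as follows (a simplified Python sketch; the actual pCache identifiers are part of its C++ code base, and real identification also has to handle encrypted or fragmented handshakes). The BitTorrent signature is the handshake's length-prefixed protocol string, and the Gnutella signature is its connection handshake line:&lt;br /&gt;

```python
# Sketch of lightweight P2P protocol identification by inspecting the first
# bytes of a connection stream against known application signatures.
SIGNATURES = {
    # BitTorrent handshake: length byte 19 followed by the protocol string.
    "bittorrent": b"\x13BitTorrent protocol",
    # Gnutella connection handshake line.
    "gnutella": b"GNUTELLA CONNECT/",
}

def identify(stream_prefix):
    """Return the matching P2P protocol name, or None for non-P2P traffic."""
    for name, sig in SIGNATURES.items():
        if stream_prefix.startswith(sig):
            return name
    return None
```

A connection that matches no signature after a few packets is treated as non-P2P and, as described below, spliced so that the proxy process no longer touches it.&lt;br /&gt;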
&lt;br /&gt;
Since P2P systems use dynamic ports, the proxy process may initially intercept&lt;br /&gt;
some connections that do not belong to P2P systems. This can only be discovered&lt;br /&gt;
after inspecting a few packets using the P2P Traffic Identification module.&lt;br /&gt;
Each intercepted connection is split into a pair of connections, and all&lt;br /&gt;
packets have to go through the proxy process. This imposes overhead on the&lt;br /&gt;
proxy cache and may increase the end-to-end delay of the connections. To reduce&lt;br /&gt;
this overhead, we splice each pair of non-P2P connections using TCP splicing&lt;br /&gt;
techniques, which are usually used in layer-7 switching. We modify our&lt;br /&gt;
[http://www.netfilter.org/ Netfilter] callback function to support TCP splicing as well. Our implementation&lt;br /&gt;
is similar to an inactive layer-7 switching project, called [http://www.linux-l7sw.org/ l7switch]. For&lt;br /&gt;
spliced connections, the sockets in the proxy process are closed and packets&lt;br /&gt;
are relayed in the kernel stack instead of passing them up to the proxy process&lt;br /&gt;
in the application layer. Implementation details such as adjusting sequence&lt;br /&gt;
numbers in the spliced TCP connections had to be addressed, because these two&lt;br /&gt;
TCP connections start from different initial sequence numbers.&lt;br /&gt;
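The sequence-number adjustment mentioned above amounts to translating between the two connections' sequence spaces, modulo 2^32. A minimal arithmetic sketch (function names are illustrative; the real fixup happens in our kernel-level splicing code):&lt;br /&gt;

```python
# Sketch of the sequence-number fixup needed when splicing two TCP
# connections that started from different initial sequence numbers (ISNs).
# All arithmetic is modulo 2**32, since TCP sequence space wraps.
MOD = 2 ** 32

def splice_delta(isn_a, isn_b):
    """Offset to add to sequence numbers relayed from connection A to B."""
    return (isn_b - isn_a) % MOD

def rewrite_segment(seq, ack, delta_fwd, delta_rev):
    """Translate one relayed segment: its SEQ moves by the forward delta,
    and its ACK (which refers to the opposite direction) by the reverse one."""
    return ((seq + delta_fwd) % MOD, (ack + delta_rev) % MOD)
```

Because only these constant offsets differ between the two connections, relaying can stay entirely inside the kernel once the deltas are recorded at splice time.&lt;br /&gt;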
&lt;br /&gt;
&lt;br /&gt;
==== Connection Manager ====&lt;br /&gt;
When a connection is identified as belonging to a P2P system, it is passed to&lt;br /&gt;
the Connection Manager, which coordinates different components of pCache to&lt;br /&gt;
store and serve requests from this connection. For example, once seeing a&lt;br /&gt;
request message, the Connection Manager calls a lookup function in the Storage&lt;br /&gt;
System Manager to determine whether this request can be fulfilled with&lt;br /&gt;
previously cached data either in memory or on disk. In addition, if only parts&lt;br /&gt;
of the requested data are available in the cache, the Connection Manager sends&lt;br /&gt;
a message to the actual external peer to request the missing portion of data.&lt;br /&gt;
It then assembles this data with cached data in a protocol-specific message and&lt;br /&gt;
sends it to the client. &lt;br /&gt;
&lt;br /&gt;
Since each peer may open many connections to request a single file and pCache&lt;br /&gt;
is supposed to serve a large number of peers, efficient support of concurrent&lt;br /&gt;
connections is important. A simple solution for concurrency is to use&lt;br /&gt;
multithreading, where a thread is created to handle each new connection. This&lt;br /&gt;
is simple because the states of connections are isolated from each other and&lt;br /&gt;
processed by identical threads.  The downside of this solution is increased&lt;br /&gt;
overhead in terms of creation/deletion, scheduling, and context switching of&lt;br /&gt;
threads. Some of these overheads can be significantly reduced using user-level&lt;br /&gt;
thread libraries such as Capriccio, which is reported to scale to hundreds of&lt;br /&gt;
thousands of threads. Our current implementation of pCache uses multithreading.&lt;br /&gt;
&lt;br /&gt;
More sophisticated solutions to support efficient concurrency that employ&lt;br /&gt;
non-blocking (asynchronous) I/O operations can also be used with pCache. For&lt;br /&gt;
example, the single-process event-driven model uses only one thread to detect&lt;br /&gt;
events on multiple sockets using event-notification mechanisms such as epoll&lt;br /&gt;
and select. These events are then scheduled for an event handler to process&lt;br /&gt;
them. Unlike socket operations, asynchronous disk operations are either poorly&lt;br /&gt;
supported or nonexistent on most UNIX systems. To mitigate this problem,&lt;br /&gt;
multi-process event-driven models have been proposed, which can be roughly&lt;br /&gt;
categorized into asymmetric and symmetric models. The asymmetric models create&lt;br /&gt;
a single process to handle events from the network sockets, and multiple&lt;br /&gt;
processes to handle disk operations. In contrast, the symmetric models create&lt;br /&gt;
multiple event schedulers that can handle both disk and network events. The&lt;br /&gt;
above concurrency models have been proposed and used mostly for web servers.&lt;br /&gt;
Unlike our pCache, web servers may not need to provide full transparency and&lt;br /&gt;
connection splicing, which could impact these concurrency models. We are&lt;br /&gt;
currently designing and implementing new concurrency models that are more&lt;br /&gt;
suitable for P2P proxy caching systems.&lt;br /&gt;
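The single-process event-driven model discussed above can be sketched in a few lines: one selector multiplexes readiness events over many sockets, and a registered handler is dispatched per event. This toy (using a socketpair in place of peer connections, with an uppercasing "handler" standing in for real request processing) is only an illustration of the model, not pCache code:&lt;br /&gt;

```python
import selectors
import socket

# Minimal single-process event-driven loop: one selector, many sockets,
# handlers dispatched as readiness events arrive.
def run_once():
    a, b = socket.socketpair()
    sel = selectors.DefaultSelector()

    def echo(sock):
        data = sock.recv(4096)
        if data:
            sock.sendall(data.upper())     # toy stand-in for request handling

    sel.register(b, selectors.EVENT_READ, echo)
    a.sendall(b"hello")
    for key, _ in sel.select(timeout=1):
        key.data(key.fileobj)              # dispatch to the registered handler
    reply = a.recv(4096)
    sel.close(); a.close(); b.close()
    return reply
```

The asymmetric and symmetric multi-process variants differ only in how many such loops run and whether disk events share the loop with network events.&lt;br /&gt;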
&lt;br /&gt;
&lt;br /&gt;
==== Storage System Manager ====&lt;br /&gt;
&lt;br /&gt;
We propose a new storage management system optimized for P2P traffic.  The&lt;br /&gt;
proposed storage system contains three modules: in-memory structures, block&lt;br /&gt;
allocation method, and replacement policy. The in-memory structures contain&lt;br /&gt;
metadata to support storing and serving byte ranges of objects, and memory&lt;br /&gt;
buffers to reduce disk I/O operations. The block allocation method organizes&lt;br /&gt;
the layout of data on the disk. The replacement policy decides which segments&lt;br /&gt;
of objects to evict from the cache in order to make room for a newly requested&lt;br /&gt;
segment.&lt;br /&gt;
&lt;br /&gt;
Two structures are maintained in memory: metadata and page buffers. The&lt;br /&gt;
metadata is a two-level lookup table designed to enable efficient segment&lt;br /&gt;
lookups. The first level is a hash table keyed on object IDs; collisions are&lt;br /&gt;
resolved using common chaining techniques. Every entry points to the second&lt;br /&gt;
level of the table, which is a set of cached segments belonging to the same&lt;br /&gt;
object. Every segment entry consists of a few fields, including Offset, the&lt;br /&gt;
absolute segment location within the object, and RefCnt, the number of&lt;br /&gt;
connections currently using this segment.  RefCnt is used to prevent&lt;br /&gt;
evicting a buffer page if there are connections currently using it. The set of&lt;br /&gt;
cached segments is implemented as a balanced (red-black) binary tree,&lt;br /&gt;
sorted on the Offset field. Segments inserted into the cached segment set&lt;br /&gt;
are adjusted to be mutually disjoint. This ensures that the same data is never&lt;br /&gt;
stored more than once in the cache. Using this structure, partial hits can be&lt;br /&gt;
found in at most &amp;lt;math&amp;gt;O(log S)&amp;lt;/math&amp;gt; steps, where &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is the&lt;br /&gt;
number of segments in the object. This is done by searching on the offset&lt;br /&gt;
field. Segment insertions and deletions are done in logarithmic steps. Notice&lt;br /&gt;
that segments stored in the set are not necessarily contiguous.  &lt;br /&gt;
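The two-level lookup just described can be sketched compactly. In this Python sketch (illustrative only; pCache itself is C++), a dict keyed on object ID replaces the hash table, and binary search over an offset-sorted list of disjoint segments stands in for the red-black tree, giving the same O(log S) partial-hit search on the Offset field:&lt;br /&gt;

```python
import bisect

# Two-level lookup: object ID maps to an offset-sorted list of disjoint
# (offset, length) segments. bisect gives O(log S) search on Offset.
class SegmentIndex:
    def __init__(self):
        self.objects = {}   # object ID maps to sorted segment list

    def insert(self, obj_id, offset, length):
        segs = self.objects.setdefault(obj_id, [])
        i = bisect.bisect(segs, (offset, 0))
        segs.insert(i, (offset, length))   # segments assumed disjoint

    def partial_hit(self, obj_id, offset):
        """Return the cached segment covering `offset`, or None."""
        segs = self.objects.get(obj_id, [])
        i = bisect.bisect(segs, (offset, float("inf"))) - 1
        if i >= 0:
            start, length = segs[i]
            if offset >= start and start + length > offset:
                return (start, length)
        return None
```

A lookup finds the last segment starting at or before the requested offset and then checks whether that segment actually covers it; insertions and deletions are likewise logarithmic.&lt;br /&gt;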
&lt;br /&gt;
The second part of the in-memory structures is the page buffers. Page buffers&lt;br /&gt;
are used to reduce disk I/O operations as well as to perform segment merging.&lt;br /&gt;
As shown in Fig. 3, we define multiple sizes of page buffers.  We pre-allocate&lt;br /&gt;
these pages in memory to avoid processing overhead caused by memory allocation&lt;br /&gt;
and deallocation. We maintain unoccupied pages of the same size in the same&lt;br /&gt;
free-page list. If peers request segments that are in the buffers, they are&lt;br /&gt;
served from memory and no disk I/O operations are issued. If the requested&lt;br /&gt;
segments are on the disk, they need to be swapped in some free memory buffers.&lt;br /&gt;
When all free buffers are used up, the least popular data in some of the&lt;br /&gt;
buffers are swapped out to the disk if this data has been modified since it was&lt;br /&gt;
brought in memory, and it is overwritten otherwise.&lt;br /&gt;
&lt;br /&gt;
Another benefit of the memory pages is anti-interleaving. Since the&lt;br /&gt;
cache is expected to receive many simultaneous requests from different clients,&lt;br /&gt;
segments of different objects will be multiplexed.  That is,&lt;br /&gt;
segments of object &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; could be interleaved with segments of object&lt;br /&gt;
&amp;lt;math&amp;gt;y&amp;lt;/math&amp;gt;.  Therefore, an anti-interleaving scheme is needed before&lt;br /&gt;
segments are swapped out to the disk. We propose to merge neighboring segments&lt;br /&gt;
together whenever there is no gap between them, which serves as our&lt;br /&gt;
anti-interleaving scheme.  Segment merging reduces the number of entries in the&lt;br /&gt;
lookup table and accelerates searching for partial hits. In addition, the&lt;br /&gt;
merging process creates larger segments, which reduces the number of disk&lt;br /&gt;
read/write operations. This is because the requested data will be read/written&lt;br /&gt;
in larger chunks. Furthermore, segments are stored on contiguous disk blocks,&lt;br /&gt;
which reduces the number of head movements and increases disk throughput.&lt;br /&gt;
Segment merging is implemented in two steps. First, we combine the memory&lt;br /&gt;
buffers belonging to the two adjacent segments. Then, if the disk blocks of the&lt;br /&gt;
old two segments are not contiguous, they are returned to the free block set,&lt;br /&gt;
and the memory buffer containing the new (large) segment is marked as modified.&lt;br /&gt;
Modified buffers are written to the disk when they are chosen to be swapped out&lt;br /&gt;
of the memory. If the disk blocks are contiguous, the buffer is marked as&lt;br /&gt;
unmodified.&lt;br /&gt;
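The two-step merge above can be summarized in a short sketch (field names and the byte-addressed disk_start are illustrative simplifications, not the pCache data layout): adjacent segments with no gap are combined, and the result is marked modified, i.e., needing a rewrite to a contiguous disk run, only when the old segments' blocks were not already contiguous:&lt;br /&gt;

```python
# Sketch of segment merging for anti-interleaving: a precedes b.
def merge_adjacent(a, b):
    """a, b: dicts with offset, length, disk_start keys. Returns the merged
    segment, or None when a gap separates them (no merge possible)."""
    if a["offset"] + a["length"] != b["offset"]:
        return None                       # gap between segments: no merge
    merged = {
        "offset": a["offset"],
        "length": a["length"] + b["length"],
        "disk_start": a["disk_start"],
    }
    # If b's blocks already follow a's on disk, the merged buffer stays
    # clean; otherwise it must be written out to a contiguous run later.
    contiguous = a["disk_start"] + a["length"] == b["disk_start"]
    merged["modified"] = not contiguous
    return merged
```

Deferring the write until the buffer is swapped out keeps merging cheap while still converging the on-disk layout toward large contiguous segments.&lt;br /&gt;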
&lt;br /&gt;
Next, we describe the organization of disk blocks. We have two types of&lt;br /&gt;
blocks: super blocks and normal blocks. Super blocks are used for persistent&lt;br /&gt;
storage of the metadata to reconstruct the lookup table after system reboots.&lt;br /&gt;
Recall that proxy caches have relaxed data integrity requirements compared to&lt;br /&gt;
regular workstations, because cached objects can be retrieved from the P2P&lt;br /&gt;
networks again. Therefore, the metadata can be written to the disk only&lt;br /&gt;
occasionally. Disk blocks are allocated to data segments in a contiguous manner&lt;br /&gt;
to increase disk throughput. Unoccupied disk blocks are maintained in a&lt;br /&gt;
free-block set, which is implemented as a red-black tree sorted on the block&lt;br /&gt;
number. When a segment of data is to be swapped from memory buffers to the&lt;br /&gt;
disk, a simple first-fit scheme is used to find a contiguous number of disk&lt;br /&gt;
blocks to store this segment. If no contiguous blocks can satisfy the request,&lt;br /&gt;
blocks nearest to the largest number of contiguous free blocks are evicted from&lt;br /&gt;
the cache to make up for the deficit. No expensive disk de-fragmentation&lt;br /&gt;
process is needed.&lt;br /&gt;
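The first-fit search over the free-block set can be sketched as follows (a sorted Python list stands in for the red-black tree; the eviction fallback when no run is long enough is omitted):&lt;br /&gt;

```python
# First-fit allocation of `count` contiguous blocks from a free-block set
# kept sorted by block number.
def first_fit(free_blocks, count):
    """Return the first run of `count` contiguous block numbers, or None."""
    free_blocks = sorted(free_blocks)
    run_start, run_len = None, 0
    for blk in free_blocks:
        if run_len == 0 or blk != prev + 1:
            run_start, run_len = blk, 1    # start a new candidate run
        else:
            run_len += 1                   # extend the current run
        prev = blk
        if run_len == count:
            return list(range(run_start, run_start + count))
    return None
```

When this scan fails, blocks adjacent to the longest run of free blocks are evicted to complete it, which is why no separate de-fragmentation pass is ever required.&lt;br /&gt;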
&lt;br /&gt;
&lt;br /&gt;
==== P2P Traffic Processor ====&lt;br /&gt;
pCache needs to communicate with peers from different P2P systems. For each&lt;br /&gt;
supported P2P system, the P2P Traffic Processor provides three modules to&lt;br /&gt;
enable this communication: Parser, Composer, and Analyzer. The Parser performs&lt;br /&gt;
functions such as identifying control and payload messages, and extracting&lt;br /&gt;
messages that could be of interest to the cache such as object request&lt;br /&gt;
messages. The Composer constructs properly-formatted messages to be sent to&lt;br /&gt;
peers. The Analyzer is a place holder for any auxiliary functions that may need&lt;br /&gt;
to be performed on P2P traffic from different systems. For example, in&lt;br /&gt;
BitTorrent the Analyzer infers information (piece length) needed by pCache that&lt;br /&gt;
is not included in messages exchanged between peers. &lt;br /&gt;
&lt;br /&gt;
In more detail, to store and serve P2P traffic the cache performs several&lt;br /&gt;
functions beyond identifying the traffic. By&lt;br /&gt;
inspecting the byte stream of the connection, the Parser determines the&lt;br /&gt;
inspecting the byte stream of the connection, the Parser determines the&lt;br /&gt;
boundaries of messages exchanged between peers, and it extracts the request and&lt;br /&gt;
response messages that are of interest to the cache. The Parser returns the ID&lt;br /&gt;
of the object being downloaded in the session, as well as the requested byte&lt;br /&gt;
range (start and end bytes). The byte range is relative to the whole object.&lt;br /&gt;
The Composer prepares protocol-specific messages, and may combine data stored&lt;br /&gt;
in the cache with data obtained from the network into one message to be sent to&lt;br /&gt;
a peer.&lt;br /&gt;
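For one concrete case, a Parser for the standard BitTorrent request message (a 4-byte big-endian length prefix, message ID 6, then piece index, offset within the piece, and block length, 4 bytes each) can recover the byte range relative to the whole object, given the piece length the Analyzer infers. This is a simplified Python sketch, not the pCache C++ Parser:&lt;br /&gt;

```python
import struct

# Parse a BitTorrent "request" message into an object-relative byte range.
def parse_request(msg, piece_length):
    length, msg_id = struct.unpack("!IB", msg[:5])
    if msg_id != 6 or length != 13:
        return None                       # not a request message
    index, begin, blk_len = struct.unpack("!III", msg[5:17])
    start = index * piece_length + begin  # range relative to whole object
    return (start, start + blk_len - 1)   # inclusive byte range
```

The Connection Manager can then look this range up in the Storage System Manager exactly as described above, independent of which protocol produced the request.&lt;br /&gt;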
&lt;br /&gt;
=== Browse and Download Code === &lt;br /&gt;
&lt;br /&gt;
We are continuously improving our pCache implementation. The latest development branch can be browsed at [https://cs-svn.cs.surrey.sfu.ca/nsl/browser/p2pcache our subversion server]. &lt;br /&gt;
&lt;br /&gt;
pCache code is released in two parts: kernel source and application. The Linux&lt;br /&gt;
kernel includes all patches required to support the transparent proxy, which&lt;br /&gt;
simplifies setting up the required environment. This patched kernel contains code&lt;br /&gt;
from the [http://www.kernel.org/ mainstream Linux kernel], [http://www.netfilter.org/ netfilter], [http://www.balabit.com/support/community/products/tproxy/ tproxy], and [http://www.linux-l7sw.org/ layer7switch]. The pCache application implements the components described above. Moreover, a patched iptables is also provided that takes the additional arguments supported by [http://www.balabit.com/support/community/products/tproxy/ tproxy].&lt;br /&gt;
&lt;br /&gt;
* Linux Kernel [[media:linux-2.6.23.tgz]]&lt;br /&gt;
* pCache Snapshot [[media:pCache-0.0.1.tgz]]&lt;br /&gt;
* iptables (patched for additional arguments) [[media:iptables-1.4.0rc1.tgz]]&lt;br /&gt;
&lt;br /&gt;
To set up your pCache system, please follow these steps:&lt;br /&gt;
* Download the Linux kernel, then compile and install it. Note that this tar file also includes a sample .config file.&lt;br /&gt;
* Download the patched iptables, then compile and install it. (See the INSTALL file included in the tar file for installation instructions.)&lt;br /&gt;
* Download the pCache source code and compile it into a binary called pCache.&lt;br /&gt;
&lt;br /&gt;
To run pCache, first configure the forwarding table. For example, we use the following script to configure ours:&lt;br /&gt;
&lt;br /&gt;
 iptables -t mangle -N DIVERT&lt;br /&gt;
 # bypass low ports&lt;br /&gt;
 iptables -t mangle -A PREROUTING -p tcp --sport 1:1024 -j ACCEPT&lt;br /&gt;
 iptables -t mangle -A PREROUTING -p tcp --dport 1:1024 -j ACCEPT&lt;br /&gt;
 iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT&lt;br /&gt;
 iptables -t mangle -A PREROUTING -p tcp -j TPROXY --tproxy-mark 0x1/0x1 --on-port 7072&lt;br /&gt;
 iptables -t mangle -A DIVERT -j MARK --set-mark 1&lt;br /&gt;
 iptables -t mangle -A DIVERT -j ACCEPT&lt;br /&gt;
 ip rule add fwmark 1 lookup 100&lt;br /&gt;
 ip route add local 0.0.0.0/0 dev lo table 100&lt;br /&gt;
 # disable TCP features that are not supported by the TCP splicing module&lt;br /&gt;
 sysctl -w net.ipv4.tcp_sack=0&lt;br /&gt;
 sysctl -w net.ipv4.tcp_dsack=0&lt;br /&gt;
 sysctl -w net.ipv4.tcp_window_scaling=0&lt;br /&gt;
 #sysctl -w net.ipv4.tcp_tw_recycle=1&lt;br /&gt;
 #echo 8 &amp;gt; /proc/sys/kernel/printk&lt;br /&gt;
&lt;br /&gt;
Then, configure the conf.txt file under the pCache directory. See the comments in&lt;br /&gt;
conf.txt for the purpose of each setting. The most important ones are briefly&lt;br /&gt;
described below. Note that many settings are for experimental use and&lt;br /&gt;
are not useful in an actual deployment.&lt;br /&gt;
# BLOCK_SIZE and BLOCK_NUM determine the on-disk layout as well as the cache capacity. The resulting size should never exceed the actual disk size.&lt;br /&gt;
# ACCESOR_TYPE selects the disk storage scheme. The following schemes are supported: flat directory (1), single file on a file system (4), and single file on a raw disk (5). Other types are experimental only. &lt;br /&gt;
# ROOT_DIR or DEV_NAME defines the storage location. DEV_NAME is used for the raw-disk scheme, and ROOT_DIR for all others. Examples of DEV_NAME include /dev/hda2 and /dev/sda1. An example of ROOT_DIR is /mnt/pCache, which must be mounted first.&lt;br /&gt;
# SUBNET and NETMASK define the local subnet. pCache only inspects outgoing requests; incoming requests are always forwarded.&lt;br /&gt;
# MAX_FILLED_SIZE and MIN_FILLED_SIZE determine when the cache replacement routine starts and when it stops.&lt;br /&gt;
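For concreteness, a minimal configuration using those settings might look like the following. All values, and the key-value syntax itself, are hypothetical; consult the comments in the shipped conf.txt for the authoritative format and defaults.&lt;br /&gt;

```
# Disk layout and capacity (BLOCK_SIZE * BLOCK_NUM must fit on the disk)
BLOCK_SIZE 4096
BLOCK_NUM 2621440
# Storage scheme: 1 = flat directory, 4 = file on a file system, 5 = raw disk
ACCESOR_TYPE 4
ROOT_DIR /mnt/pCache
# Local subnet whose outgoing requests are inspected
SUBNET 192.168.1.0
NETMASK 255.255.255.0
# Replacement watermarks: start evicting at MAX, stop at MIN
MAX_FILLED_SIZE 0.9
MIN_FILLED_SIZE 0.7
```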
&lt;br /&gt;
After setting up the conf.txt file, run pCache from the command line. Then, use a&lt;br /&gt;
browser to monitor your pCache status by connecting to http://&amp;lt;your-ip&amp;gt;:8000. &lt;br /&gt;
Also note that a log.txt file will be generated.&lt;br /&gt;
&lt;br /&gt;
Enjoy. &lt;br /&gt;
&lt;br /&gt;
=== Future Enhancements  ===&lt;br /&gt;
&lt;br /&gt;
* Revisit the Gnutella traffic parser/composer. In particular, we need to properly handle cancel messages. To achieve this, the cache manager needs to allow partially received segments (i.e., changing the segment length of admitted data messages). &lt;br /&gt;
&lt;br /&gt;
* Define a stateful connection class and rewrite the connection manager into an event handler. Also use epoll to improve scalability.&lt;br /&gt;
&lt;br /&gt;
* Adopt a simpler segment matching algorithm: for every incoming request, either request the segment in its entirety or do not request it at all. The current partial-request code is over-complicated, especially when multiple P2P protocols are considered. &lt;br /&gt;
&lt;br /&gt;
* Implement traffic identifier as a [http://www.netfilter.org/ Netfilter] module and implement reverse TCP splicing.&lt;br /&gt;
&lt;br /&gt;
=== Feedback and Comments ===&lt;br /&gt;
&lt;br /&gt;
We welcome all comments and suggestions. You can enter your comments [http://www.sfu.ca/~cha16/feedback.html here].&lt;br /&gt;
&lt;br /&gt;
=== Related Caching Systems and Commercial Products ===&lt;br /&gt;
&lt;br /&gt;
* [[http://www.oversi.com/index.php?option=com_content&amp;amp;task=view&amp;amp;id=38&amp;amp;Itemid=114 OverCache P2P Caching and Delivery Platform]] Oversi's MSP platform realizes multi-service caching for P2P and other applications. In terms of P2P caching, MSP takes a quite different approach from pCache: an MSP device actively participates in P2P networks. That is, MSP acts as an ultra-peer that serves only peers within the deploying ISP. We believe this approach negatively impacts fairness in many P2P networks, such as BitTorrent, which employ algorithms to mitigate the free-rider problem. In fact, peers in ISPs with Oversi's MSP deployed have little incentive to upload anymore, because they expect to get the data free from the MSP platform. Once the number of free-riders increases, P2P network performance degrades, which in turn affects P2P users all over the world.&lt;br /&gt;
&lt;br /&gt;
* [[http://www.peerapp.com/products-ultraband.aspx PeerApp UltraBand Family]] Unlike OverCache, PeerApp's products support transparent caching of P2P traffic. The supported P2P protocols are BitTorrent, Gnutella, eDonkey, and FastTrack (the last two are no longer popular). However, like OverCache, PeerApp's products do not support cross-protocol caching; a file cached through a BitTorrent download will not be served to a Gnutella user requesting the same file (or vice versa). We have already provided basic support for cross-protocol caching, and this feature will be fully available in the next version of our prototype software.&lt;br /&gt;
&lt;br /&gt;
Moreover, our prototype software is open-source, while the above products are commercial and very expensive.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== P2P Traffic Traces == &lt;br /&gt;
&lt;br /&gt;
* If you are interested in the traces, please send us an email along with a brief description of your research and the university/organization you are affiliated with. A brief description of our traces can be found in this [http://nsl.cs.surrey.sfu.ca/projects/p2p/traces_readme.txt readme.txt] file.&lt;/div&gt;</summary>
		<author><name>Kianoosh</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=pCDN:Testplan&amp;diff=1574</id>
		<title>pCDN:Testplan</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=pCDN:Testplan&amp;diff=1574"/>
		<updated>2008-03-04T20:47:07Z</updated>

		<summary type="html">&lt;p&gt;Kianoosh: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Client ==&lt;br /&gt;
# Download several new podcasts from the web servers (make sure the media/ directory contains no media files before this test). &lt;br /&gt;
# Download new podcasts from several pCDN senders.&lt;br /&gt;
# Download podcasts that were downloaded before (and still in media/ directory).&lt;br /&gt;
# Reduce the harddisk quota to 50 MB using the user interface, and download several podcasts.&lt;br /&gt;
# Reduce the memory quota to 4 MB, and download several podcasts.&lt;br /&gt;
# Download new podcasts from several senders and unplug the Ethernet cable from one of the senders.&lt;br /&gt;
# Download new podcasts from several senders and unplug the Ethernet cables from all of the senders.&lt;br /&gt;
# Download new podcasts from several senders and unplug the Ethernet cable from the receiver. &lt;br /&gt;
# Manually truncate a media file in the media/ directory, and download that file from the podcast client.&lt;br /&gt;
# Manually truncate a media file in the media/ directory, and request that file from another pCDN client.&lt;br /&gt;
# Download several new podcasts using a pCDN client behind a NAT box. Check whether the IP/port are correctly reported at the pCDN server. Also try to use this pCDN client as the sender.&lt;br /&gt;
# Download several new podcasts using a pCDN client on a machine with multiple IPs. Use the user interface to select an outgoing IP. Then, change the NICs' IP addresses, and see whether the pCDN client continues to work after a reboot.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Server ==&lt;br /&gt;
Note: In case a tracker gets disconnected, re-connection suffices (e.g., re-plugging the cable if the cable was unplugged). There is no need to re-start the tracker.&lt;br /&gt;
&lt;br /&gt;
# When trackers are up and connected to each other:&lt;br /&gt;
#*peers download files from each other as expected.&lt;br /&gt;
# Have a peer download podcasts, and then bring the current primary tracker down (or unplug its network cable):&lt;br /&gt;
#*No problem. The peer communicates with the next tracker after a few seconds.&lt;br /&gt;
#(i) Have a peer join the network when some trackers are down, or (ii) After the peer joined the network (primary tracker is T1 for example), bring T1 down and have the peer request some files:&lt;br /&gt;
#*No problem. Peer connects to the next tracker to join/request.&lt;br /&gt;
#Re-plug the network cable of a tracker that was the primary and has been disconnected from the network since we unplugged its cable:&lt;br /&gt;
#*The tracker resumes working properly, becomes the primary, and receives the most recent database file from the tracker that was the primary during the absence (cable-unplugged) period.&lt;br /&gt;
#A tracker T1, which has priority over T2 for being the primary, has been down/disconnected and has just been started/re-connected. When a peer (now connected to T2) requests to download a file:&lt;br /&gt;
#*T2 leads the peer to T1 and the peer immediately connects to T1 to ask where to download the file from.&lt;br /&gt;
#Have a peer P1 download a file F1, wait until the backup moment (when the primary tracker T1 pushes its recent database to the others), and then bring the primary tracker down:&lt;br /&gt;
#*Tracker T2 becomes the primary and P2 downloads file F1 from peer P1 (not from the web server).&lt;br /&gt;
#Tracker T1 was the primary, but is down/disconnected now and tracker T2 is the current primary. Peer P1 downloads file F1. T1 comes back and becomes the primary, so T2 immediately pushes the most recent database to T1. Then, peer P2 requests to download F1:&lt;br /&gt;
#*P2 downloads F1 from P1.&lt;br /&gt;
#When a peer is connected to the network and suddenly all trackers get dead, or when the internet connection of the peer is cut:&lt;br /&gt;
#*The peer keeps retrying to connect. When some tracker comes back up, or when the peer is re-connected to the Internet, it connects to a tracker.&lt;br /&gt;
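The failover behaviour exercised above can be sketched as a simple client-side loop (a hypothetical illustration only; the tracker addresses, ports, and the actual pCDN join protocol are assumptions, not taken from the real client):&lt;br /&gt;

```python
import socket
import time

def connect_to_tracker(trackers, timeout=3.0, retry_delay=5.0, max_rounds=None):
    """Try trackers in priority order; the first reachable one becomes the
    peer's current tracker. If all trackers are down, keep retrying, as the
    test plan expects. (Sketch only; the real pCDN wire protocol is omitted.)"""
    rounds = 0
    while max_rounds is None or rounds < max_rounds:
        for host, port in trackers:
            try:
                # The highest-priority live tracker wins.
                return socket.create_connection((host, port), timeout=timeout)
            except OSError:
                continue  # tracker down/unreachable; fall through to the next one
        rounds += 1
        time.sleep(retry_delay)  # all trackers down; wait, then retry
    return None
```

A peer would call this with its ordered tracker list, e.g. `connect_to_tracker([("t1.example", 5000), ("t2.example", 5000)])` (hypothetical hostnames and port).&lt;br /&gt;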
&lt;br /&gt;
&lt;br /&gt;
== Admin Tools ==&lt;br /&gt;
=== Server Statistics === &lt;br /&gt;
# When a peer joins the network, there are (max) 3 updates that happen in the UI:&lt;br /&gt;
#* Bottom of the screen: '#Peers' increases by 1 after the MainData Update interval, which is set at the time of Login.&lt;br /&gt;
#* Connected Peers tab: The country the peer belongs to is inserted into the left table if it is not already there. When that country is clicked, all the peers belonging to that country, along with city and region details, are shown in the right table. Also, note that the peer count for that country is incremented.&lt;br /&gt;
#* Content tab: If that peer has files, these files are inserted into the left table (if these are not already there) and the peer info is added to the corresponding file. When one clicks on a particular file, all those peers containing that file are shown in the right table.&lt;br /&gt;
# When a peer leaves the network, there are (max) 3 updates that happen in the UI:&lt;br /&gt;
#* Bottom of the screen: '#Peers' decreases by 1 after the MainData Update interval, which is set at the time of Login.&lt;br /&gt;
#* Connected Peers tab: The peer is deleted from the respective country's right table, and if that country has no peers left, the country is deleted from the left table. Also, note that the peer count for that country is decremented.&lt;br /&gt;
#* Content tab: This particular peer's record is coloured grey in the right table for every file (present in the left table) it contains.&lt;br /&gt;
# When a peer downloads a file, the 'Content' tab is updated: the file info is inserted into the left table if it is not already there, and the peer info is added to the corresponding file's right table.&lt;br /&gt;
# When the backup file is transferred to the tracker we are currently connected to and restored, the entire 'Content' and 'Connected Peers' tab info is cleared and re-populated with the latest data. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Geo-Fencing ===&lt;br /&gt;
# Groups Tab:&lt;br /&gt;
#* Select a row and press the DEL key. Check in the database that the selected group is deleted, and that all the files corresponding to the group are deleted from both the 'media' table and the 'groups' table (which contains the fileid-groupid pairs). Also, note that the selected row is deleted from the table in the UI, and that all the files corresponding to the group are deleted from the 'Media Files' tab's table. In addition, the 'groupname' combo box in the 'Content Importer' tab is updated accordingly, i.e., the deleted group no longer appears in the combo box list.&lt;br /&gt;
#* Select a row and double-click it. Check that a Policy Window opens in which the group's policy is displayed correctly in the respective fields. After editing, when the 'Save' button is pressed, the corresponding group info is updated in the database and reflected in the 'Groups' tab. We must also be able to see the change reflected in the combo box of the 'Content Importer' tab and in the 'Media Files' tab, especially when the group's name is altered.&lt;br /&gt;
# Media Files Tab:&lt;br /&gt;
#* Select a row and press the DEL key. Check in the database that the selected file is deleted from the 'media' table, and that the corresponding groupid-fileid pair is deleted from the 'groups' table. Also, the file is removed from the table in the UI.&lt;br /&gt;
#* Select a row and change the group of a particular file by selecting another group from the combo box in the third column of the table. The change should be reflected not only in the UI; the database must also be updated with the new group. Check the 'groups' table to see whether the groupid corresponding to the fileid has changed to the new one. &lt;br /&gt;
# New Policy:&lt;br /&gt;
#* Fill in all the respective fields in the Policy Window and press the &amp;quot;Save&amp;quot; button. Check that a new entry has been inserted into the database's Policy table. Also, note that the group is inserted into the &amp;quot;Groups&amp;quot; tab. In addition, the combo box in the 'Content Importer' tab is updated with the new entry, as is the combo box in the third column of the 'Media Files' tab's table.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Content Importer ===&lt;br /&gt;
# Select a group from the combo box, enter a feed URL and an output feed filename, and press the &amp;quot;Import and Convert&amp;quot; button. &lt;br /&gt;
#* Success: A dialog pops up saying that the files were imported into the database. Check in the database's 'Media' table that these files were imported, and that the 'Groups' table was populated with the corresponding fileid and groupid information. Also, these files are inserted into the 'Media Files' tab's table pointing to the respective group. Check that the output file is created, with http://localhost substituted into every URL within that feed file.&lt;br /&gt;
#* Failure: A dialog pops up saying that the files could not be imported into the database. Check that the database is unchanged.&lt;br /&gt;
# Select a group from the combo box, enter a non-feed URL, and press &amp;quot;Import&amp;quot;.&lt;br /&gt;
#* Success: A dialog pops up saying that the file was imported into the database, along with the output URL. Check in the database's 'Media' table that this file was imported, and that the 'Groups' table was populated with the corresponding fileid and groupid information.&lt;br /&gt;
#* Failure: A dialog pops up saying that the file could not be imported into the database. Check that the database is unchanged.&lt;br /&gt;
# Try pressing the &amp;quot;Import and Convert&amp;quot; button with at least one of the Feed-URL fields empty; a dialog box should appear stating that both fields must be filled.&lt;br /&gt;
# Try pressing the &amp;quot;Import&amp;quot; button with the non-feed URL field empty; a dialog box should appear stating that the field must be filled.&lt;br /&gt;
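The URL-rewriting step of &amp;quot;Import and Convert&amp;quot; tested above can be illustrated with a short sketch (hypothetical: it assumes the feed is RSS XML whose media URLs live in the url attribute of enclosure elements; the real importer may work differently):&lt;br /&gt;

```python
import xml.etree.ElementTree as ET
from urllib.parse import urlparse

def rewrite_feed(feed_xml: str) -> str:
    """Point every enclosure URL in an RSS feed at http://localhost,
    preserving the original path, so that podcast downloads go through
    the local pCDN client. (Illustrative sketch; the element and
    attribute names are assumptions.)"""
    root = ET.fromstring(feed_xml)
    for enc in root.iter("enclosure"):
        old = urlparse(enc.get("url", ""))
        enc.set("url", "http://localhost" + old.path)
    return ET.tostring(root, encoding="unicode")
```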
&lt;br /&gt;
&lt;br /&gt;
=== User Administration ===&lt;br /&gt;
# Add a new user and check if it is inserted into the database and also in the table shown below.&lt;br /&gt;
# Change the privilege level of a user using the combo box in the table and check if it is reflected in the database.&lt;br /&gt;
# Select a row in the table and press DEL key. The row must get deleted from the database and also the table.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Known Issues ==&lt;br /&gt;
# Server: the keep-alive interval must not be smaller than the time it takes to transmit the database file from one tracker to another. For example, with a value of &amp;quot;1 minute&amp;quot; for that interval, if the database file grows to hundreds of megabytes or a few gigabytes (i.e., millions of users), that interval must be increased to &amp;quot;10 minutes&amp;quot; or so.&lt;br /&gt;
# Client: The ''preferences'' frame silently rejects settings that are out of supported ranges. For example, max memory usage below 4 MB will be rejected without giving any warning. Due to the limitation of Java class and display space, we plan to address this in the new GUI design.&lt;br /&gt;
# Client: When a receiver loses its Internet connection, the pCDN client does not return a proper error code to podcast clients, such as iTunes. Instead, we wait until iTunes times out.&lt;br /&gt;
# Server: Server would crash (XXX need to validate this), if the RDBMS goes off-line.&lt;br /&gt;
# Monitor: If a group / file is added / edited / deleted using Content Manager, it is not updated on all the other running monitoring applications.&lt;br /&gt;
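The first issue above (keep-alive vs. database transfer time) can be sanity-checked with back-of-the-envelope arithmetic; the sketch below is illustrative only, and the link speed and safety factor are assumptions:&lt;br /&gt;

```python
def min_keepalive_seconds(db_size_bytes: float, link_mbps: float,
                          safety: float = 2.0) -> float:
    """Lower bound on the keep-alive interval: the time to push the
    database file between trackers, padded by a safety factor."""
    transfer_s = db_size_bytes * 8 / (link_mbps * 1e6)
    return transfer_s * safety

# A 500 MB database over a 100 Mbit/s link takes roughly 40 s to transfer,
# so with a 2x margin the keep-alive should be at least 80 s -- already
# above the 1-minute setting mentioned above.
print(min_keepalive_seconds(500e6, 100))
```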
&lt;br /&gt;
&lt;br /&gt;
*The following are also posted to Bugzilla (they can be removed from here, but this is not recommended):&lt;br /&gt;
&lt;br /&gt;
#Client: When we stop downloading a file in iTunes, the pCDN client still continues downloading.&lt;br /&gt;
#*A user might have stopped the download to save bandwidth for other applications, but the pCDN client defeats that saving.&lt;br /&gt;
#Server: Each tracker has a settings.ini file, which must be identical across all trackers. Keeping this file in a centralized location, such as on a file server, may be helpful, because modifying it on only one tracker may cause the entire system to malfunction. &lt;br /&gt;
#The directory containing the media files a peer has stored grows over time. There should be a mechanism to delete unnecessary files. [Cheng 08/03/01, There is a max disk usage setting in the client GUI, which will remove the least recently received files. Note, however, that the recency is not persistent, so the received time of each file is lost after rebooting pCDN clients.]&lt;/div&gt;</summary>
		<author><name>Kianoosh</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=pCDN:Testplan&amp;diff=1569</id>
		<title>pCDN:Testplan</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=pCDN:Testplan&amp;diff=1569"/>
		<updated>2008-03-04T19:48:46Z</updated>

		<summary type="html">&lt;p&gt;Kianoosh: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Client ==&lt;br /&gt;
# Download several new podcasts from the web servers (make sure the media/ directory contains no media files before this test). &lt;br /&gt;
# Download new podcasts from several pCDN senders.&lt;br /&gt;
# Download podcasts that were downloaded before (and still in media/ directory).&lt;br /&gt;
# Reduce the harddisk quota to 50 MB using the user interface, and download several podcasts.&lt;br /&gt;
# Reduce the memory quota to 4 MB, and download several podcasts.&lt;br /&gt;
# Download new podcasts from several senders and unplug the Ethernet cable from one of the senders.&lt;br /&gt;
# Download new podcasts from several senders and unplug the Ethernet cables from all of the senders.&lt;br /&gt;
# Download new podcasts from several senders and unplug the Ethernet cable from the receiver. &lt;br /&gt;
# Manually truncate a media file in the media/ directory, and download that file from the podcast client.&lt;br /&gt;
# Manually truncate a media file in the media/ directory, and request that file from another pCDN client.&lt;br /&gt;
# Download several new podcasts using a pCDN client behind a NAT box. Check whether the IP/port are correctly reported at the pCDN server. Also try to use this pCDN client as the sender.&lt;br /&gt;
# Download several new podcasts using a pCDN client on a machine with multiple IPs. Use the user interface to select an outgoing IP. Then, change the NICs' IP addresses, and see whether the pCDN client continues to work after a reboot.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Server ==&lt;br /&gt;
Note: In case a tracker gets disconnected, re-connection suffices (e.g., re-plugging the cable if the cable was unplugged). There is no need to re-start the tracker.&lt;br /&gt;
&lt;br /&gt;
# When trackers are up and connected to each other:&lt;br /&gt;
#*peers download files from each other as expected.&lt;br /&gt;
# Have a peer download podcasts, and then bring the current primary tracker down (or unplug its network cable):&lt;br /&gt;
#*No problem. The peer communicates with the next tracker after a few seconds.&lt;br /&gt;
#(i) Have a peer join the network when some trackers are down, or (ii) After the peer joined the network (primary tracker is T1 for example), bring T1 down and have the peer request some files:&lt;br /&gt;
#*No problem. Peer connects to the next tracker to join/request.&lt;br /&gt;
#Re-plug the network cable of a tracker that was the primary and has been disconnected from the network since we unplugged its cable:&lt;br /&gt;
#*The tracker resumes working properly, becomes the primary, and receives the most recent database file from the tracker that was the primary during the absence (cable-unplugged) period.&lt;br /&gt;
#A tracker T1, which has priority over T2 for being the primary, has been down/disconnected and has just been started/re-connected. When a peer (now connected to T2) requests to download a file:&lt;br /&gt;
#*T2 leads the peer to T1 and the peer immediately connects to T1 to ask where to download the file from.&lt;br /&gt;
#Have a peer P1 download a file F1, wait until the backup moment (when the primary tracker T1 pushes its recent database to the others), and then bring the primary tracker down:&lt;br /&gt;
#*Tracker T2 becomes the primary and P2 downloads file F1 from peer P1 (not from the web server).&lt;br /&gt;
#Tracker T1 was the primary, but is down/disconnected now and tracker T2 is the current primary. Peer P1 downloads file F1. T1 comes back and becomes the primary, so T2 immediately pushes the most recent database to T1. Then, peer P2 requests to download F1:&lt;br /&gt;
#*P2 downloads F1 from P1.&lt;br /&gt;
#When a peer is connected to the network and suddenly all trackers get dead:&lt;br /&gt;
#*The peer keeps retrying to connect. Thus, when some tracker comes back up, the peer connects to it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Admin Tools ==&lt;br /&gt;
=== Server Statistics === &lt;br /&gt;
# When a peer joins the network, there are (max) 3 updates that happen in the UI:&lt;br /&gt;
#* Bottom of the screen: '#Peers' increases by 1 after the MainData Update interval, which is set at the time of Login.&lt;br /&gt;
#* Connected Peers tab: The country the peer belongs to is inserted into the left table if it is not already there. When that country is clicked, all the peers belonging to that country, along with city and region details, are shown in the right table. Also, note that the peer count for that country is incremented.&lt;br /&gt;
#* Content tab: If that peer has files, these files are inserted into the left table (if these are not already there) and the peer info is added to the corresponding file. When one clicks on a particular file, all those peers containing that file are shown in the right table.&lt;br /&gt;
# When a peer leaves the network, there are (max) 3 updates that happen in the UI:&lt;br /&gt;
#* Bottom of the screen: '#Peers' decreases by 1 after the MainData Update interval, which is set at the time of Login.&lt;br /&gt;
#* Connected Peers tab: The peer is deleted from the respective country's right table, and if that country has no peers left, the country is deleted from the left table. Also, note that the peer count for that country is decremented.&lt;br /&gt;
#* Content tab: This particular peer's record is coloured grey in the right table for every file (present in the left table) it contains.&lt;br /&gt;
# When a peer downloads a file, the 'Content' tab is updated: the file info is inserted into the left table if it is not already there, and the peer info is added to the corresponding file's right table.&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
=== Geo-Fencing ===&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Content Importer ===&lt;br /&gt;
# Select a group from the combo box, enter a feed URL and an output feed filename, and press the &amp;quot;Import and Convert&amp;quot; button. &lt;br /&gt;
#* Success: A dialog pops up saying that the files were imported into the database. Check in the database's 'Media' table that these files were imported, and that the 'Groups' table was populated with the corresponding fileid and groupid information. Also, these files are inserted into the 'Media Files' tab's table pointing to the respective group. Check that the output file is created, with http://localhost substituted into every URL within that feed file.&lt;br /&gt;
#* Failure: A dialog pops up saying that the files could not be imported into the database. Check that the database is unchanged.&lt;br /&gt;
# Select a group from the combo box, enter a non-feed URL, and press &amp;quot;Import&amp;quot;.&lt;br /&gt;
#* Success: A dialog pops up saying that the file was imported into the database, along with the output URL. Check in the database's 'Media' table that this file was imported, and that the 'Groups' table was populated with the corresponding fileid and groupid information.&lt;br /&gt;
#* Failure: A dialog pops up saying that the file could not be imported into the database. Check that the database is unchanged.&lt;br /&gt;
# Try pressing the &amp;quot;Import and Convert&amp;quot; button with at least one of the Feed-URL fields empty; a dialog box should appear stating that both fields must be filled.&lt;br /&gt;
# Try pressing the &amp;quot;Import&amp;quot; button with the non-feed URL field empty; a dialog box should appear stating that the field must be filled.&lt;br /&gt;
&lt;br /&gt;
=== User Administration ===&lt;br /&gt;
# Add a new user and check if it is inserted into the database and also in the table shown below.&lt;br /&gt;
# Change the privilege level of a user using the combo box in the table and check if it is reflected in the database.&lt;br /&gt;
# Select a row in the table and press DEL key. The row must get deleted from the database and also the table.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Known Issues ==&lt;br /&gt;
# Server: the keep-alive interval must not be smaller than the time it takes to transmit the database file from one tracker to another. For example, with a value of &amp;quot;1 minute&amp;quot; for that interval, if the database file grows to hundreds of megabytes or a few gigabytes (i.e., millions of users), that interval must be increased to &amp;quot;10 minutes&amp;quot; or so.&lt;br /&gt;
# Client: The ''preferences'' frame silently rejects settings that are out of supported ranges. For example, max memory usage below 4 MB will be rejected without giving any warning. Due to the limitation of Java class and display space, we plan to address this in the new GUI design.&lt;br /&gt;
# Client: When a receiver loses its Internet connection, the pCDN client does not return a proper error code to podcast clients, such as iTunes. Instead, we wait until iTunes times out.&lt;br /&gt;
# Server: Server would crash (XXX need to validate this), if the RDBMS goes off-line.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*The following are also posted to Bugzilla (they can be removed from here, but this is not recommended):&lt;br /&gt;
&lt;br /&gt;
#Client: When we stop downloading a file in iTunes, the pCDN client still continues downloading.&lt;br /&gt;
#*A user might have stopped the download to save bandwidth for other applications, but the pCDN client defeats that saving.&lt;br /&gt;
#Server: Each tracker has a settings.ini file, which must be identical across all trackers. Keeping this file in a centralized location, such as on a file server, may be helpful, because modifying it on only one tracker may cause the entire system to malfunction. &lt;br /&gt;
#The directory containing the media files a peer has stored grows over time. There should be a mechanism to delete unnecessary files. [Cheng 08/03/01, There is a max disk usage setting in the client GUI, which will remove the least recently received files. Note, however, that the recency is not persistent, so the received time of each file is lost after rebooting pCDN clients.]&lt;/div&gt;</summary>
		<author><name>Kianoosh</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=pCDN:Testplan&amp;diff=1541</id>
		<title>pCDN:Testplan</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=pCDN:Testplan&amp;diff=1541"/>
		<updated>2008-03-04T03:13:39Z</updated>

		<summary type="html">&lt;p&gt;Kianoosh: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Client ==&lt;br /&gt;
# Download several new podcasts from the web servers (make sure the media/ directory contains no media files before this test). &lt;br /&gt;
# Download new podcasts from several pCDN senders.&lt;br /&gt;
# Download podcasts that were downloaded before (and still in media/ directory).&lt;br /&gt;
# Reduce the harddisk quota to 50 MB using the user interface, and download several podcasts.&lt;br /&gt;
# Reduce the memory quota to 4 MB, and download several podcasts.&lt;br /&gt;
# Download new podcasts from several senders and unplug the Ethernet cable from one of the senders.&lt;br /&gt;
# Download new podcasts from several senders and unplug the Ethernet cables from all of the senders.&lt;br /&gt;
# Download new podcasts from several senders and unplug the Ethernet cable from the receiver. &lt;br /&gt;
# Manually truncate a media file in the media/ directory, and download that file from the podcast client.&lt;br /&gt;
# Manually truncate a media file in the media/ directory, and request that file from another pCDN client.&lt;br /&gt;
# Download several new podcasts using a pCDN client behind a NAT box. Check whether the IP/port are correctly reported at the pCDN server. Also try to use this pCDN client as the sender.&lt;br /&gt;
# Download several new podcasts using a pCDN client on a machine with multiple IPs. Use the user interface to select an outgoing IP. Then, change the NICs' IP addresses, and see whether the pCDN client continues to work after a reboot.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Server ==&lt;br /&gt;
Note: In case a tracker gets disconnected, re-connection suffices (e.g., re-plugging the cable if the cable was unplugged). There is no need to re-start the tracker.&lt;br /&gt;
&lt;br /&gt;
# When trackers are up and connected to each other:&lt;br /&gt;
#*peers download files from each other as expected.&lt;br /&gt;
# Have a peer download podcasts, and then bring the current primary tracker down (or unplug its network cable):&lt;br /&gt;
#*No problem. The peer communicates with the next tracker after a few seconds.&lt;br /&gt;
#(i) Have a peer join the network when some trackers are down, or (ii) After the peer joined the network (primary tracker is T1 for example), bring T1 down and have the peer request some files:&lt;br /&gt;
#*No problem. Peer connects to the next tracker to join/request.&lt;br /&gt;
#Re-plug the network cable of a tracker that was the primary and has been disconnected from the network since we unplugged its cable:&lt;br /&gt;
#*The tracker resumes working properly, becomes the primary, and receives the most recent database file from the tracker that was the primary during the absence (cable-unplugged) period.&lt;br /&gt;
#A tracker T1, which has priority over T2 for being the primary, has been down/disconnected and has just been started/re-connected. When a peer (now connected to T2) requests to download a file:&lt;br /&gt;
#*T2 leads the peer to T1 and the peer immediately connects to T1 to ask where to download the file from.&lt;br /&gt;
#Have a peer P1 download a file F1, wait until the backup moment (when the primary tracker T1 pushes its recent database to the others), and then bring the primary tracker down:&lt;br /&gt;
#*Tracker T2 becomes the primary and P2 downloads file F1 from peer P1 (not from the web server).&lt;br /&gt;
#Tracker T1 was the primary, but is down/disconnected now and tracker T2 is the current primary. Peer P1 downloads file F1. T1 comes back and becomes the primary, so T2 immediately pushes the most recent database to T1. Then, peer P2 requests to download F1:&lt;br /&gt;
#*If T1 was brought down by Ctrl+C and is started again now: P2 downloads F1 from P1.&lt;br /&gt;
#*If T1 was disconnected by unplugging its network cable: when P2 now requests the file, the balloon “Sorry, you are not permitted to download the file” is shown (XXX).&lt;br /&gt;
#When a peer is connected to the network and suddenly all trackers get dead:&lt;br /&gt;
#*The peer keeps retrying to connect. Thus, when some tracker comes back up, the peer connects to it.&lt;br /&gt;
#Run trackers T1 and T2, where T1 is the primary, and a peer is connected to the network. Stop T1, so T2 becomes the primary, and the peer connects to T2 after a short time. Start T1 again. Now, for a few seconds, T2 still thinks it is the primary. Immediately unplug T2’s cable.&lt;br /&gt;
#*T1 considers itself the primary, and the peer downloads files through T1.&lt;br /&gt;
#*T2, after some seconds, knows that it is the primary. The peer is able to download files through T2.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Known Issues ==&lt;br /&gt;
# Server: the keep-alive interval must not be smaller than the time it takes to transmit the database file from one tracker to another. For example, with a value of &amp;quot;1 minute&amp;quot; for that interval, if the database file grows to hundreds of megabytes or a few gigabytes (i.e., millions of users), that interval must be increased to &amp;quot;10 minutes&amp;quot; or so.&lt;br /&gt;
# Client: The ''preferences'' frame silently rejects settings that are out of supported ranges. For example, max memory usage below 4 MB will be rejected without giving any warning. Due to the limitation of Java class and display space, we plan to address this in the new GUI design.&lt;br /&gt;
# Client: When a receiver loses its Internet connection, the pCDN client does not return a proper error code to podcast clients, such as iTunes. Instead, we wait until iTunes times out.&lt;br /&gt;
# Server: The server may crash (XXX: needs validation) if the RDBMS goes off-line.&lt;br /&gt;
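The keep-alive constraint above amounts to a simple sizing rule: the interval must comfortably exceed the database transfer time. A hypothetical sketch (the margin, floor, and example link speed are illustrative assumptions, not measured pCDN values):&lt;br /&gt;

```python
# Hypothetical sizing rule for the keep-alive issue above: the interval should
# exceed the time needed to push the database file between trackers, with a
# safety margin. The margin and 60-second floor are illustrative assumptions.
def keep_alive_seconds(db_size_bytes, link_bytes_per_sec, margin=2.0, floor_sec=60):
    transfer_sec = db_size_bytes / link_bytes_per_sec
    return max(floor_sec, margin * transfer_sec)

# e.g., a 500 MB database over a 10 MB/s link takes 50 s to transfer,
# so keep_alive_seconds(500 * 10**6, 10 * 10**6) returns 100.0
```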
&lt;br /&gt;
&lt;br /&gt;
*The following are also posted to Bugzilla (they can be removed from here, but keeping them is recommended):&lt;br /&gt;
&lt;br /&gt;
#Client: When we stop downloading a file in iTunes, the pCDN client still continues downloading.&lt;br /&gt;
#*A user might have stopped the download to free up bandwidth for other applications, but the pCDN client defeats that intent.&lt;br /&gt;
#Server: Each tracker has a settings.ini file, which must be identical for all trackers. Keeping this file in a central place, such as on a file server, may be helpful, because modifying the file on one tracker may cause the entire system to malfunction. &lt;br /&gt;
#The directory containing the media files a peer has stored grows over time. There should be a mechanism to delete unneeded files. [Cheng 08/03/01: There is a max disk usage setting in the client GUI, which removes the least recently received files. Note, however, that the recency information is not persistent, so the received time of each file is lost after rebooting the pCDN client.]&lt;/div&gt;</summary>
		<author><name>Kianoosh</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=pCDN:Testplan&amp;diff=1495</id>
		<title>pCDN:Testplan</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=pCDN:Testplan&amp;diff=1495"/>
		<updated>2008-03-01T20:14:31Z</updated>

		<summary type="html">&lt;p&gt;Kianoosh: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Client ==&lt;br /&gt;
# Download several new podcasts from the web servers (make sure the media/ directory contains no media files before this test). &lt;br /&gt;
# Download new podcasts from several pCDN senders.&lt;br /&gt;
# Download podcasts that were downloaded before (and still in media/ directory).&lt;br /&gt;
# Reduce the hard disk quota to 50 MB using the user interface, and download several podcasts.&lt;br /&gt;
# Reduce the memory quota to 4 MB, and download several podcasts.&lt;br /&gt;
# Download new podcasts from several senders and unplug the Ethernet cable from one of the senders.&lt;br /&gt;
# Download new podcasts from several senders and unplug the Ethernet cables from all of the senders.&lt;br /&gt;
# Download new podcasts from several senders and unplug the Ethernet cable from the receiver. &lt;br /&gt;
# Manually truncate a media file in the media/ directory, and download that file from the podcast client.&lt;br /&gt;
# Manually truncate a media file in the media/ directory, and request that file from another pCDN client.&lt;br /&gt;
# Download several new podcasts using a pCDN client behind a NAT box. Check whether the IP/port are correctly reported at the pCDN server. Also try to use this pCDN client as the sender.&lt;br /&gt;
# Download several new podcasts using a pCDN client on a machine with multiple IPs. Use the user interface to select an outgoing IP. Then, change the NICs' IP addresses, and see whether the pCDN client continues to work after a reboot.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Server ==&lt;br /&gt;
Note: If a tracker gets disconnected, re-connecting it suffices (e.g., re-plugging the cable if the cable was unplugged); there is no need to re-start the tracker.&lt;br /&gt;
&lt;br /&gt;
# When trackers are up and connected to each other:&lt;br /&gt;
#*peers download files from each other as expected.&lt;br /&gt;
# Have a peer download podcasts, and then bring the current primary tracker down (or unplug its network cable):&lt;br /&gt;
#*No problem. The peer communicates with the next tracker after a few seconds.&lt;br /&gt;
#(i) Have a peer join the network when some trackers are down, or (ii) after the peer has joined the network (say the primary tracker is T1), bring T1 down and have the peer request some files:&lt;br /&gt;
#*No problem. Peer connects to the next tracker to join/request.&lt;br /&gt;
#Re-plug the network cable of a tracker that was the primary and has been disconnected from the network since we unplugged its cable:&lt;br /&gt;
#*The tracker resumes working properly, becomes the primary, and receives the most recent database file from the tracker that acted as primary during the absence (cable-unplugged) period.&lt;br /&gt;
#Tracker T1, which has priority over T2 for being the primary, has been down/disconnected and has just been started/re-connected. When a peer (now connected to T2) requests to download a file:&lt;br /&gt;
#*T2 redirects the peer to T1, and the peer immediately connects to T1 to ask where to download the file from.&lt;br /&gt;
#Have a peer P1 download a file F1, wait until the backup moment (when the primary tracker T1 pushes its recent database to the others), and bring the primary tracker down after that:&lt;br /&gt;
#*Tracker T2 becomes the primary, and a second peer P2 downloads file F1 from peer P1 (not from the web server).&lt;br /&gt;
#Tracker T1 was the primary, but is down/disconnected now and tracker T2 is the current primary. Peer P1 downloads file F1. T1 comes back and becomes the primary, so T2 immediately pushes the most recent database to T1. Then, peer P2 requests to download F1:&lt;br /&gt;
#*If T1 was brought down by Ctrl+C and is started again now: P2 downloads F1 from P1.&lt;br /&gt;
#*If T1 was disconnected by unplugging its network cable: when P2 now requests the file, the balloon “Sorry, you are not permitted to download the file” is shown (XXX).&lt;br /&gt;
#When a peer is connected to the network and all trackers suddenly go down:&lt;br /&gt;
#*The peer keeps re-trying to connect. Thus, when a tracker comes back up, the peer connects to it.&lt;br /&gt;
#Run trackers T1 and T2, where T1 is the primary, and a peer is connected to the network. Stop T1, so T2 becomes the primary, and the peer connects to T2 after a short time. Start T1 again. Now, for a few seconds T2 still thinks it is the primary. Immediately unplug T2’s cable.&lt;br /&gt;
#*T1 considers itself the primary, and the peer downloads files through T1.&lt;br /&gt;
#*After a few seconds, T2 also considers itself the primary. The peer is able to download files through T2.&lt;br /&gt;
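Several of the scenarios above assume priority-ordered primary election among trackers (T1 outranks T2 when both are reachable). A minimal sketch of such an election, under that assumed interface — the function name, the priority list, and the alive-set are illustrative, not the actual pCDN implementation:&lt;br /&gt;

```python
# Hypothetical sketch of priority-based primary election implied by the test
# scenarios above: trackers are listed in priority order, and the first one
# currently reachable acts as the primary. Names are assumptions.
def elect_primary(trackers_by_priority, alive):
    for t in trackers_by_priority:
        if t in alive:
            return t
    # No tracker reachable: peers keep retrying until one comes back.
    return None
```

For example, with priority order ["T1", "T2"], T1 is elected whenever it is alive, and T2 takes over only while T1 is down — matching the failover and fail-back behaviour tested above.&lt;br /&gt;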
&lt;br /&gt;
&lt;br /&gt;
== Known Issues ==&lt;br /&gt;
# Client: The ''preferences'' frame silently rejects settings that are outside the supported ranges. For example, a max memory usage below 4 MB will be rejected without any warning. Due to limitations of the Java classes and display space, we plan to address this in the new GUI design.&lt;br /&gt;
# Client: When a receiver loses its Internet connection, the pCDN client does not return a proper error code to podcast clients, such as iTunes. Instead, we wait until iTunes times out.&lt;br /&gt;
# Server: The server may crash (XXX: needs validation) if the RDBMS goes off-line.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*The following are also posted to Bugzilla (they can be removed from here, but keeping them is recommended):&lt;br /&gt;
&lt;br /&gt;
#Client: When we stop downloading a file in iTunes, the pCDN client still continues downloading.&lt;br /&gt;
#*A user might have stopped the download to free up bandwidth for other applications, but the pCDN client defeats that intent.&lt;br /&gt;
#Server: Each tracker has a settings.ini file, which must be identical for all trackers. Keeping this file in a central place, such as on a file server, may be helpful, because modifying the file on one tracker may cause the entire system to malfunction. &lt;br /&gt;
#The directory containing the media files a peer has stored grows over time. There should be a mechanism to delete unneeded files. [Cheng 08/03/01: There is a max disk usage setting in the client GUI, which removes the least recently received files. Note, however, that the recency information is not persistent, so the received time of each file is lost after rebooting the pCDN client.]&lt;/div&gt;</summary>
		<author><name>Kianoosh</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=pCDN:Testplan&amp;diff=1491</id>
		<title>pCDN:Testplan</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=pCDN:Testplan&amp;diff=1491"/>
		<updated>2008-03-01T06:50:30Z</updated>

		<summary type="html">&lt;p&gt;Kianoosh: Results of a comprehensive test of the new version of tracker (new replication mechanism) is added.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Client ==&lt;br /&gt;
# Download several new podcasts from the web servers (make sure the media/ directory contains no media files before this test). &lt;br /&gt;
# Download new podcasts from several pCDN senders.&lt;br /&gt;
# Download podcasts that were downloaded before (and still in media/ directory).&lt;br /&gt;
# Reduce the hard disk quota to 50 MB using the user interface, and download several podcasts.&lt;br /&gt;
# Reduce the memory quota to 4 MB, and download several podcasts.&lt;br /&gt;
# Download new podcasts from several senders and unplug the Ethernet cable from one of the senders.&lt;br /&gt;
# Download new podcasts from several senders and unplug the Ethernet cables from all of the senders.&lt;br /&gt;
# Download new podcasts from several senders and unplug the Ethernet cable from the receiver. &lt;br /&gt;
# Manually truncate a media file in the media/ directory, and download that file from the podcast client.&lt;br /&gt;
# Manually truncate a media file in the media/ directory, and request that file from another pCDN client.&lt;br /&gt;
# Download several new podcasts using a pCDN client behind a NAT box. Check whether the IP/port are correctly reported at the pCDN server. Also try to use this pCDN client as the sender.&lt;br /&gt;
# Download several new podcasts using a pCDN client on a machine with multiple IPs. Use the user interface to select an outgoing IP. Then, change the NICs' IP addresses, and see whether the pCDN client continues to work after a reboot.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Server ==&lt;br /&gt;
Note: If a tracker gets disconnected, re-connecting it suffices (e.g., re-plugging the cable if the cable was unplugged); there is no need to re-start the tracker.&lt;br /&gt;
&lt;br /&gt;
# When trackers are up and connected to each other:&lt;br /&gt;
#*peers download files from each other as expected.&lt;br /&gt;
# Have a peer download podcasts, and then bring the current primary tracker down (or unplug its network cable):&lt;br /&gt;
#*No problem. The peer communicates with the next tracker after a few seconds.&lt;br /&gt;
#(i) Have a peer join the network when some trackers are down, or (ii) after the peer has joined the network (say the primary tracker is T1), bring T1 down and have the peer request some files:&lt;br /&gt;
#*No problem. Peer connects to the next tracker to join/request.&lt;br /&gt;
#Re-plug the network cable of a tracker that was the primary and has been disconnected from the network since we unplugged its cable:&lt;br /&gt;
#*The tracker resumes working properly, becomes the primary, and receives the most recent database file from the tracker that acted as primary during the absence (cable-unplugged) period.&lt;br /&gt;
#Tracker T1, which has priority over T2 for being the primary, has been down/disconnected and has just been started/re-connected. When a peer (now connected to T2) requests to download a file:&lt;br /&gt;
#*T2 redirects the peer to T1, and the peer immediately connects to T1 to ask where to download the file from.&lt;br /&gt;
#Have a peer P1 download a file F1, wait until the backup moment (when the primary tracker T1 pushes its recent database to the others), and bring the primary tracker down after that:&lt;br /&gt;
#*Tracker T2 becomes the primary, and a second peer P2 downloads file F1 from peer P1 (not from the web server).&lt;br /&gt;
#Tracker T1 was the primary, but is down/disconnected now and tracker T2 is the current primary. Peer P1 downloads file F1. T1 comes back and becomes the primary, so T2 immediately pushes the most recent database to T1. Then, peer P2 requests to download F1:&lt;br /&gt;
#*If T1 was brought down by Ctrl+C and is started again now: P2 downloads F1 from P1.&lt;br /&gt;
#*If T1 was disconnected by unplugging its network cable: when P2 now requests the file, the balloon “Sorry, you are not permitted to download the file” is shown (XXX).&lt;br /&gt;
#When a peer is connected to the network and all trackers suddenly go down:&lt;br /&gt;
#*The peer keeps re-trying to connect. Thus, when a tracker comes back up, the peer connects to it.&lt;br /&gt;
#Run trackers T1 and T2, where T1 is the primary, and a peer is connected to the network. Stop T1, so T2 becomes the primary, and the peer connects to T2 after a short time. Start T1 again. Now, for a few seconds T2 still thinks it is the primary. Immediately unplug T2’s cable.&lt;br /&gt;
#*T1 considers itself the primary, and the peer downloads files through T1.&lt;br /&gt;
#*After a few seconds, T2 also considers itself the primary. The peer is able to download files through T2.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Known Issues ==&lt;br /&gt;
# Client: The ''preferences'' frame silently rejects settings that are outside the supported ranges. For example, a max memory usage below 4 MB will be rejected without any warning. Due to limitations of the Java classes and display space, we plan to address this in the new GUI design.&lt;br /&gt;
# Client: When a receiver loses its Internet connection, the pCDN client does not return a proper error code to podcast clients, such as iTunes. Instead, we wait until iTunes times out.&lt;br /&gt;
# Server: The server may crash (XXX: needs validation) if the RDBMS goes off-line.&lt;br /&gt;
#Client: When we stop downloading a file in iTunes, the pCDN client still continues downloading.&lt;br /&gt;
#*A user might have stopped the download to free up bandwidth for other applications, but the pCDN client defeats that intent.&lt;br /&gt;
#Server: Each tracker has a settings.ini file, which must be identical for all trackers. Keeping this file in a central place, such as on a file server, may be helpful, because modifying the file on one tracker may cause the entire system to malfunction.&lt;br /&gt;
#The directory containing the media files a peer has stored grows over time. There should be a mechanism to delete unneeded files.&lt;/div&gt;</summary>
		<author><name>Kianoosh</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=Publications&amp;diff=1440</id>
		<title>Publications</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=Publications&amp;diff=1440"/>
		<updated>2008-02-27T18:49:11Z</updated>

		<summary type="html">&lt;p&gt;Kianoosh: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* '''Journals/Magazines'''&lt;br /&gt;
** C. Hsu and M. Hefeeda, Partitioning of Multiple Fine-Grained Scalable Video Sequences Concurrently Streamed to Heterogeneous Clients, ''IEEE Transactions on Multimedia'', Accepted November 2007.&lt;br /&gt;
** M. Hefeeda and O. Saleh, Traffic Modeling and Proportional Partial Caching for Peer-to-Peer Systems, ''IEEE/ACM Transactions on Networking'', Accepted October 2007.&lt;br /&gt;
** M. Hefeeda and C. Hsu,  [http://www.cs.sfu.ca/~mhefeeda/Papers/tomccap07_fgs.pdf Rate-Distortion Optimized Streaming of Fine-Grained Scalable Video Sequences], ''ACM Transactions on Multimedia Computing, Communications, and Applications'', Accepted June 2007.&lt;br /&gt;
** C. Hsu and M. Hefeeda, [http://www.cs.sfu.ca/~mhefeeda/Papers/tomccap06.pdf On the Accuracy and Complexity of Rate-Distortion Models for FGS-encoded Video Sequences], ''ACM Transactions on Multimedia Computing, Communications, and Applications'', Accepted February 2007.  &lt;br /&gt;
** C. Hsu and M. Hefeeda, [http://www.cs.sfu.ca/~mhefeeda/Papers/tom08.pdf Optimal Coding of Multi-layer and Multi-version Video Streams], ''IEEE Transactions on Multimedia'', 10(1), pp. 121--131, January 2008.&lt;br /&gt;
** B. Jules and M. Hefeeda, [http://www.cs.sfu.ca/~mhefeeda/Papers/pCDN07.pdf pCDN: Peer-assisted Content Distribution Network], ''CBC/Radio-Canada Technology Review Magazine'', Issue 4, pp. 1--14, July 2007. (Invited, also published in [http://www.cs.sfu.ca/~mhefeeda/Papers/pCDN07_french.pdf French]).&lt;br /&gt;
** Y. Tu, J. Sun, M. Hefeeda, Y. Xia, S. Prabhakar, [http://www.cs.sfu.ca/~mhefeeda/Papers/tomccap05.pdf An Analytical Study of Peer-to-Peer Media Streaming Systems], ''ACM Transactions on Multimedia Computing,  Communications, and Applications'', 1(4),  pp. 354--376, November 2005.&lt;br /&gt;
** M. Hefeeda, A. Habib, D. Xu, B. Bhargava, B. Botev, [http://www.cs.sfu.ca/~mhefeeda/Papers/mmsj05.pdf CollectCast: A Peer-to-Peer Service for Media Streaming], ''ACM/Springer Multimedia Systems Journal'', 11(1), pp. 68--81, November 2005.&lt;br /&gt;
** C. Schuba, M. Hefeeda, J. Goldschmidt, M. Speer, [http://www.cs.sfu.ca/~mhefeeda/Papers/ieeeComp04Final.pdf Scaling Network Services Using Programmable Network Devices],  ''IEEE Computer'', pp. 52--60, April 2005.&lt;br /&gt;
** M. Hefeeda,  B. Bhargava,  D. Yau,  [http://www.cs.sfu.ca/~mhefeeda/Papers/comnet04.pdf A Hybrid  Architecture for Cost-Effective On-Demand  Media Streaming], ''Elsevier Computer Networks'',  44(3), pp. 353--382, February 2004.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* '''Conferences/Workshops'''&lt;br /&gt;
** M. Hefeeda and H. Ahmadi, [http://www.cs.sfu.ca/~mhefeeda/Papers/icnp07.pdf A Probabilistic Coverage Protocol for Wireless Sensor Networks], In Proc. of IEEE International Conference on Network Protocols (ICNP'07), Beijing, China, October 2007.   (Acceptance: 15%) Slides [http://www.cs.sfu.ca/~mhefeeda/Talks/icnp07.ppt ppt]  [http://www.cs.sfu.ca/~mhefeeda/Talks/icnp07.pdf pdf]&lt;br /&gt;
** M. Hefeeda and H. Ahmadi, [http://www.cs.sfu.ca/~mhefeeda/Papers/mass07.pdf Network Connectivity under Probabilistic Communication Models in Wireless Sensor Networks], In Proc. of IEEE International Conference on Mobile Ad-hoc and Sensor Systems (MASS'07),  Pisa, Italy, October 2007.   (Acceptance: 25%)&lt;br /&gt;
** M. Hefeeda and M. Bagheri, [http://www.cs.sfu.ca/~mhefeeda/Papers/mass-ghs07.pdf Wireless Sensor Networks for Early Detection of Forest Fires], In Proc. of International Workshop on Mobile Ad hoc and Sensor Systems for Global and Homeland Security (MASS-GHS’07), in conjunction with IEEE MASS’07,  Pisa, Italy, October 2007.  Slides [http://www.cs.sfu.ca/~mhefeeda/Talks/mass-ghs07.ppt ppt][http://www.cs.sfu.ca/~mhefeeda/Talks/mass-ghs07.pdf pdf]&lt;br /&gt;
** C. Hsu and M. Hefeeda, [http://www.cs.sfu.ca/~mhefeeda/Papers/iwqos07.pdf Structuring Multi-Layer Scalable Streams to Maximize Client-Perceived Quality], In Proc. of IEEE International Workshop on Quality of Service (IWQoS'07), pp. 182--187, Chicago, IL, June 2007. &lt;br /&gt;
** C. Hsu and M. Hefeeda, [http://www.cs.sfu.ca/~mhefeeda/Papers/nossdav07.pdf Optimal Partitioning of Fine-Grained Scalable Video Streams], In Proc. of ACM International Workshop on Network and Operating Systems Support for Digital Audio &amp;amp; Video (NOSSDAV'07), pp. 63--68, Urbana-Champaign, IL, June 2007.   Slides [http://www.cs.sfu.ca/~mhefeeda/Talks/nossdav07.ppt ppt]  [http://www.cs.sfu.ca/~mhefeeda/Talks/nossdav07.pdf pdf].&lt;br /&gt;
** M. Hefeeda and M. Bagheri, [http://www.cs.sfu.ca/~mhefeeda/Papers/infocom07.pdf Randomized k-Coverage Algorithms For Dense Sensor Networks], In Proc. of  IEEE INFOCOM 2007 Minisymposium, pp. 2376--2380, Anchorage, AK, May 2007.  Slides [http://www.cs.sfu.ca/~mhefeeda/Talks/infocom07.ppt ppt]  [http://www.cs.sfu.ca/~mhefeeda/Talks/infocom07.pdf pdf].  (Acceptance: 25%)&lt;br /&gt;
** C. Hsu and M. Hefeeda, [http://www.cs.sfu.ca/~mhefeeda/Papers/mmcn07.pdf Optimal Bit Allocation for Fine-Grained Scalable Video Sequences in Distributed Streaming Environments], In Proc. of 14th ACM/SPIE Multimedia Computing and Networking Conference (MMCN'07), pp. 1--12, San Jose, CA, Jan 2007.    Slides [http://www.cs.sfu.ca/~mhefeeda/Talks/mmcn07.ppt ppt]  [http://www.cs.sfu.ca/~mhefeeda/Talks/mmcn07.pdf pdf].  (Acceptance: 30%)&lt;br /&gt;
** O. Saleh and M. Hefeeda, [http://www.cs.sfu.ca/~mhefeeda/Papers/icnp06.pdf Modeling and Caching of Peer-to-Peer Traffic], In Proc. of IEEE International Conference on Network Protocols (ICNP'06), pp. 249--258, Santa Barbara, CA, November 2006.  Slides [http://www.cs.sfu.ca/~mhefeeda/Talks/icnp06.ppt ppt]  [http://www.cs.sfu.ca/~mhefeeda/Talks/icnp06.pdf pdf].   (Acceptance: 14%)&lt;br /&gt;
** C. Hsu and M. Hefeeda, [http://www.cs.sfu.ca/~mhefeeda/Papers/iccta06.pdf Rate-Distortion Models for FGS-encoded Video Sequences], In Proc. of IEEE International Conference on Computer Theory and Applications (ICCTA’06), pp. 334--337, Alexandria, Egypt, September 2006.&lt;br /&gt;
** Y. Tu, M. Hefeeda, Y. Xia, and S. Prabhakar, [http://www.cs.sfu.ca/~mhefeeda/Papers/dexa05.pdf Control-based Quality Adaptation in Data Stream Management Systems], In Proc. of  16th International Conference on Database and Expert Systems Applications (DEXA'05), Copenhagen, Denmark, August 2005. Published in Springer-Verlag  Lecture Notes in Computer Science,  LNCS 3588, pp. 746--755,  September 2005.&lt;br /&gt;
** M. Hefeeda,  A. Habib, B. Botev, D. Xu, and B. Bhargava,  [http://www.cs.sfu.ca/~mhefeeda/Papers/mm03.pdf PROMISE:  Peer-to-Peer Media Streaming  Using CollectCast],  In Proc. of  ACM Multimedia 2003, pages 45--54, Berkeley, CA,  November 2003. Slides [http://www.cs.sfu.ca/~mhefeeda/Talks/mm03.ppt ppt] [http://www.cs.sfu.ca/~mhefeeda/Talks/mm03.pdf pdf].   (Acceptance: 17%)&lt;br /&gt;
** M. Hefeeda, [http://www.cs.sfu.ca/~mhefeeda/Papers/mm03-doctoral.pdf A Framework for Cost-Effective Peer-to-Peer Content Distribution],   In Proc. of  ACM Multimedia 2003,  Doctoral Symposium, pages 642--643, Berkeley, CA,  November 2003. Slides [http://www.cs.sfu.ca/~mhefeeda/Talks/mm03_doctoral.ppt ppt] [http://www.cs.sfu.ca/~mhefeeda/Talks/mm03_doctoral.pdf pdf].&lt;br /&gt;
** M. Hefeeda,  A. Habib, D. Xu,  and B. Bhargava,  [http://www.cs.sfu.ca/~mhefeeda/Papers/sigcomm03Poster.pdf CollectCast: A Tomography-Based Network Service for Peer-to-Peer Streaming],  In ACM SIGCOMM'03 Poster Session, Karlsruhe, Germany, August 2003.  [http://www.cs.sfu.ca/~mhefeeda/Papers/sigcomm03Abstract.pdf Abstract] [http://www.cs.sfu.ca/~mhefeeda/Papers/sigcomm03Poster.pdf pdf] [http://www.cs.sfu.ca/~mhefeeda/Papers/sigcomm03Poster.ppt ppt].  (Acceptance: 29%)&lt;br /&gt;
** A. Habib, M. Hefeeda, and B. Bhargava,  [http://www.cs.sfu.ca/~mhefeeda/Papers/ndss03.pdf Detecting Service Violations and DoS Attacks],  In  Proc. of Network and Distributed Systems Security Symposium  (NDSS'03), pages 177--189, San Diego, CA,  February 2003. (Acceptance: 21%)&lt;br /&gt;
** D. Xu, M. Hefeeda, S. Hambrusch, and B. Bhargava,  [http://www.cs.sfu.ca/~mhefeeda/Papers/icdcs02.pdf On Peer-to-Peer Media Streaming],  In Proc. of  IEEE  International Conference on Distributed Computing Systems (ICDCS'02), pages 363--371, Vienna, Austria, July 2002.  (Acceptance: 18%)&lt;br /&gt;
** M. Hefeeda and  B. Bhargava,  [http://www.cs.sfu.ca/~mhefeeda/Papers/ftdcs03.pdf On-Demand  Media Streaming  Over  the Internet],  In Proc. of  9th IEEE  Workshop on  Future Trends of Distributed Computing Systems (FTDCS'03),  pages 279--285, San Juan, Puerto Rico, May,  2003.   Slides [http://www.cs.sfu.ca/~mhefeeda/Talks/ftdcs03.ppt ppt] [http://www.cs.sfu.ca/~mhefeeda/Talks/ftdcs03.pdf pdf]. &lt;br /&gt;
** Y. Lu , B. Bhargava, and M. Hefeeda,   [http://www.cs.sfu.ca/~mhefeeda/Papers/hhn.pdf An Architecture for Secure Wireless  Networking],  In Proc. of Workshop on Reliable and Secure Applications in  Mobile Environment, New Orleans, October 2001.&lt;br /&gt;
** R. A. Ammar, M. Hefeeda, H. Sholl, D. Smarkusky, and B. MacKay,  [http://www.cs.sfu.ca/~mhefeeda/Papers/pdcs00.pdf Two-Moment Analysis of a Computation's Performance], International Conf. on Parallel and Distributed Computing and Systems (PDCS'2000), Las Vegas, August 2000.&lt;br /&gt;
** R. A. Ammar, T. A. Fergany, A. El-Desouky, and M. Hefeeda, [http://www.cs.sfu.ca/~mhefeeda/Papers/pdcs96.pdf Heuristic Scheduling Algorithms to Access the Critical Section in Shared-Memory Environment], International Conf. on Parallel and Distributed Computing and Systems (PDCS'96), Dijon, France, Sept. 1996.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* '''Technical Reports and Manuscripts Under Review'''&lt;br /&gt;
** C. Hsu and M. Hefeeda, [http://www.cs.sfu.ca/~mhefeeda/Papers/tr2007_13.pdf Optimal Coding of Multi-layer and Multi-version Video Streams], Technical Report TR 2007-13, School of Computing Science, Simon Fraser University, May 2007. Accepted in IWQoS'07.  [http://www.cs.sfu.ca/~mhefeeda/Papers/iwqos07.pdf Conference Version].&lt;br /&gt;
** M. Hefeeda and H. Ahmadi, [http://www.cs.sfu.ca/~mhefeeda/Papers/tr2007_10.pdf Network Connectivity under Probabilistic Communication Models in Sensor Networks], Technical Report TR 2007-10, School of Computing Science, Simon Fraser University, April 2007. Accepted in MASS'07. [http://www.cs.sfu.ca/~mhefeeda/Papers/mass07.pdf Conference Version].&lt;br /&gt;
** Mohamed Hefeeda, [http://www.cs.sfu.ca/~mhefeeda/Papers/tr2007_08.pdf Forest Fire Modeling and Early Detection using Wireless Sensor Networks], Technical Report TR 2007-08, School of Computing Science, Simon Fraser University, Updated August 2007. Accepted in MASS-GHS'07.  [http://www.cs.sfu.ca/~mhefeeda/Papers/mass-ghs07.pdf Conference Version].&lt;br /&gt;
** M. Hefeeda and H. Ahmadi, [http://www.cs.sfu.ca/~mhefeeda/Papers/tr2006_21.pdf Probabilistic Coverage in Wireless Sensor Networks], Technical Report TR 2006-21, School of Computing Science, Simon Fraser University, Updated March  2007.  Accepted in ICNP'07. [http://www.cs.sfu.ca/~mhefeeda/Papers/icnp07.pdf Conference Version].&lt;br /&gt;
** C. Hsu and M. Hefeeda, [http://www.cs.sfu.ca/~mhefeeda/Papers/tr2007_02.pdf Partitioning of Multiple Fine-Grained Scalable Video Sequences Concurrently Streamed to Heterogeneous Clients], Technical Report TR 2007-02, School of Computing Science, Simon Fraser University, February 2007. Accepted in NOSSDAV'07. [http://www.cs.sfu.ca/~mhefeeda/Papers/nossdav07.pdf Conference Version].&lt;br /&gt;
** M. Hefeeda and M. Bagheri, [http://www.cs.sfu.ca/~mhefeeda/Papers/tr2006_22.pdf Efficient k-Coverage Algorithms for Wireless Sensor Networks], Technical Report TR 2006-22, School of Computing Science, Simon Fraser University, August 2006 (Updated January 2007). Accepted in INFOCOM Minisymposium 2007. [http://www.cs.sfu.ca/~mhefeeda/Papers/infocom07.pdf Conference Version].&lt;br /&gt;
** C. Hsu and M. Hefeeda, [http://www.cs.sfu.ca/~mhefeeda/Papers/tr2006_12.pdf On the Accuracy and Complexity of Rate-Distortion Models for FGS-encoded Video Sequences], Technical Report TR 2006-12, School of Computing Science, Simon Fraser University, May 2006. Accepted in ACM TOCCAP. [http://www.cs.sfu.ca/~mhefeeda/Papers/tomccap06.pdf Journal Version].&lt;br /&gt;
** C. Hsu and M. Hefeeda, [http://www.cs.sfu.ca/~mhefeeda/Papers/tr2006_20.pdf Optimal Bit Allocation for Fine-Grained Scalable Video Sequences in Distributed Streaming Environments] , Technical Report TR 2006-20, School of Computing Science, Simon Fraser University, July 2006. Accepted in MMCN'07. [http://www.cs.sfu.ca/~mhefeeda/Papers/mmcn07.pdf Conference Version].&lt;br /&gt;
** O. Saleh and M. Hefeeda, [http://www.cs.sfu.ca/~mhefeeda/Papers/tr2006_11.pdf Modeling and Caching of Peer-to-Peer Traffic]. Technical Report TR 2006-11, School of Computing Science, Simon Fraser University, May 2006. Accepted in ICNP'06.  [http://www.cs.sfu.ca/~mhefeeda/Papers/icnp06.pdf Conference Version].&lt;br /&gt;
** M. Hefeeda, [http://www.cs.sfu.ca/~mhefeeda/Papers/p2pSurvey.pdf Peer-to-Peer Systems: A Comprehensive Survey], Work in progress, September 2004.&lt;br /&gt;
** M. Hefeeda,  P. Afeche, B. Bhargava,   [http://www.cs.sfu.ca/~mhefeeda/Papers/p2pecon2.pdf Economics of a Collaborative  Peer-to-Peer  Infrastructure for Content  Distribution],  CS-TR 03-015, Purdue University, May 2003.&lt;br /&gt;
** M. Hefeeda,  A. Habib, B. Bhargava,  [http://www.cs.sfu.ca/~mhefeeda/Papers/p2pecon1.pdf Cost-Profit Analysis of a Peer-to-Peer  Media  Streaming Architecture], CERIAS TR 2002-37, Purdue University, October 2002.&lt;br /&gt;
** M. Hefeeda, B. Bhargava, [http://www.cs.sfu.ca/~mhefeeda/Papers/OnMobileCodeSecurity.pdf On Mobile Code Security], CERIAS TR 2001-46,  Purdue University, October 2001.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* '''Patent'''&lt;br /&gt;
** C. Schuba, M. Hefeeda, J. Goldschmidt, M. Speer, Discovering Services Supported by Flow Enforcement Devices Through Subgraph Matching, US Patent Pending, Filed May 2005.&lt;/div&gt;</summary>
		<author><name>Kianoosh</name></author>
	</entry>
</feed>