<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en-CA">
	<id>https://nmsl.cs.sfu.ca/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Nitin.chiluka</id>
	<title>NMSL - User contributions [en-ca]</title>
	<link rel="self" type="application/atom+xml" href="https://nmsl.cs.sfu.ca/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Nitin.chiluka"/>
	<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php/Special:Contributions/Nitin.chiluka"/>
	<updated>2026-04-07T08:13:15Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.35.1</generator>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=pCDN:Release&amp;diff=1952</id>
		<title>pCDN:Release</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=pCDN:Release&amp;diff=1952"/>
		<updated>2008-05-06T16:20:27Z</updated>

		<summary type="html">&lt;p&gt;Nitin.chiluka: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We list the features, installers, and instructions for each release on this page. For the complete list of features for Version 1, please see [[pCDN:Feature | here]].&lt;br /&gt;
&lt;br /&gt;
= Version 1 =&lt;br /&gt;
== Features ==&lt;br /&gt;
* '''Reliable Server(s)''': &lt;br /&gt;
** ''Recovery from server crashes'': We preserve the online client state across pCDN server restarts. This means that the pCDN server can be restarted at any time without dropping the connected clients. Administrators may restart the pCDN server for several reasons, such as power outages, hardware upgrades, or operating system updates. This new feature minimizes service downtime without deploying additional hardware.&lt;br /&gt;
** ''Providing online backup servers'': We support one-to-many replication for an even more robust system. CBC may deploy one or more secondary servers, which take over the responsibilities of the primary server when failures are detected. We support hot switchover: all secondary servers are always initialized and can become the next primary server immediately after a failure.&lt;br /&gt;
* '''Geo-Fencing''': We support city-level geo-fencing in North America and country-level geo-fencing in other regions. Administrators configure the geo-fencing database through a user-friendly GUI, which can be run remotely from any workstation.&lt;br /&gt;
* '''Unified Admin Tool''': We integrate the interfaces for server monitoring, content administration, and geo-fencing into a unified GUI. Administrators can run this user interface remotely and complete all their daily tasks in it.&lt;br /&gt;
* '''Optimized Sender-Receiver Matching''': We support geolocation-aware sender lookups, which leverage an existing Geo-IP database to search for the closest senders for individual receivers. Our algorithm:&lt;br /&gt;
** ''Enhances download speeds''.&lt;br /&gt;
** ''Reduces load on backbone network''.&lt;br /&gt;
* ''' Performance Tuning and Scalability Testing''': &lt;br /&gt;
** ''Revised protocol design and implementation'': We've refined both the protocol design and its implementation for a more responsive and robust content delivery system.&lt;br /&gt;
** ''Tested with large-scale clients and replicated servers'': We've rigorously tested our system with several hundred clients from all over the world. We've also validated our replicated server implementation with as many as eight servers.&lt;br /&gt;
* '''Software Management and Support''': We've set up web-based info-sharing and bug-tracking systems.&lt;br /&gt;
** ''Wiki system'' providing details of the pCDN system at [http://nsl.cs.sfu.ca/wiki/index.php/pCDN Wiki].&lt;br /&gt;
** ''Bug tracking and feature suggestion system'' at [http://nsl.cs.sfu.ca/bug/ Bugzilla].&lt;br /&gt;
&lt;br /&gt;
== Software ==&lt;br /&gt;
=== Candidate 1 ===&lt;br /&gt;
Please find the initial release candidates as follows.&lt;br /&gt;
* Server Installer [[media:pCDN_Server_V1_RC1.exe]] &lt;br /&gt;
* Client Installer [[media:pCDN_Client_V1_RC1.exe]]&lt;br /&gt;
=== Candidate 2 ===&lt;br /&gt;
Please find the second release candidate as follows.&lt;br /&gt;
* Server Installer [[media:pCDN_Server_V1_RC1.exe]] &lt;br /&gt;
* Client Installer [[media:pCDN_Client_V1_RC2.exe]]&lt;br /&gt;
== Instructions ==&lt;br /&gt;
The installation instructions for this release can be found [[pCDN:Installation | here]].&lt;/div&gt;</summary>
		<author><name>Nitin.chiluka</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=pCDN:Release&amp;diff=1951</id>
		<title>pCDN:Release</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=pCDN:Release&amp;diff=1951"/>
		<updated>2008-05-06T16:19:08Z</updated>

		<summary type="html">&lt;p&gt;Nitin.chiluka: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We list the features, installers, and instructions for each release on this page. For the complete list of features for Version 1, please see [[pCDN:Feature | here]].&lt;br /&gt;
&lt;br /&gt;
= Version 1 =&lt;br /&gt;
== Features ==&lt;br /&gt;
* '''Reliable Server(s)''': &lt;br /&gt;
** ''Recovery from server crashes'': We preserve the online client state across pCDN server restarts. This means that the pCDN server can be restarted at any time without dropping the connected clients. Administrators may restart the pCDN server for several reasons, such as power outages, hardware upgrades, or operating system updates. This new feature minimizes service downtime without deploying additional hardware.&lt;br /&gt;
** ''Providing online backup servers'': We support one-to-many replication for an even more robust system. CBC may deploy one or more secondary servers, which take over the responsibilities of the primary server when failures are detected. We support hot switchover: all secondary servers are always initialized and can become the next primary server immediately after a failure.&lt;br /&gt;
* '''Geo-Fencing''': We support city-level geo-fencing in North America and country-level geo-fencing in other regions. Administrators configure the geo-fencing database through a user-friendly GUI, which can be run remotely from any workstation.&lt;br /&gt;
* '''Unified Admin Tool''': We integrate the interfaces for server monitoring, content administration, and geo-fencing into a unified GUI. Administrators can run this user interface remotely and complete all their daily tasks in it.&lt;br /&gt;
* '''Optimized Sender-Receiver Matching''': We support geolocation-aware sender lookups, which leverage an existing Geo-IP database to search for the closest senders for individual receivers. Our algorithm:&lt;br /&gt;
** ''Enhances download speeds''.&lt;br /&gt;
** ''Reduces load on backbone network''.&lt;br /&gt;
* ''' Performance Tuning and Scalability Testing''': &lt;br /&gt;
** ''Revised protocol design and implementation'': We've refined both the protocol design and its implementation for a more responsive and robust content delivery system.&lt;br /&gt;
** ''Tested with large-scale clients and replicated servers'': We've rigorously tested our system with several hundred clients from all over the world. We've also validated our replicated server implementation with as many as eight servers.&lt;br /&gt;
* '''Software Management and Support''': We've set up web-based info-sharing and bug-tracking systems.&lt;br /&gt;
** ''Wiki system'' providing details of the pCDN system at [http://nsl.cs.sfu.ca/wiki/index.php/pCDN Wiki].&lt;br /&gt;
** ''Bug tracking and feature suggestion system'' at [http://nsl.cs.sfu.ca/bug/ Bugzilla].&lt;br /&gt;
&lt;br /&gt;
== Software ==&lt;br /&gt;
=== Candidate 1 ===&lt;br /&gt;
Please find the initial release candidates as follows.&lt;br /&gt;
* Server Installer [[media:pCDN_Server_V1_RC1.exe]] &lt;br /&gt;
* Client Installer [[media:pCDN_Client_V1_RC2.exe]]&lt;br /&gt;
== Instructions ==&lt;br /&gt;
The installation instructions for this release can be found [[pCDN:Installation | here]].&lt;/div&gt;</summary>
		<author><name>Nitin.chiluka</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=File:pCDN_Client_V1_RC2.exe&amp;diff=1950</id>
		<title>File:pCDN Client V1 RC2.exe</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=File:pCDN_Client_V1_RC2.exe&amp;diff=1950"/>
		<updated>2008-05-06T16:13:43Z</updated>

		<summary type="html">&lt;p&gt;Nitin.chiluka: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Nitin.chiluka</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=Private:pCDN:Peer_Matching&amp;diff=1938</id>
		<title>Private:pCDN:Peer Matching</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=Private:pCDN:Peer_Matching&amp;diff=1938"/>
		<updated>2008-04-29T16:53:00Z</updated>

		<summary type="html">&lt;p&gt;Nitin.chiluka: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== '''Proposed Algorithm Simulation'''==&lt;br /&gt;
=== '''Experimental Setup''' ===&lt;br /&gt;
* First, we obtain information about all the ASes in the world from Team Cymru's IP to ASNum Mapping Project using netcat. From this project, we get every AS's number and name along with the country it belongs to. We then filter this list to keep only Canadian (CA) ASes.&lt;br /&gt;
** http://www.team-cymru.org/?sec=8&amp;amp;opt=26 &lt;br /&gt;
* CAIDA provides a dataset on inter-AS relationships among all ASes. We filter it to derive the inter-AS relationships within Canada, using the CA ASes obtained in step 1. This yields 375 ASes and 643 links (53 p2p, 5 s2s, 585 c2p).&lt;br /&gt;
** http://as-rank.caida.org/data/2008/as-rel.20080414.a0.01000.txt &lt;br /&gt;
* Using these links and ASes, we generate a shortest policy path matrix, which contains the shortest policy-compliant path between any two ASes, using the algorithm discussed in [Mao et al.].&lt;br /&gt;
** http://www.eecs.umich.edu/~zmao/Papers/routescope.pdf &lt;br /&gt;
** To check that the computed distances are correct, we verified them against the Rogers BGP site and the Teleglobe looking glass:&lt;br /&gt;
*** https://supernoc.rogerstelecom.net/ops/ and &lt;br /&gt;
*** http://lg.as6453.net/bin/lg.cgi &lt;br /&gt;
* AT&amp;amp;T provides an IP range to AS number mapping. We use this dataset to extract only the Canadian IP ranges and their corresponding ASes. We generate 84,000 IPs belonging to these ASes: 10 from each IP range–AS mapping entry.&lt;br /&gt;
** http://www.research.att.com/~jiawang/as_traceroute/ &lt;br /&gt;
* Using an IP-Geo database, we identify the latitude and longitude of each of these peers.&lt;br /&gt;
** Maxmind IP-Geo database&lt;br /&gt;
* We test four algorithms (AS Order, Geo-location, Random, Network) on peer sets of varying sizes {100, 300, 500, 1000, 3000, 5000, 10000, 15000, 20000}, randomly picked from the 84,000 peers generated in step 4. We run all four algorithms on the same set of peers and record insertion, lookup, and deletion times as well as AS hops. We repeat this experiment 5 times.&lt;br /&gt;
** To test N peers, we take an array of 2N peers. The first N peers are used for inserting and deleting; the remaining N are used for searching. That is, we first insert N peers (1 to N), then look up the next N peers (N+1 to 2N), and finally delete the first N peers (1 to N). We record each phase's time separately.&lt;br /&gt;
* The simulation is implemented in Java, and the experiments were performed on a machine with a 2 GHz CPU and 2 GB of RAM.&lt;br /&gt;
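The insert/lookup/delete loop above can be sketched as follows (an illustrative Python version, not the actual Java simulator; the AS-keyed peer map and the three timed phases follow the description above):&lt;br /&gt;

```python
import time
from collections import defaultdict

def benchmark(peers):
    """Time the insert / lookup / delete phases on an AS-keyed peer map.

    peers is a list of (ip, asn) pairs of length 2N: the first N are
    inserted and later deleted; the next N drive the lookups.
    Hypothetical sketch; names are not from the pCDN code base.
    """
    n = len(peers) // 2
    as_map = defaultdict(list)          # ASNum mapped to peers in that AS

    t0 = time.perf_counter()
    for ip, asn in peers[:n]:           # insert peers 1..N
        as_map[asn].append(ip)
    t1 = time.perf_counter()
    hits = [list(as_map.get(asn, [])) for _, asn in peers[n:]]  # lookup N+1..2N
    t2 = time.perf_counter()
    for ip, asn in peers[:n]:           # delete peers 1..N
        as_map[asn].remove(ip)
    t3 = time.perf_counter()

    return {"insert": t1 - t0, "lookup": t2 - t1,
            "delete": t3 - t2, "hits": sum(len(h) for h in hits)}
```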
&lt;br /&gt;
=== '''ISP Friendly Algorithm''' ===&lt;br /&gt;
Steps:&lt;br /&gt;
* We populate an IP to AS map (IPASMap) using the IP range and AS mapping in the AT&amp;amp;T dataset. This map is used to look up the AS for a given IP.&lt;br /&gt;
&lt;br /&gt;
* We then load the pre-computed shortest policy path matrix (ShortestPathMatrix), which contains the shortest distance between every pair of ASes.&lt;br /&gt;
&lt;br /&gt;
* The main data structure is a HashMap (ASMap) that maps each ASNum to the peer list (the list of peers in that AS).&lt;br /&gt;
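A minimal sketch of how these three structures combine for a sender lookup (illustrative Python, not the Java implementation; all names are hypothetical):&lt;br /&gt;

```python
def closest_senders(receiver_ip, ip_as_map, as_map, shortest_path_matrix, k=1):
    """Pick up to k candidate senders nearest to the receiver in AS hops.

    ip_as_map maps an IP to its ASNum (the IPASMap above); as_map maps an
    ASNum to its peer list (the ASMap above); shortest_path_matrix[(a, b)]
    holds the pre-computed policy distance between ASes a and b.
    Hypothetical sketch; not the pCDN code base.
    """
    receiver_as = ip_as_map[receiver_ip]
    candidates = []
    for asn, peer_list in as_map.items():
        # Peers in the receiver's own AS cost zero hops.
        dist = 0 if asn == receiver_as else shortest_path_matrix[(receiver_as, asn)]
        for peer in peer_list:
            candidates.append((dist, peer))
    candidates.sort(key=lambda c: c[0])   # fewest AS hops first
    return [peer for _, peer in candidates[:k]]
```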
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== '''Ideas''' ==&lt;br /&gt;
&lt;br /&gt;
* Verification of AS Shortest Paths&lt;br /&gt;
&lt;br /&gt;
* Experimental Setup&lt;br /&gt;
** # Peers: {100, 300, 500, 1000, 3000, 5000, 8000}&lt;br /&gt;
** # Senders: {2, 5, 10, 15, 20}&lt;br /&gt;
** # Sessions: 1000&lt;br /&gt;
** Algos: Random, Network, Geo, AS_Order(ISP-Friendly)&lt;br /&gt;
** Metrics: Min, Avg, Max, Output&lt;br /&gt;
** Output: Hops (Aggregate, P2P, C2P)&lt;br /&gt;
&lt;br /&gt;
* Search Efficiency:&lt;br /&gt;
** If multiple shortest paths exist, choose the one with the least &amp;quot;cost&amp;quot; (s2s=0, p2p=1, c2p=2)&lt;br /&gt;
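The tie-breaking rule above can be sketched as follows (illustrative Python; the cost table simply encodes the s2s=0, p2p=1, c2p=2 weights above):&lt;br /&gt;

```python
# Relationship costs for tie-breaking among equal-length shortest paths:
# sibling-to-sibling is free, peer-to-peer is cheap, and customer-to-provider
# is the most expensive kind of link to traverse.
REL_COST = {"s2s": 0, "p2p": 1, "c2p": 2}

def cheapest_path(paths):
    """Among several shortest paths (each a list of link types),
    pick the one whose summed relationship cost is lowest."""
    return min(paths, key=lambda path: sum(REL_COST[link] for link in path))
```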
&lt;br /&gt;
* Check practicality&lt;br /&gt;
** What kind of data we need initially and periodically&lt;br /&gt;
** How often we need them&lt;br /&gt;
** We implement the algorithm to determine AS relationships&lt;br /&gt;
&lt;br /&gt;
* Large graphs&lt;br /&gt;
** Can we get the US &amp;amp; CA graph?&lt;br /&gt;
** Can we optimize this matrix?&lt;br /&gt;
&lt;br /&gt;
* Usage of ISP + Network / Geolocation Algorithms&lt;br /&gt;
** We reduce the number of IP hops&lt;br /&gt;
&lt;br /&gt;
* Diameter of Canada's AS graph: 13 AS Hops&lt;br /&gt;
&lt;br /&gt;
== Important Dates ==&lt;br /&gt;
*Conferences&lt;br /&gt;
** May 2: SIGCOMM&lt;br /&gt;
** Aug 9: INFOCOM&lt;br /&gt;
&lt;br /&gt;
== '''References and Links''' ==&lt;br /&gt;
&lt;br /&gt;
* IP to ASNum Mapping project from Team Cymru&lt;br /&gt;
** http://www.team-cymru.org/?sec=8&amp;amp;opt=26&lt;br /&gt;
* CAIDA's Inter-AS relationships&lt;br /&gt;
** http://as-rank.caida.org/data/2008/as-rel.20080414.a0.01000.txt&lt;br /&gt;
* On AS-level Path Inference&lt;br /&gt;
** http://www.eecs.umich.edu/~zmao/Papers/routescope.pdf&lt;br /&gt;
* AT&amp;amp;T IP range and ASNum mapping&lt;br /&gt;
** http://www.research.att.com/~jiawang/as_traceroute/&lt;/div&gt;</summary>
		<author><name>Nitin.chiluka</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=Private:pCDN:Peer_Matching&amp;diff=1936</id>
		<title>Private:pCDN:Peer Matching</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=Private:pCDN:Peer_Matching&amp;diff=1936"/>
		<updated>2008-04-25T20:43:09Z</updated>

		<summary type="html">&lt;p&gt;Nitin.chiluka: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== '''Proposed Algorithm Simulation'''==&lt;br /&gt;
=== '''Experimental Setup''' ===&lt;br /&gt;
* First, we obtain information about all the ASes in the world from Team Cymru's IP to ASNum Mapping Project using netcat. From this project, we get every AS's number and name along with the country it belongs to. We then filter this list to keep only Canadian (CA) ASes.&lt;br /&gt;
** http://www.team-cymru.org/?sec=8&amp;amp;opt=26 &lt;br /&gt;
* CAIDA provides a dataset on inter-AS relationships among all ASes. We filter it to derive the inter-AS relationships within Canada, using the CA ASes obtained in step 1. This yields 375 ASes and 643 links (53 p2p, 5 s2s, 585 c2p).&lt;br /&gt;
** http://as-rank.caida.org/data/2008/as-rel.20080414.a0.01000.txt &lt;br /&gt;
* Using these links and ASes, we generate a shortest policy path matrix, which contains the shortest policy-compliant path between any two ASes, using the algorithm discussed in [Mao et al.].&lt;br /&gt;
** http://www.eecs.umich.edu/~zmao/Papers/routescope.pdf &lt;br /&gt;
** To check that the computed distances are correct, we verified them against the Rogers BGP site and the Teleglobe looking glass:&lt;br /&gt;
*** https://supernoc.rogerstelecom.net/ops/ and &lt;br /&gt;
*** http://lg.as6453.net/bin/lg.cgi &lt;br /&gt;
* AT&amp;amp;T provides an IP range to AS number mapping. We use this dataset to extract only the Canadian IP ranges and their corresponding ASes. We generate 84,000 IPs belonging to these ASes: 10 from each IP range–AS mapping entry.&lt;br /&gt;
** http://www.research.att.com/~jiawang/as_traceroute/ &lt;br /&gt;
* Using an IP-Geo database, we identify the latitude and longitude of each of these peers.&lt;br /&gt;
** Maxmind IP-Geo database&lt;br /&gt;
* We test four algorithms (AS Order, Geo-location, Random, Network) on peer sets of varying sizes {100, 300, 500, 1000, 3000, 5000, 10000, 15000, 20000}, randomly picked from the 84,000 peers generated in step 4. We run all four algorithms on the same set of peers and record insertion, lookup, and deletion times as well as AS hops. We repeat this experiment 5 times.&lt;br /&gt;
** To test N peers, we take an array of 2N peers. The first N peers are used for inserting and deleting; the remaining N are used for searching. That is, we first insert N peers (1 to N), then look up the next N peers (N+1 to 2N), and finally delete the first N peers (1 to N). We record each phase's time separately.&lt;br /&gt;
* The simulation is implemented in Java, and the experiments were performed on a machine with a 2 GHz CPU and 2 GB of RAM.&lt;br /&gt;
&lt;br /&gt;
=== '''ISP Friendly Algorithm''' ===&lt;br /&gt;
Steps:&lt;br /&gt;
* We populate an IP to AS map (IPASMap) using the IP range and AS mapping in the AT&amp;amp;T dataset. This map is used to look up the AS for a given IP.&lt;br /&gt;
&lt;br /&gt;
* We then load the pre-computed shortest policy path matrix (ShortestPathMatrix), which contains the shortest distance between every pair of ASes.&lt;br /&gt;
&lt;br /&gt;
* The main data structure is a HashMap (ASMap) that maps each ASNum to the peer list (the list of peers in that AS).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== '''Ideas''' ==&lt;br /&gt;
&lt;br /&gt;
* Verification of AS Shortest Paths&lt;br /&gt;
&lt;br /&gt;
* Experimental Setup&lt;br /&gt;
** # Peers: {100, 300, 500, 1000, 3000, 5000, 8000}&lt;br /&gt;
** # Senders: {2, 5, 10, 15, 20}&lt;br /&gt;
** # Sessions: 1000&lt;br /&gt;
** Algos: Random, Network, Geo, AS_Order(ISP-Friendly)&lt;br /&gt;
** Metrics: Min, Avg, Max, Output&lt;br /&gt;
** Output: Hops (Aggregate, P2P, C2P)&lt;br /&gt;
&lt;br /&gt;
* Search Efficiency:&lt;br /&gt;
** If multiple shortest paths exist, choose the one with the least &amp;quot;cost&amp;quot; (s2s=0, p2p=1, c2p=2)&lt;br /&gt;
&lt;br /&gt;
* Check practicality&lt;br /&gt;
** What kind of data we need initially and periodically&lt;br /&gt;
** How often we need them&lt;br /&gt;
** We implement the algorithm to determine AS relationships&lt;br /&gt;
&lt;br /&gt;
* Large graphs&lt;br /&gt;
** Can we get the US &amp;amp; CA graph?&lt;br /&gt;
** Can we optimize this matrix?&lt;br /&gt;
&lt;br /&gt;
* Usage of ISP + Network / Geolocation Algorithms&lt;br /&gt;
** We reduce the number of IP hops&lt;br /&gt;
&lt;br /&gt;
* Diameter of the graph&lt;br /&gt;
&lt;br /&gt;
== Important Dates ==&lt;br /&gt;
*Conferences&lt;br /&gt;
** May 2: SIGCOMM&lt;br /&gt;
** Aug 9: INFOCOM&lt;br /&gt;
&lt;br /&gt;
== '''References and Links''' ==&lt;br /&gt;
&lt;br /&gt;
* IP to ASNum Mapping project from Team Cymru&lt;br /&gt;
** http://www.team-cymru.org/?sec=8&amp;amp;opt=26&lt;br /&gt;
* CAIDA's Inter-AS relationships&lt;br /&gt;
** http://as-rank.caida.org/data/2008/as-rel.20080414.a0.01000.txt&lt;br /&gt;
* On AS-level Path Inference&lt;br /&gt;
** http://www.eecs.umich.edu/~zmao/Papers/routescope.pdf&lt;br /&gt;
* AT&amp;amp;T IP range and ASNum mapping&lt;br /&gt;
** http://www.research.att.com/~jiawang/as_traceroute/&lt;/div&gt;</summary>
		<author><name>Nitin.chiluka</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=Private:pCDN:Peer_Matching&amp;diff=1935</id>
		<title>Private:pCDN:Peer Matching</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=Private:pCDN:Peer_Matching&amp;diff=1935"/>
		<updated>2008-04-25T20:42:47Z</updated>

		<summary type="html">&lt;p&gt;Nitin.chiluka: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== '''Proposed Algorithm Simulation'''==&lt;br /&gt;
=== '''Experimental Setup''' ===&lt;br /&gt;
* First, we obtain information about all the ASes in the world from Team Cymru's IP to ASNum Mapping Project using netcat. From this project, we get every AS's number and name along with the country it belongs to. We then filter this list to keep only Canadian (CA) ASes.&lt;br /&gt;
** http://www.team-cymru.org/?sec=8&amp;amp;opt=26 &lt;br /&gt;
* CAIDA provides a dataset on inter-AS relationships among all ASes. We filter it to derive the inter-AS relationships within Canada, using the CA ASes obtained in step 1. This yields 375 ASes and 643 links (53 p2p, 5 s2s, 585 c2p).&lt;br /&gt;
** http://as-rank.caida.org/data/2008/as-rel.20080414.a0.01000.txt &lt;br /&gt;
* Using these links and ASes, we generate a shortest policy path matrix, which contains the shortest policy-compliant path between any two ASes, using the algorithm discussed in [Mao et al.].&lt;br /&gt;
** http://www.eecs.umich.edu/~zmao/Papers/routescope.pdf &lt;br /&gt;
** To check that the computed distances are correct, we verified them against the Rogers BGP site and the Teleglobe looking glass:&lt;br /&gt;
*** https://supernoc.rogerstelecom.net/ops/ and &lt;br /&gt;
*** http://lg.as6453.net/bin/lg.cgi &lt;br /&gt;
* AT&amp;amp;T provides an IP range to AS number mapping. We use this dataset to extract only the Canadian IP ranges and their corresponding ASes. We generate 84,000 IPs belonging to these ASes: 10 from each IP range–AS mapping entry.&lt;br /&gt;
** http://www.research.att.com/~jiawang/as_traceroute/ &lt;br /&gt;
* Using an IP-Geo database, we identify the latitude and longitude of each of these peers.&lt;br /&gt;
** Maxmind IP-Geo database&lt;br /&gt;
* We test four algorithms (AS Order, Geo-location, Random, Network) on peer sets of varying sizes {100, 300, 500, 1000, 3000, 5000, 10000, 15000, 20000}, randomly picked from the 84,000 peers generated in step 4. We run all four algorithms on the same set of peers and record insertion, lookup, and deletion times as well as AS hops. We repeat this experiment 5 times.&lt;br /&gt;
** To test N peers, we take an array of 2N peers. The first N peers are used for inserting and deleting; the remaining N are used for searching. That is, we first insert N peers (1 to N), then look up the next N peers (N+1 to 2N), and finally delete the first N peers (1 to N). We record each phase's time separately.&lt;br /&gt;
* The simulation is implemented in Java, and the experiments were performed on a machine with a 2 GHz CPU and 2 GB of RAM.&lt;br /&gt;
&lt;br /&gt;
=== '''ISP Friendly Algorithm''' ===&lt;br /&gt;
Steps:&lt;br /&gt;
* We populate an IP to AS map (IPASMap) using the IP range and AS mapping in the AT&amp;amp;T dataset. This map is used to look up the AS for a given IP.&lt;br /&gt;
&lt;br /&gt;
* We then load the pre-computed shortest policy path matrix (ShortestPathMatrix), which contains the shortest distance between every pair of ASes.&lt;br /&gt;
&lt;br /&gt;
* The main data structure is a HashMap (ASMap) that maps each ASNum to the peer list (the list of peers in that AS).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== '''Ideas''' ==&lt;br /&gt;
&lt;br /&gt;
* Verification of AS Shortest Paths&lt;br /&gt;
&lt;br /&gt;
* Experimental Setup&lt;br /&gt;
** # Peers: {100, 300, 500, 1000, 3000, 5000, 8000}&lt;br /&gt;
** # Senders: {2, 5, 10, 15, 20}&lt;br /&gt;
** # Sessions: 1000&lt;br /&gt;
** Algos: Random, Network, Geo, AS_Order(ISP-Friendly)&lt;br /&gt;
** Metrics: Min, Avg, Max, Output&lt;br /&gt;
** Output: Hops (Aggregate, P2P, C2P)&lt;br /&gt;
&lt;br /&gt;
* Search Efficiency:&lt;br /&gt;
** If multiple shortest paths exist, choose the one with the least &amp;quot;cost&amp;quot; (s2s=0, p2p=1, c2p=2)&lt;br /&gt;
&lt;br /&gt;
* Check practicality&lt;br /&gt;
** What kind of data we need initially and periodically&lt;br /&gt;
** How often we need them&lt;br /&gt;
** We implement the algorithm to determine AS relationships&lt;br /&gt;
&lt;br /&gt;
* Large graphs&lt;br /&gt;
** Can we get the US &amp;amp; CA graph?&lt;br /&gt;
** Can we optimize this matrix?&lt;br /&gt;
&lt;br /&gt;
* Usage of ISP + Network / Geolocation Algorithms&lt;br /&gt;
** We reduce the number of IP hops&lt;br /&gt;
&lt;br /&gt;
* Diameter of the graph&lt;br /&gt;
&lt;br /&gt;
== Important Dates ==&lt;br /&gt;
*Conferences&lt;br /&gt;
** May 2: SIGCOMM&lt;br /&gt;
** Aug 9: INFOCOM&lt;br /&gt;
&lt;br /&gt;
== '''References and Links''' ==&lt;br /&gt;
&lt;br /&gt;
* IP to ASNum Mapping project from Team Cymru&lt;br /&gt;
** http://www.team-cymru.org/?sec=8&amp;amp;opt=26&lt;br /&gt;
* CAIDA's Inter-AS relationships&lt;br /&gt;
** http://as-rank.caida.org/data/2008/as-rel.20080414.a0.01000.txt&lt;br /&gt;
* On AS-level Path Inference&lt;br /&gt;
** http://www.eecs.umich.edu/~zmao/Papers/routescope.pdf&lt;br /&gt;
* AT&amp;amp;T IP range and ASNum mapping&lt;br /&gt;
** http://www.research.att.com/~jiawang/as_traceroute/&lt;/div&gt;</summary>
		<author><name>Nitin.chiluka</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=Private:pCDN:Peer_Matching&amp;diff=1934</id>
		<title>Private:pCDN:Peer Matching</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=Private:pCDN:Peer_Matching&amp;diff=1934"/>
		<updated>2008-04-25T20:33:36Z</updated>

		<summary type="html">&lt;p&gt;Nitin.chiluka: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== '''Proposed Algorithm Simulation'''==&lt;br /&gt;
=== '''Experimental Setup''' ===&lt;br /&gt;
* First, we obtain information about all the ASes in the world from Team Cymru's IP to ASNum Mapping Project using netcat. From this project, we get every AS's number and name along with the country it belongs to. We then filter this list to keep only Canadian (CA) ASes.&lt;br /&gt;
** http://www.team-cymru.org/?sec=8&amp;amp;opt=26 &lt;br /&gt;
* CAIDA provides a dataset on inter-AS relationships among all ASes. We filter it to derive the inter-AS relationships within Canada, using the CA ASes obtained in step 1. This yields 375 ASes and 643 links (53 p2p, 5 s2s, 585 c2p).&lt;br /&gt;
** http://as-rank.caida.org/data/2008/as-rel.20080414.a0.01000.txt &lt;br /&gt;
* Using these links and ASes, we generate a shortest policy path matrix, which contains the shortest policy-compliant path between any two ASes, using the algorithm discussed in [Mao et al.].&lt;br /&gt;
** http://www.eecs.umich.edu/~zmao/Papers/routescope.pdf &lt;br /&gt;
** To check that the computed distances are correct, we verified them against the Rogers BGP site and the Teleglobe looking glass:&lt;br /&gt;
*** https://supernoc.rogerstelecom.net/ops/ and &lt;br /&gt;
*** http://lg.as6453.net/bin/lg.cgi &lt;br /&gt;
* AT&amp;amp;T provides an IP range to AS number mapping. We use this dataset to extract only the Canadian IP ranges and their corresponding ASes. We generate 84,000 IPs belonging to these ASes: 10 from each IP range–AS mapping entry.&lt;br /&gt;
** http://www.research.att.com/~jiawang/as_traceroute/ &lt;br /&gt;
* Using an IP-Geo database, we identify the latitude and longitude of each of these peers.&lt;br /&gt;
** Maxmind IP-Geo database&lt;br /&gt;
* We test four algorithms (AS Order, Geo-location, Random, Network) on peer sets of varying sizes {100, 300, 500, 1000, 3000, 5000, 10000, 15000, 20000}, randomly picked from the 84,000 peers generated in step 4. We run all four algorithms on the same set of peers and record insertion, lookup, and deletion times as well as AS hops. We repeat this experiment 5 times.&lt;br /&gt;
** To test N peers, we take an array of 2N peers. The first N peers are used for inserting and deleting; the remaining N are used for searching. That is, we first insert N peers (1 to N), then look up the next N peers (N+1 to 2N), and finally delete the first N peers (1 to N). We record each phase's time separately.&lt;br /&gt;
* The simulation is implemented in Java, and the experiments were performed on a machine with a 2 GHz CPU and 2 GB of RAM.&lt;br /&gt;
&lt;br /&gt;
=== '''ISP Friendly Algorithm''' ===&lt;br /&gt;
Steps:&lt;br /&gt;
* We populate an IP to AS map (IPASMap) using the IP range and AS mapping in the AT&amp;amp;T dataset. This map is used to look up the AS for a given IP.&lt;br /&gt;
&lt;br /&gt;
* We then load the pre-computed shortest policy path matrix (ShortestPathMatrix), which contains the shortest distance between every pair of ASes.&lt;br /&gt;
&lt;br /&gt;
* The main data structure is a HashMap (ASMap) that maps each ASNum to the peer list (the list of peers in that AS).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Ideas ==&lt;br /&gt;
&lt;br /&gt;
* Verification of AS Shortest Paths&lt;br /&gt;
&lt;br /&gt;
* Experimental Setup&lt;br /&gt;
** # Peers: {100, 300, 500, 1000, 3000, 5000, 8000}&lt;br /&gt;
** # Senders: {2, 5, 10, 15, 20}&lt;br /&gt;
** # Sessions: 1000&lt;br /&gt;
** Algos: Random, Network, Geo, AS_Order(ISP-Friendly)&lt;br /&gt;
** Metrics: Min, Avg, Max, Output&lt;br /&gt;
** Output: Hops (Aggregate, P2P, C2P)&lt;br /&gt;
&lt;br /&gt;
* Search Efficiency:&lt;br /&gt;
** If multiple shortest paths exist, choose the one with the least &amp;quot;cost&amp;quot; (s2s=0, p2p=1, c2p=2)&lt;br /&gt;
&lt;br /&gt;
* Check practicality&lt;br /&gt;
** What kind of data we need initially and periodically&lt;br /&gt;
** How often we need them&lt;br /&gt;
** We implement the algorithm to determine AS relationships&lt;br /&gt;
&lt;br /&gt;
* Large graphs&lt;br /&gt;
** Can we get the US &amp;amp; CA graph?&lt;br /&gt;
** Can we optimize this matrix?&lt;br /&gt;
&lt;br /&gt;
* Usage of ISP + Network / Geolocation Algorithms&lt;br /&gt;
** We reduce the number of IP hops&lt;br /&gt;
&lt;br /&gt;
* Diameter of the graph&lt;br /&gt;
&lt;br /&gt;
== References and Links ==&lt;br /&gt;
&lt;br /&gt;
* IP to ASNum Mapping project from Team Cymru&lt;br /&gt;
** http://www.team-cymru.org/?sec=8&amp;amp;opt=26&lt;br /&gt;
* CAIDA's Inter-AS relationships&lt;br /&gt;
** http://as-rank.caida.org/data/2008/as-rel.20080414.a0.01000.txt&lt;br /&gt;
* On AS-level Path Inference&lt;br /&gt;
** http://www.eecs.umich.edu/~zmao/Papers/routescope.pdf&lt;br /&gt;
* AT&amp;amp;T IP range and ASNum mapping&lt;br /&gt;
** http://www.research.att.com/~jiawang/as_traceroute/&lt;/div&gt;</summary>
		<author><name>Nitin.chiluka</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=Private:pCDN:Peer_Matching&amp;diff=1933</id>
		<title>Private:pCDN:Peer Matching</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=Private:pCDN:Peer_Matching&amp;diff=1933"/>
		<updated>2008-04-25T20:33:03Z</updated>

		<summary type="html">&lt;p&gt;Nitin.chiluka: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Proposed Algorithm Simulation==&lt;br /&gt;
=== Experimental Setup ===&lt;br /&gt;
* First, we obtain information about all ASes in the world from Team Cymru's IP to ASNum Mapping Project using netcat. This gives us every AS number and name, along with the country it belongs to. We filter this list to keep only Canadian (CA) ASes.&lt;br /&gt;
**	http://www.team-cymru.org/?sec=8&amp;amp;opt=26 &lt;br /&gt;
* CAIDA provides a dataset of inter-AS relationships among all ASes. We filter it, using only the CA ASes from step 1, to derive the inter-AS relationships within Canada. This yields 375 ASes and 643 links (53 p2p, 5 s2s, 585 c2p).&lt;br /&gt;
**	http://as-rank.caida.org/data/2008/as-rel.20080414.a0.01000.txt &lt;br /&gt;
* Using these links and ASes, we generate a shortest policy path matrix, which contains the shortest path between any two ASes, using the algorithm discussed in [Mao et al.].&lt;br /&gt;
**	http://www.eecs.umich.edu/~zmao/Papers/routescope.pdf &lt;br /&gt;
**	To check that the computed distances are correct, we verified them against the Rogers BGP site and the Teleglobe looking glass:&lt;br /&gt;
***	https://supernoc.rogerstelecom.net/ops/ and &lt;br /&gt;
***	http://lg.as6453.net/bin/lg.cgi &lt;br /&gt;
* AT&amp;amp;T provides an IP range to AS number mapping. We use this dataset to extract only the Canadian IP ranges and their corresponding ASes. We generate 84,000 IPs belonging to these ASes: 10 from each IP range to AS mapping.&lt;br /&gt;
**	http://www.research.att.com/~jiawang/as_traceroute/ &lt;br /&gt;
* Using the MaxMind IP-Geo database, we identify the latitude and longitude of each of these peers.&lt;br /&gt;
**	MaxMind IP-Geo database&lt;br /&gt;
* We test 4 algorithms - AS Order, Geo-location, Random, Network - on peer sets of varying sizes {100, 300, 500, 1000, 3000, 5000, 10000, 15000, 20000}, randomly picked from the 84,000 peers generated in step 4. We run all 4 algorithms on the same set of peers and record insertion, lookup, and deletion times, as well as AS hops. We repeat this experiment 5 times.&lt;br /&gt;
**	To test N peers, we take an array of 2N peers. The first N peers are used for insertion and deletion; the remaining N are used for lookups. That is, we first insert N peers (1 to N), then look up the next N peers (N+1 to 2N), and then delete the first N peers (1 to N). We record each individual time.&lt;br /&gt;
* The simulation is implemented in Java, and the experiments were performed on a machine with a 2 GHz CPU and 2 GB of RAM.&lt;br /&gt;
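The timing methodology above can be sketched as follows. This is an illustrative Python stand-in (the simulation itself is in Java); the as_map table and the insert/lookup/delete helpers are hypothetical simplifications of the simulator's data structures.&lt;br /&gt;

```python
import random
import time

# Hypothetical stand-in for the simulator's structures: as_map holds, per AS
# number, the list of peers currently registered in that AS.
as_map = {}

def insert(peer, as_num):
    as_map.setdefault(as_num, []).append(peer)

def delete(peer, as_num):
    as_map[as_num].remove(peer)

def lookup(as_num):
    # return the peers currently registered under this AS
    return as_map.get(as_num, [])

def run_trial(n, num_ases=50):
    # Draw 2N peers: the first N are inserted and later deleted, while the
    # remaining N drive the lookups, as in the methodology above.
    peers = [(i, random.randrange(num_ases)) for i in range(2 * n)]

    t0 = time.time()
    for p, a in peers[:n]:          # insert peers 1..N
        insert(p, a)
    t_insert = time.time() - t0

    t0 = time.time()
    for p, a in peers[n:]:          # look up peers N+1..2N
        lookup(a)
    t_lookup = time.time() - t0

    t0 = time.time()
    for p, a in peers[:n]:          # delete peers 1..N
        delete(p, a)
    t_delete = time.time() - t0

    return t_insert, t_lookup, t_delete
```

Each phase is timed separately, matching the insert (1 to N), lookup (N+1 to 2N), delete (1 to N) order described above.&lt;br /&gt;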
&lt;br /&gt;
=== '''ISP-Friendly Algorithm''' ===&lt;br /&gt;
Steps:&lt;br /&gt;
*	We populate an IP to AS map (IPASMap) using the IP range and AS mapping from the AT&amp;amp;T dataset. This map is used to look up the AS for a given IP.&lt;br /&gt;
&lt;br /&gt;
*	We then load the pre-computed shortest policy path matrix (ShortestPathMatrix), which contains the shortest distance between every pair of ASes.&lt;br /&gt;
&lt;br /&gt;
*	The main data structure is a HashMap (ASMap) keyed by ASNum, whose value is the peerlist (the list of peers in that AS).&lt;br /&gt;
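As a rough illustration of this design, the following Python sketch keys a map by AS number and sorts candidate ASes by policy distance at lookup time. The IP_TO_AS and DIST tables are tiny hypothetical stand-ins for IPASMap and ShortestPathMatrix, and the AS numbers, distances, and IPs shown are made up for illustration.&lt;br /&gt;

```python
# Python sketch of the ASMap design (the real implementation is a Java
# HashMap). IP_TO_AS stands in for IPASMap and DIST for ShortestPathMatrix;
# the AS numbers, distances, and IPs below are made-up illustrative values.
IP_TO_AS = {"142.58.0.1": 271, "24.108.0.1": 6327}
DIST = {(271, 271): 0, (271, 6327): 2, (6327, 271): 2, (6327, 6327): 0}

as_map = {}  # ASNum mapped to the peerlist (peers in that AS)

def insert(ip):
    # find the AS this peer belongs to, creating its entry if needed
    as_num = IP_TO_AS[ip]
    as_map.setdefault(as_num, []).append(ip)

def delete(ip):
    # remove the peer from its AS's peerlist
    as_map[IP_TO_AS[ip]].remove(ip)

def lookup(ip, count):
    # order candidate ASes by policy distance from the requester's AS,
    # then take peers from the nearest ASes first
    as_num = IP_TO_AS[ip]
    ordered = sorted(as_map, key=lambda a: DIST[(as_num, a)])
    result = []
    for a in ordered:
        for peer in as_map[a]:
            if len(result) == count:
                return result
            result.append(peer)
    return result
```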
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Ideas ==&lt;br /&gt;
&lt;br /&gt;
* Verification of AS Shortest Paths&lt;br /&gt;
&lt;br /&gt;
* Experimental Setup&lt;br /&gt;
** # Peers: {100, 300, 500, 1000, 3000, 5000, 8000}&lt;br /&gt;
** # Senders: {2, 5, 10, 15, 20}&lt;br /&gt;
** # Sessions: 1000&lt;br /&gt;
** Algos: Random, Network, Geo, AS_Order(ISP-Friendly)&lt;br /&gt;
** Metrics: Min, Avg, Max, Output&lt;br /&gt;
** Output: Hops (Aggregate, P2P, C2P)&lt;br /&gt;
&lt;br /&gt;
* Search Efficiency:&lt;br /&gt;
** If multiple shortest paths exist, choose the one with the least &amp;quot;cost&amp;quot; (s2s=0, p2p=1, c2p=2)&lt;br /&gt;
&lt;br /&gt;
* Check practicality&lt;br /&gt;
** What data do we need initially and periodically?&lt;br /&gt;
** How often do we need it?&lt;br /&gt;
** We implement the algorithm to determine AS relationships&lt;br /&gt;
&lt;br /&gt;
* Large graphs&lt;br /&gt;
** Can we get the US &amp;amp; CA graph?&lt;br /&gt;
** Can we optimize this matrix?&lt;br /&gt;
&lt;br /&gt;
* Usage of ISP + Network / Geolocation Algorithms&lt;br /&gt;
** We reduce the number of IP hops&lt;br /&gt;
&lt;br /&gt;
* Diameter of the graph&lt;br /&gt;
&lt;br /&gt;
== References and Links ==&lt;br /&gt;
&lt;br /&gt;
* IP to ASNum Mapping project from Team Cymru&lt;br /&gt;
** http://www.team-cymru.org/?sec=8&amp;amp;opt=26&lt;br /&gt;
* CAIDA's Inter-AS relationships&lt;br /&gt;
** http://as-rank.caida.org/data/2008/as-rel.20080414.a0.01000.txt&lt;br /&gt;
* On AS-level Path Inference&lt;br /&gt;
** http://www.eecs.umich.edu/~zmao/Papers/routescope.pdf&lt;br /&gt;
* AT&amp;amp;T IP-Range and ASNum mapping&lt;br /&gt;
** http://www.research.att.com/~jiawang/as_traceroute/&lt;/div&gt;</summary>
		<author><name>Nitin.chiluka</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=Private:pCDN:Peer_Matching&amp;diff=1932</id>
		<title>Private:pCDN:Peer Matching</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=Private:pCDN:Peer_Matching&amp;diff=1932"/>
		<updated>2008-04-25T20:29:13Z</updated>

		<summary type="html">&lt;p&gt;Nitin.chiluka: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Proposed Algorithm Simulation==&lt;br /&gt;
* Experimental Setup:&lt;br /&gt;
* First, we obtain information about all ASes in the world from Team Cymru's IP to ASNum Mapping Project using netcat. This gives us every AS number and name, along with the country it belongs to. We filter this list to keep only Canadian (CA) ASes.&lt;br /&gt;
**	http://www.team-cymru.org/?sec=8&amp;amp;opt=26 &lt;br /&gt;
* CAIDA provides a dataset of inter-AS relationships among all ASes. We filter it, using only the CA ASes from step 1, to derive the inter-AS relationships within Canada. This yields 375 ASes and 643 links (53 p2p, 5 s2s, 585 c2p).&lt;br /&gt;
**	http://as-rank.caida.org/data/2008/as-rel.20080414.a0.01000.txt &lt;br /&gt;
* Using these links and ASes, we generate a shortest policy path matrix, which contains the shortest path between any two ASes, using the algorithm discussed in [Mao et al.].&lt;br /&gt;
**	http://www.eecs.umich.edu/~zmao/Papers/routescope.pdf &lt;br /&gt;
**	To check that the computed distances are correct, we verified them against the Rogers BGP site and the Teleglobe looking glass:&lt;br /&gt;
***	https://supernoc.rogerstelecom.net/ops/ and &lt;br /&gt;
***	http://lg.as6453.net/bin/lg.cgi &lt;br /&gt;
* AT&amp;amp;T provides an IP range to AS number mapping. We use this dataset to extract only the Canadian IP ranges and their corresponding ASes. We generate 84,000 IPs belonging to these ASes: 10 from each IP range to AS mapping.&lt;br /&gt;
**	http://www.research.att.com/~jiawang/as_traceroute/ &lt;br /&gt;
* Using the MaxMind IP-Geo database, we identify the latitude and longitude of each of these peers.&lt;br /&gt;
**	MaxMind IP-Geo database&lt;br /&gt;
* We test 4 algorithms - AS Order, Geo-location, Random, Network - on peer sets of varying sizes {100, 300, 500, 1000, 3000, 5000, 10000, 15000, 20000}, randomly picked from the 84,000 peers generated in step 4. We run all 4 algorithms on the same set of peers and record insertion, lookup, and deletion times, as well as AS hops. We repeat this experiment 5 times.&lt;br /&gt;
**	To test N peers, we take an array of 2N peers. The first N peers are used for insertion and deletion; the remaining N are used for lookups. That is, we first insert N peers (1 to N), then look up the next N peers (N+1 to 2N), and then delete the first N peers (1 to N). We record each individual time.&lt;br /&gt;
* The simulation is implemented in Java, and the experiments were performed on a machine with a 2 GHz CPU and 2 GB of RAM.&lt;br /&gt;
&lt;br /&gt;
'''ISP-Friendly Algorithm'''&lt;br /&gt;
Steps:&lt;br /&gt;
*	We populate an IP to AS map (IPASMap) using the IP range and AS mapping from the AT&amp;amp;T dataset. This map is used to look up the AS for a given IP.&lt;br /&gt;
&lt;br /&gt;
*	We then load the pre-computed shortest policy path matrix (ShortestPathMatrix), which contains the shortest distance between every pair of ASes.&lt;br /&gt;
&lt;br /&gt;
*	The main data structure is a HashMap (ASMap) keyed by ASNum, whose value is the peerlist (the list of peers in that AS).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Ideas ==&lt;br /&gt;
&lt;br /&gt;
* Verification of AS Shortest Paths&lt;br /&gt;
&lt;br /&gt;
* Experimental Setup&lt;br /&gt;
** # Peers: {100, 300, 500, 1000, 3000, 5000, 8000}&lt;br /&gt;
** # Senders: {2, 5, 10, 15, 20}&lt;br /&gt;
** # Sessions: 1000&lt;br /&gt;
** Algos: Random, Network, Geo, AS_Order(ISP-Friendly)&lt;br /&gt;
** Metrics: Min, Avg, Max, Output&lt;br /&gt;
** Output: Hops (Aggregate, P2P, C2P)&lt;br /&gt;
&lt;br /&gt;
* Search Efficiency:&lt;br /&gt;
** If multiple shortest paths exist, choose the one with the least &amp;quot;cost&amp;quot; (s2s=0, p2p=1, c2p=2)&lt;br /&gt;
&lt;br /&gt;
* Check practicality&lt;br /&gt;
** What data do we need initially and periodically?&lt;br /&gt;
** How often do we need it?&lt;br /&gt;
** We implement the algorithm to determine AS relationships&lt;br /&gt;
&lt;br /&gt;
* Large graphs&lt;br /&gt;
** Can we get the US &amp;amp; CA graph?&lt;br /&gt;
** Can we optimize this matrix?&lt;br /&gt;
&lt;br /&gt;
* Usage of ISP + Network / Geolocation Algorithms&lt;br /&gt;
** We reduce the number of IP hops&lt;br /&gt;
&lt;br /&gt;
* Diameter of the graph&lt;br /&gt;
&lt;br /&gt;
== References and Links ==&lt;br /&gt;
&lt;br /&gt;
* IP to ASNum Mapping project from Team Cymru&lt;br /&gt;
** http://www.team-cymru.org/?sec=8&amp;amp;opt=26&lt;br /&gt;
* CAIDA's Inter-AS relationships&lt;br /&gt;
** http://as-rank.caida.org/data/2008/as-rel.20080414.a0.01000.txt&lt;br /&gt;
* On AS-level Path Inference&lt;br /&gt;
** http://www.eecs.umich.edu/~zmao/Papers/routescope.pdf&lt;br /&gt;
* AT&amp;amp;T IP-Range and ASNum mapping&lt;br /&gt;
** http://www.research.att.com/~jiawang/as_traceroute/&lt;/div&gt;</summary>
		<author><name>Nitin.chiluka</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=Private:pCDN:Peer_Matching&amp;diff=1931</id>
		<title>Private:pCDN:Peer Matching</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=Private:pCDN:Peer_Matching&amp;diff=1931"/>
		<updated>2008-04-25T20:27:52Z</updated>

		<summary type="html">&lt;p&gt;Nitin.chiluka: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Proposed Algorithm Simulation==&lt;br /&gt;
* Experimental Setup:&lt;br /&gt;
* First, we obtain information about all ASes in the world from Team Cymru's IP to ASNum Mapping Project using netcat. This gives us every AS number and name, along with the country it belongs to. We filter this list to keep only Canadian (CA) ASes.&lt;br /&gt;
**	http://www.team-cymru.org/?sec=8&amp;amp;opt=26 &lt;br /&gt;
* CAIDA provides a dataset of inter-AS relationships among all ASes. We filter it, using only the CA ASes from step 1, to derive the inter-AS relationships within Canada. This yields 375 ASes and 643 links (53 p2p, 5 s2s, 585 c2p).&lt;br /&gt;
**	http://as-rank.caida.org/data/2008/as-rel.20080414.a0.01000.txt &lt;br /&gt;
* Using these links and ASes, we generate a shortest policy path matrix, which contains the shortest path between any two ASes, using the algorithm discussed in [Mao et al.].&lt;br /&gt;
**	http://www.eecs.umich.edu/~zmao/Papers/routescope.pdf &lt;br /&gt;
**	To check that the computed distances are correct, we verified them against the Rogers BGP site and the Teleglobe looking glass:&lt;br /&gt;
***	https://supernoc.rogerstelecom.net/ops/ and &lt;br /&gt;
***	http://lg.as6453.net/bin/lg.cgi &lt;br /&gt;
* AT&amp;amp;T provides an IP range to AS number mapping. We use this dataset to extract only the Canadian IP ranges and their corresponding ASes. We generate 84,000 IPs belonging to these ASes: 10 from each IP range to AS mapping.&lt;br /&gt;
**	http://www.research.att.com/~jiawang/as_traceroute/ &lt;br /&gt;
* Using the MaxMind IP-Geo database, we identify the latitude and longitude of each of these peers.&lt;br /&gt;
**	MaxMind IP-Geo database&lt;br /&gt;
* We test 4 algorithms - AS Order, Geo-location, Random, Network - on peer sets of varying sizes {100, 300, 500, 1000, 3000, 5000, 10000, 15000, 20000}, randomly picked from the 84,000 peers generated in step 4. We run all 4 algorithms on the same set of peers and record insertion, lookup, and deletion times, as well as AS hops. We repeat this experiment 5 times.&lt;br /&gt;
**	To test N peers, we take an array of 2N peers. The first N peers are used for insertion and deletion; the remaining N are used for lookups. That is, we first insert N peers (1 to N), then look up the next N peers (N+1 to 2N), and then delete the first N peers (1 to N). We record each individual time.&lt;br /&gt;
* The simulation is implemented in Java, and the experiments were performed on a machine with a 2 GHz CPU and 2 GB of RAM.&lt;br /&gt;
&lt;br /&gt;
'''ISP-Friendly Algorithm'''&lt;br /&gt;
Steps:&lt;br /&gt;
*	We populate an IP to AS map (IPASMap) using the IP range and AS mapping from the AT&amp;amp;T dataset. This map is used to look up the AS for a given IP.&lt;br /&gt;
&lt;br /&gt;
*	We then load the pre-computed shortest policy path matrix (ShortestPathMatrix), which contains the shortest distance between every pair of ASes.&lt;br /&gt;
&lt;br /&gt;
*	The main data structure is a HashMap (ASMap) keyed by ASNum, whose value is the peerlist (the list of peers in that AS).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
i.&lt;br /&gt;
Insert(Peer peer){&lt;br /&gt;
    // obtain the AS this peer belongs to&lt;br /&gt;
    ASNum = IPASMap.getASNum(peer.IP);&lt;br /&gt;
&lt;br /&gt;
    // create the AS entry if it does not already exist&lt;br /&gt;
    if (ASNum does not exist in ASMap)&lt;br /&gt;
        ASMap.insert(ASNum);&lt;br /&gt;
&lt;br /&gt;
    // now add this peer to that AS&lt;br /&gt;
    ASMap[ASNum].peerlist.add(peer);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
ii.&lt;br /&gt;
Delete(Peer peer){&lt;br /&gt;
//obtain the AS this peer belongs to&lt;br /&gt;
ASNum = IPASMap.getASNum(peer.IP);&lt;br /&gt;
&lt;br /&gt;
//remove the peer from that AS&lt;br /&gt;
ASMap[ASNum].peerlist.remove(peer);&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
iii.&lt;br /&gt;
Lookup(Peer peer){&lt;br /&gt;
    declare RetPeerlist;&lt;br /&gt;
    declare retPeerCount;   // number of peers to return&lt;br /&gt;
&lt;br /&gt;
    // obtain the AS this peer belongs to&lt;br /&gt;
    ASNum = IPASMap.getASNum(peer.IP);&lt;br /&gt;
&lt;br /&gt;
    // get a list of all ASes currently in the data structure&lt;br /&gt;
    ASlist = ASMap.getASList();&lt;br /&gt;
&lt;br /&gt;
    // sort these ASes by their distance from ASNum, using the matrix&lt;br /&gt;
    SortedASlist = ASlist.sort(ASNum, ShortestPathMatrix[][]);&lt;br /&gt;
&lt;br /&gt;
    count = 0;&lt;br /&gt;
    ashops = 0;   // running count of AS hops&lt;br /&gt;
&lt;br /&gt;
    for (i = 0; i &amp;lt; SortedASlist.size; i++){&lt;br /&gt;
        j = 0;&lt;br /&gt;
        while (count &amp;lt; retPeerCount &amp;amp;&amp;amp; j &amp;lt; SortedASlist[i].peerlist.size){&lt;br /&gt;
            Peer tmp = SortedASlist[i].peerlist[j++];&lt;br /&gt;
            RetPeerlist.add(tmp);&lt;br /&gt;
&lt;br /&gt;
            // accumulate the AS hops to the AS this peer came from&lt;br /&gt;
            tmpASNum = SortedASlist[i];&lt;br /&gt;
            ashops += ShortestPathMatrix[ASNum][tmpASNum];&lt;br /&gt;
            count++;&lt;br /&gt;
        }&lt;br /&gt;
        if (count &amp;gt;= retPeerCount)&lt;br /&gt;
            break;&lt;br /&gt;
    }&lt;br /&gt;
    return RetPeerlist;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Ideas ==&lt;br /&gt;
&lt;br /&gt;
* Verification of AS Shortest Paths&lt;br /&gt;
&lt;br /&gt;
* Experimental Setup&lt;br /&gt;
** # Peers: {100, 300, 500, 1000, 3000, 5000, 8000}&lt;br /&gt;
** # Senders: {2, 5, 10, 15, 20}&lt;br /&gt;
** # Sessions: 1000&lt;br /&gt;
** Algos: Random, Network, Geo, AS_Order(ISP-Friendly)&lt;br /&gt;
** Metrics: Min, Avg, Max, Output&lt;br /&gt;
** Output: Hops (Aggregate, P2P, C2P)&lt;br /&gt;
&lt;br /&gt;
* Search Efficiency:&lt;br /&gt;
** If multiple shortest paths exist, choose the one with the least &amp;quot;cost&amp;quot; (s2s=0, p2p=1, c2p=2)&lt;br /&gt;
&lt;br /&gt;
* Check practicality&lt;br /&gt;
** What data do we need initially and periodically?&lt;br /&gt;
** How often do we need it?&lt;br /&gt;
** We implement the algorithm to determine AS relationships&lt;br /&gt;
&lt;br /&gt;
* Large graphs&lt;br /&gt;
** Can we get the US &amp;amp; CA graph?&lt;br /&gt;
** Can we optimize this matrix?&lt;br /&gt;
&lt;br /&gt;
* Usage of ISP + Network / Geolocation Algorithms&lt;br /&gt;
** We reduce the number of IP hops&lt;br /&gt;
&lt;br /&gt;
* Diameter of the graph&lt;br /&gt;
&lt;br /&gt;
== References and Links ==&lt;br /&gt;
&lt;br /&gt;
* IP to ASNum Mapping project from Team Cymru&lt;br /&gt;
** http://www.team-cymru.org/?sec=8&amp;amp;opt=26&lt;br /&gt;
* CAIDA's Inter-AS relationships&lt;br /&gt;
** http://as-rank.caida.org/data/2008/as-rel.20080414.a0.01000.txt&lt;br /&gt;
* On AS-level Path Inference&lt;br /&gt;
** http://www.eecs.umich.edu/~zmao/Papers/routescope.pdf&lt;br /&gt;
* AT&amp;amp;T IP-Range and ASNum mapping&lt;br /&gt;
** http://www.research.att.com/~jiawang/as_traceroute/&lt;/div&gt;</summary>
		<author><name>Nitin.chiluka</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=Private:pCDN:Peer_Matching&amp;diff=1930</id>
		<title>Private:pCDN:Peer Matching</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=Private:pCDN:Peer_Matching&amp;diff=1930"/>
		<updated>2008-04-25T20:26:44Z</updated>

		<summary type="html">&lt;p&gt;Nitin.chiluka: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Proposed Algorithm Simulation==&lt;br /&gt;
* Experimental Setup:&lt;br /&gt;
* First, we obtain information about all ASes in the world from Team Cymru's IP to ASNum Mapping Project using netcat. This gives us every AS number and name, along with the country it belongs to. We filter this list to keep only Canadian (CA) ASes.&lt;br /&gt;
**	http://www.team-cymru.org/?sec=8&amp;amp;opt=26 &lt;br /&gt;
* CAIDA provides a dataset of inter-AS relationships among all ASes. We filter it, using only the CA ASes from step 1, to derive the inter-AS relationships within Canada. This yields 375 ASes and 643 links (53 p2p, 5 s2s, 585 c2p).&lt;br /&gt;
**	http://as-rank.caida.org/data/2008/as-rel.20080414.a0.01000.txt &lt;br /&gt;
* Using these links and ASes, we generate a shortest policy path matrix, which contains the shortest path between any two ASes, using the algorithm discussed in [Mao et al.].&lt;br /&gt;
**	http://www.eecs.umich.edu/~zmao/Papers/routescope.pdf &lt;br /&gt;
**	To check that the computed distances are correct, we verified them against the Rogers BGP site and the Teleglobe looking glass:&lt;br /&gt;
***	https://supernoc.rogerstelecom.net/ops/ and &lt;br /&gt;
***	http://lg.as6453.net/bin/lg.cgi &lt;br /&gt;
* AT&amp;amp;T provides an IP range to AS number mapping. We use this dataset to extract only the Canadian IP ranges and their corresponding ASes. We generate 84,000 IPs belonging to these ASes: 10 from each IP range to AS mapping.&lt;br /&gt;
**	http://www.research.att.com/~jiawang/as_traceroute/ &lt;br /&gt;
* Using the MaxMind IP-Geo database, we identify the latitude and longitude of each of these peers.&lt;br /&gt;
**	MaxMind IP-Geo database&lt;br /&gt;
* We test 4 algorithms - AS Order, Geo-location, Random, Network - on peer sets of varying sizes {100, 300, 500, 1000, 3000, 5000, 10000, 15000, 20000}, randomly picked from the 84,000 peers generated in step 4. We run all 4 algorithms on the same set of peers and record insertion, lookup, and deletion times, as well as AS hops. We repeat this experiment 5 times.&lt;br /&gt;
**	To test N peers, we take an array of 2N peers. The first N peers are used for insertion and deletion; the remaining N are used for lookups. That is, we first insert N peers (1 to N), then look up the next N peers (N+1 to 2N), and then delete the first N peers (1 to N). We record each individual time.&lt;br /&gt;
* The simulation is implemented in Java, and the experiments were performed on a machine with a 2 GHz CPU and 2 GB of RAM.&lt;br /&gt;
&lt;br /&gt;
'''ISP-Friendly Algorithm'''&lt;br /&gt;
Steps:&lt;br /&gt;
*	We populate an IP to AS map (IPASMap) using the IP range and AS mapping from the AT&amp;amp;T dataset. This map is used to look up the AS for a given IP.&lt;br /&gt;
&lt;br /&gt;
*	We then load the pre-computed shortest policy path matrix (ShortestPathMatrix), which contains the shortest distance between every pair of ASes.&lt;br /&gt;
&lt;br /&gt;
*	The main data structure is a HashMap (ASMap) keyed by ASNum, whose value is the peerlist (the list of peers in that AS).&lt;br /&gt;
&lt;br /&gt;
i.&lt;br /&gt;
Insert(Peer peer){&lt;br /&gt;
    // obtain the AS this peer belongs to&lt;br /&gt;
    ASNum = IPASMap.getASNum(peer.IP);&lt;br /&gt;
&lt;br /&gt;
    // create the AS entry if it does not already exist&lt;br /&gt;
    if (ASNum does not exist in ASMap)&lt;br /&gt;
        ASMap.insert(ASNum);&lt;br /&gt;
&lt;br /&gt;
    // now add this peer to that AS&lt;br /&gt;
    ASMap[ASNum].peerlist.add(peer);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
ii.&lt;br /&gt;
Delete(Peer peer){&lt;br /&gt;
//obtain the AS this peer belongs to&lt;br /&gt;
ASNum = IPASMap.getASNum(peer.IP);&lt;br /&gt;
&lt;br /&gt;
//remove the peer from that AS&lt;br /&gt;
ASMap[ASNum].peerlist.remove(peer);&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
iii.&lt;br /&gt;
Lookup(Peer peer){&lt;br /&gt;
    declare RetPeerlist;&lt;br /&gt;
    declare retPeerCount;   // number of peers to return&lt;br /&gt;
&lt;br /&gt;
    // obtain the AS this peer belongs to&lt;br /&gt;
    ASNum = IPASMap.getASNum(peer.IP);&lt;br /&gt;
&lt;br /&gt;
    // get a list of all ASes currently in the data structure&lt;br /&gt;
    ASlist = ASMap.getASList();&lt;br /&gt;
&lt;br /&gt;
    // sort these ASes by their distance from ASNum, using the matrix&lt;br /&gt;
    SortedASlist = ASlist.sort(ASNum, ShortestPathMatrix[][]);&lt;br /&gt;
&lt;br /&gt;
    count = 0;&lt;br /&gt;
    ashops = 0;   // running count of AS hops&lt;br /&gt;
&lt;br /&gt;
    for (i = 0; i &amp;lt; SortedASlist.size; i++){&lt;br /&gt;
        j = 0;&lt;br /&gt;
        while (count &amp;lt; retPeerCount &amp;amp;&amp;amp; j &amp;lt; SortedASlist[i].peerlist.size){&lt;br /&gt;
            Peer tmp = SortedASlist[i].peerlist[j++];&lt;br /&gt;
            RetPeerlist.add(tmp);&lt;br /&gt;
&lt;br /&gt;
            // accumulate the AS hops to the AS this peer came from&lt;br /&gt;
            tmpASNum = SortedASlist[i];&lt;br /&gt;
            ashops += ShortestPathMatrix[ASNum][tmpASNum];&lt;br /&gt;
            count++;&lt;br /&gt;
        }&lt;br /&gt;
        if (count &amp;gt;= retPeerCount)&lt;br /&gt;
            break;&lt;br /&gt;
    }&lt;br /&gt;
    return RetPeerlist;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Ideas ==&lt;br /&gt;
&lt;br /&gt;
* Verification of AS Shortest Paths&lt;br /&gt;
&lt;br /&gt;
* Experimental Setup&lt;br /&gt;
** # Peers: {100, 300, 500, 1000, 3000, 5000, 8000}&lt;br /&gt;
** # Senders: {2, 5, 10, 15, 20}&lt;br /&gt;
** # Sessions: 1000&lt;br /&gt;
** Algos: Random, Network, Geo, AS_Order(ISP-Friendly)&lt;br /&gt;
** Metrics: Min, Avg, Max, Output&lt;br /&gt;
** Output: Hops (Aggregate, P2P, C2P)&lt;br /&gt;
&lt;br /&gt;
* Search Efficiency:&lt;br /&gt;
** If multiple shortest paths exist, choose the one with the least &amp;quot;cost&amp;quot; (s2s=0, p2p=1, c2p=2)&lt;br /&gt;
&lt;br /&gt;
* Check practicality&lt;br /&gt;
** What data do we need initially and periodically?&lt;br /&gt;
** How often do we need it?&lt;br /&gt;
** We implement the algorithm to determine AS relationships&lt;br /&gt;
&lt;br /&gt;
* Large graphs&lt;br /&gt;
** Can we get the US &amp;amp; CA graph?&lt;br /&gt;
** Can we optimize this matrix?&lt;br /&gt;
&lt;br /&gt;
* Usage of ISP + Network / Geolocation Algorithms&lt;br /&gt;
** We reduce the number of IP hops&lt;br /&gt;
&lt;br /&gt;
* Diameter of the graph&lt;br /&gt;
&lt;br /&gt;
== References and Links ==&lt;br /&gt;
&lt;br /&gt;
* IP to ASNum Mapping project from Team Cymru&lt;br /&gt;
** http://www.team-cymru.org/?sec=8&amp;amp;opt=26&lt;br /&gt;
* CAIDA's Inter-AS relationships&lt;br /&gt;
** http://as-rank.caida.org/data/2008/as-rel.20080414.a0.01000.txt&lt;br /&gt;
* On AS-level Path Inference&lt;br /&gt;
** http://www.eecs.umich.edu/~zmao/Papers/routescope.pdf&lt;br /&gt;
* AT&amp;amp;T IP-Range and ASNum mapping&lt;br /&gt;
** http://www.research.att.com/~jiawang/as_traceroute/&lt;/div&gt;</summary>
		<author><name>Nitin.chiluka</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=Private:pCDN:Peer_Matching&amp;diff=1929</id>
		<title>Private:pCDN:Peer Matching</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=Private:pCDN:Peer_Matching&amp;diff=1929"/>
		<updated>2008-04-25T20:25:49Z</updated>

		<summary type="html">&lt;p&gt;Nitin.chiluka: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Proposed Algorithm Simulation==&lt;br /&gt;
* Experimental Setup:&lt;br /&gt;
* First, we obtain information about all ASes in the world from Team Cymru's IP to ASNum Mapping Project using netcat. This gives us every AS number and name, along with the country it belongs to. We filter this list to keep only Canadian (CA) ASes.&lt;br /&gt;
**	http://www.team-cymru.org/?sec=8&amp;amp;opt=26 &lt;br /&gt;
* CAIDA provides a dataset of inter-AS relationships among all ASes. We filter it, using only the CA ASes from step 1, to derive the inter-AS relationships within Canada. This yields 375 ASes and 643 links (53 p2p, 5 s2s, 585 c2p).&lt;br /&gt;
**	http://as-rank.caida.org/data/2008/as-rel.20080414.a0.01000.txt &lt;br /&gt;
* Using these links and ASes, we generate a shortest policy path matrix, which contains the shortest path between any two ASes, using the algorithm discussed in [Mao et al.].&lt;br /&gt;
**	http://www.eecs.umich.edu/~zmao/Papers/routescope.pdf &lt;br /&gt;
**	To check that the computed distances are correct, we verified them against the Rogers BGP site and the Teleglobe looking glass:&lt;br /&gt;
***	https://supernoc.rogerstelecom.net/ops/ and &lt;br /&gt;
***	http://lg.as6453.net/bin/lg.cgi &lt;br /&gt;
* AT&amp;amp;T provides an IP range to AS number mapping. We use this dataset to extract only the Canadian IP ranges and their corresponding ASes. We generate 84,000 IPs belonging to these ASes: 10 from each IP range to AS mapping.&lt;br /&gt;
**	http://www.research.att.com/~jiawang/as_traceroute/ &lt;br /&gt;
* Using the MaxMind IP-Geo database, we identify the latitude and longitude of each of these peers.&lt;br /&gt;
**	MaxMind IP-Geo database&lt;br /&gt;
* We test 4 algorithms - AS Order, Geo-location, Random, Network - on peer sets of varying sizes {100, 300, 500, 1000, 3000, 5000, 10000, 15000, 20000}, randomly picked from the 84,000 peers generated in step 4. We run all 4 algorithms on the same set of peers and record insertion, lookup, and deletion times, as well as AS hops. We repeat this experiment 5 times.&lt;br /&gt;
**	To test N peers, we take an array of 2N peers. The first N peers are used for insertion and deletion; the remaining N are used for lookups. That is, we first insert N peers (1 to N), then look up the next N peers (N+1 to 2N), and then delete the first N peers (1 to N). We record each individual time.&lt;br /&gt;
* The simulation is implemented in Java, and the experiments were performed on a machine with a 2 GHz CPU and 2 GB of RAM.&lt;br /&gt;
&lt;br /&gt;
'''ISP-Friendly Algorithm'''&lt;br /&gt;
Steps:&lt;br /&gt;
*	We populate an IP to AS map (IPASMap) using the IP range and AS mapping from the AT&amp;amp;T dataset. This map is used to look up the AS for a given IP.&lt;br /&gt;
&lt;br /&gt;
*	We then load the pre-computed shortest policy path matrix (ShortestPathMatrix), which contains the shortest distance between every pair of ASes.&lt;br /&gt;
&lt;br /&gt;
*	The main data structure is a HashMap (ASMap) keyed by ASNum, whose value is the peerlist (the list of peers in that AS).&lt;br /&gt;
&lt;br /&gt;
i.&lt;br /&gt;
Insert(Peer peer){&lt;br /&gt;
    // obtain the AS this peer belongs to&lt;br /&gt;
    ASNum = IPASMap.getASNum(peer.IP);&lt;br /&gt;
&lt;br /&gt;
    // create the AS entry if it does not already exist&lt;br /&gt;
    if (ASNum does not exist in ASMap)&lt;br /&gt;
        ASMap.insert(ASNum);&lt;br /&gt;
&lt;br /&gt;
    // now add this peer to that AS&lt;br /&gt;
    ASMap[ASNum].peerlist.add(peer);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
ii.&lt;br /&gt;
Delete(Peer peer){&lt;br /&gt;
//obtain the AS this peer belongs to&lt;br /&gt;
ASNum = IPASMap.getASNum(peer.IP);&lt;br /&gt;
&lt;br /&gt;
//remove the peer from that AS&lt;br /&gt;
ASMap[ASNum].peerlist.remove(peer);&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
iii.&lt;br /&gt;
Lookup(Peer peer){&lt;br /&gt;
    declare RetPeerlist;&lt;br /&gt;
    declare retPeerCount;   // number of peers to return&lt;br /&gt;
&lt;br /&gt;
    // obtain the AS this peer belongs to&lt;br /&gt;
    ASNum = IPASMap.getASNum(peer.IP);&lt;br /&gt;
&lt;br /&gt;
    // get a list of all ASes currently in the data structure&lt;br /&gt;
    ASlist = ASMap.getASList();&lt;br /&gt;
&lt;br /&gt;
    // sort these ASes by their distance from ASNum, using the matrix&lt;br /&gt;
    SortedASlist = ASlist.sort(ASNum, ShortestPathMatrix[][]);&lt;br /&gt;
&lt;br /&gt;
    count = 0;&lt;br /&gt;
    ashops = 0;   // running count of AS hops&lt;br /&gt;
&lt;br /&gt;
    for (i = 0; i &amp;lt; SortedASlist.size; i++){&lt;br /&gt;
        j = 0;&lt;br /&gt;
        while (count &amp;lt; retPeerCount &amp;amp;&amp;amp; j &amp;lt; SortedASlist[i].peerlist.size){&lt;br /&gt;
            Peer tmp = SortedASlist[i].peerlist[j++];&lt;br /&gt;
            RetPeerlist.add(tmp);&lt;br /&gt;
&lt;br /&gt;
            // accumulate the AS hops to the AS this peer came from&lt;br /&gt;
            tmpASNum = SortedASlist[i];&lt;br /&gt;
            ashops += ShortestPathMatrix[ASNum][tmpASNum];&lt;br /&gt;
            count++;&lt;br /&gt;
        }&lt;br /&gt;
        if (count &amp;gt;= retPeerCount)&lt;br /&gt;
            break;&lt;br /&gt;
    }&lt;br /&gt;
    return RetPeerlist;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Ideas ==&lt;br /&gt;
&lt;br /&gt;
* Verification of AS Shortest Paths&lt;br /&gt;
&lt;br /&gt;
* Experimental Setup&lt;br /&gt;
** # Peers: {100, 300, 500, 1000, 3000, 5000, 8000}&lt;br /&gt;
** # Senders: {2, 5, 10, 15, 20}&lt;br /&gt;
** # Sessions: 1000&lt;br /&gt;
** Algos: Random, Network, Geo, AS_Order(ISP-Friendly)&lt;br /&gt;
** Metrics: Min, Avg, Max, Output&lt;br /&gt;
** Output: Hops (Aggregate, P2P, C2P)&lt;br /&gt;
&lt;br /&gt;
* Search Efficiency:&lt;br /&gt;
** If multiple shortest paths exist, choose the one with the least &amp;quot;cost&amp;quot; (s2s=0, p2p=1, c2p=2)&lt;br /&gt;
&lt;br /&gt;
* Check practicality&lt;br /&gt;
** What data do we need initially and periodically?&lt;br /&gt;
** How often do we need it?&lt;br /&gt;
** We implement the algorithm to determine AS relationships&lt;br /&gt;
&lt;br /&gt;
* Large graphs&lt;br /&gt;
** Can we get the US &amp;amp; CA graph?&lt;br /&gt;
** Can we optimize this matrix?&lt;br /&gt;
&lt;br /&gt;
* Usage of ISP + Network / Geolocation Algorithms&lt;br /&gt;
** We reduce the number of IP hops&lt;br /&gt;
&lt;br /&gt;
* Diameter of the graph&lt;br /&gt;
&lt;br /&gt;
== References and Links ==&lt;br /&gt;
&lt;br /&gt;
* IP to ASNum Mapping project from Team Cymru&lt;br /&gt;
** http://www.team-cymru.org/?sec=8&amp;amp;opt=26&lt;br /&gt;
* CAIDA's Inter-AS relationships&lt;br /&gt;
** http://as-rank.caida.org/data/2008/as-rel.20080414.a0.01000.txt&lt;br /&gt;
* On AS-level Path Inference&lt;br /&gt;
** http://www.eecs.umich.edu/~zmao/Papers/routescope.pdf&lt;br /&gt;
* AT&amp;amp;T IP-Range and ASNum mapping&lt;br /&gt;
** http://www.research.att.com/~jiawang/as_traceroute/&lt;/div&gt;</summary>
		<author><name>Nitin.chiluka</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=Private:pCDN:Peer_Matching&amp;diff=1928</id>
		<title>Private:pCDN:Peer Matching</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=Private:pCDN:Peer_Matching&amp;diff=1928"/>
		<updated>2008-04-25T20:22:44Z</updated>

		<summary type="html">&lt;p&gt;Nitin.chiluka: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Proposed Algorithm Simulation==&lt;br /&gt;
Setup:&lt;br /&gt;
1.	First, we get info of all the ASes in the world from Team CYRMU's IP to ASNum Mapping Project using netcat. Using this project, we get all ASes' numbers and their names along with the country they belong to. We filter to derive only CA ASes.&lt;br /&gt;
a.	http://www.team-cymru.org/?sec=8&amp;amp;opt=26 &lt;br /&gt;
2.	CAIDA provides dataset on inter-AS relationships among all ASes. We filter to derive inter-AS relationships within Canada using only CA ASes generated from step 1. This accounts for 375 ASes, 643 links (53 p2p, 5 s2s, 585 c2p).&lt;br /&gt;
a.	http://as-rank.caida.org/data/2008/as-rel.20080414.a0.01000.txt &lt;br /&gt;
3.	Using these links and ASes, we generate a shortest policy path matrix which contains the shortest paths between any two ASes using the algorithm discussed in [Mao et al].&lt;br /&gt;
a.	http://www.eecs.umich.edu/~zmao/Papers/routescope.pdf &lt;br /&gt;
b.	To check that the computed distances are correct, we verified them against the Rogers and Teleglobe BGP looking-glass sites:&lt;br /&gt;
i.	https://supernoc.rogerstelecom.net/ops/ and &lt;br /&gt;
ii.	http://lg.as6453.net/bin/lg.cgi &lt;br /&gt;
4.	AT&amp;amp;T provides an IP-range to AS-number mapping. We use this dataset to extract only the Canadian IP ranges and their corresponding ASes, and generate 84,000 IPs belonging to these ASes: 10 from each IP range–AS mapping.&lt;br /&gt;
a.	http://www.research.att.com/~jiawang/as_traceroute/ &lt;br /&gt;
5.	Using the MaxMind IP-Geo database, we identify the latitude and longitude of each of these peers.&lt;br /&gt;
a.	Maxmind IP-Geo database&lt;br /&gt;
6.	We test three algorithms (AS Order, Geo-location, and Random) on peer sets of varying size, {100, 300, 500, 1000, 3000, 5000, 10000, 15000, 20000}, randomly picked from the 84,000 peers generated in step 4. We run all three algorithms on the same set of peers and record insertion, lookup, and deletion times as well as AS hops. We repeat this experiment 5 times.&lt;br /&gt;
a.	To test with N peers, we take an array of 2N peers. The first N peers are used for insertion and deletion; the remaining N are used for lookups. That is, we first insert N peers (1 to N), then look up the next N peers (N+1 to 2N), and finally delete the first N peers (1 to N), recording the time of each operation.&lt;br /&gt;
7.	The simulation is implemented in Java, and the experiments were performed on a machine with a 2 GHz CPU and 2 GB of RAM.&lt;br /&gt;
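The measurement procedure in step 6a can be sketched as follows (a hypothetical Python harness for illustration, not the actual Java simulator; the dict-backed StubIndex stands in for whichever matching algorithm is under test):&lt;br /&gt;

```python
# Hypothetical sketch of the timing procedure in step 6a: insert the first
# N peers, look up the next N, then delete the first N, timing each phase.
# The real simulation used the actual matching algorithms; this stub index
# just stores peers so the harness itself is runnable.
import random
import time

class StubIndex:
    def __init__(self):
        self.peers = {}
    def insert(self, peer):
        self.peers[peer] = True
    def lookup(self, peer):
        return peer in self.peers
    def delete(self, peer):
        self.peers.pop(peer, None)

def run_trial(all_peers, n, index):
    # Draw 2N peers: the first N for insert/delete, the next N for lookups.
    sample = random.sample(all_peers, 2 * n)
    timings = {}
    t0 = time.perf_counter()
    for p in sample[:n]:          # peers 1..N
        index.insert(p)
    timings["insert"] = time.perf_counter() - t0
    t0 = time.perf_counter()
    for p in sample[n:]:          # peers N+1..2N
        index.lookup(p)
    timings["lookup"] = time.perf_counter() - t0
    t0 = time.perf_counter()
    for p in sample[:n]:          # peers 1..N again
        index.delete(p)
    timings["delete"] = time.perf_counter() - t0
    return timings
```

Each phase's wall-clock time is recorded separately, mirroring how insertion, lookup, and deletion times are noted per run.&lt;br /&gt;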
&lt;br /&gt;
ISP-Friendly Algorithm&lt;br /&gt;
Steps:&lt;br /&gt;
1.	We populate an IP-to-AS map (IPASMap) using the IP range–AS mapping in the AT&amp;amp;T dataset. This map is used to look up the AS for a given IP.&lt;br /&gt;
&lt;br /&gt;
2.	We then load the pre-computed shortest policy path matrix (ShortestPathMatrix), which contains the shortest distance between every pair of ASes.&lt;br /&gt;
&lt;br /&gt;
3.	The data structure we use is a HashMap (ASMap) keyed by ASNum, whose value for each AS is its peerlist (the list of peers in that AS).&lt;br /&gt;
&lt;br /&gt;
i. Insert&lt;br /&gt;
Insert(Peer peer){&lt;br /&gt;
  // obtain the AS this peer belongs to&lt;br /&gt;
  ASNum = IPASMap.getASNum(peer.IP);&lt;br /&gt;
&lt;br /&gt;
  // create an entry for this AS if it does not already exist&lt;br /&gt;
  if (ASNum does not exist in ASMap)&lt;br /&gt;
    ASMap.insert(ASNum);&lt;br /&gt;
&lt;br /&gt;
  // now add this peer to that AS&lt;br /&gt;
  ASMap[ASNum].peerlist.add(peer);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
ii. Delete&lt;br /&gt;
Delete(Peer peer){&lt;br /&gt;
  // obtain the AS this peer belongs to&lt;br /&gt;
  ASNum = IPASMap.getASNum(peer.IP);&lt;br /&gt;
&lt;br /&gt;
  // remove the peer from that AS&lt;br /&gt;
  ASMap[ASNum].peerlist.remove(peer);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
iii. Lookup&lt;br /&gt;
// return retPeerCount peers for the given peer, preferring nearby ASes&lt;br /&gt;
Lookup(Peer peer, int retPeerCount){&lt;br /&gt;
  declare RetPeerlist;&lt;br /&gt;
&lt;br /&gt;
  // obtain the AS this peer belongs to&lt;br /&gt;
  ASNum = IPASMap.getASNum(peer.IP);&lt;br /&gt;
&lt;br /&gt;
  // get a list of all ASes currently in the data structure&lt;br /&gt;
  ASlist = ASMap.getASList();&lt;br /&gt;
&lt;br /&gt;
  // and sort these ASes by their distance from ASNum, using the matrix&lt;br /&gt;
  SortedASlist = ASlist.sort(ASNum, ShortestPathMatrix[][]);&lt;br /&gt;
&lt;br /&gt;
  count = 0;&lt;br /&gt;
  // running total of AS hops, recorded for the experiments&lt;br /&gt;
  ashops = 0;&lt;br /&gt;
&lt;br /&gt;
  for (i = 0; i &amp;lt; SortedASlist.size; i++) {&lt;br /&gt;
    j = 0;&lt;br /&gt;
    while (count &amp;lt; retPeerCount &amp;amp;&amp;amp; j &amp;lt; SortedASlist[i].peerlist.size) {&lt;br /&gt;
      Peer tmp = SortedASlist[i].peerlist[j++];&lt;br /&gt;
      RetPeerlist.add(tmp);&lt;br /&gt;
&lt;br /&gt;
      // add the hops to the AS this candidate peer belongs to&lt;br /&gt;
      tmpASNum = SortedASlist[i].ASNum;&lt;br /&gt;
      ashops += ShortestPathMatrix[ASNum][tmpASNum];&lt;br /&gt;
      count++;&lt;br /&gt;
    }&lt;br /&gt;
    if (count &amp;gt;= retPeerCount)&lt;br /&gt;
      break;&lt;br /&gt;
  }&lt;br /&gt;
  return RetPeerlist;&lt;br /&gt;
}&lt;br /&gt;
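In runnable form, the three operations above can be sketched roughly as follows (Python rather than the project's Java; the class name ASOrderIndex and the dict-based stand-ins for IPASMap and ShortestPathMatrix are invented for illustration):&lt;br /&gt;

```python
# Hypothetical sketch of the AS-order matching structure described above:
# peers are bucketed by AS, and lookups scan ASes in order of increasing
# policy-path distance from the requesting peer's AS.
from collections import defaultdict

class ASOrderIndex:
    def __init__(self, ip_to_as, shortest_path):
        self.ip_to_as = ip_to_as         # stand-in for IPASMap
        self.dist = shortest_path        # dist[(a, b)] = policy-path length
        self.as_map = defaultdict(list)  # ASNum to peerlist

    def insert(self, ip):
        self.as_map[self.ip_to_as[ip]].append(ip)

    def delete(self, ip):
        self.as_map[self.ip_to_as[ip]].remove(ip)

    def lookup(self, ip, ret_peer_count):
        src = self.ip_to_as[ip]
        result, as_hops = [], 0
        # visit candidate ASes nearest-first, as in SortedASlist
        for asn in sorted(self.as_map, key=lambda a: self.dist[(src, a)]):
            for peer in self.as_map[asn]:
                if len(result) == ret_peer_count:
                    return result, as_hops
                result.append(peer)
                as_hops += self.dist[(src, asn)]
        return result, as_hops
```

A real deployment would also need to drop empty AS buckets on delete and handle IPs missing from the map.&lt;br /&gt;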
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Ideas ==&lt;br /&gt;
&lt;br /&gt;
* Verification of AS Shortest Paths&lt;br /&gt;
&lt;br /&gt;
* Experimental Setup&lt;br /&gt;
** # Peers: {100, 300, 500, 1000, 3000, 5000, 8000}&lt;br /&gt;
** # Senders: {2, 5, 10, 15, 20}&lt;br /&gt;
** # Sessions: 1000&lt;br /&gt;
** Algos: Random, Network, Geo, AS_Order(ISP-Friendly)&lt;br /&gt;
** Metrics: Min, Avg, Max, Output&lt;br /&gt;
** Output: Hops (Aggregate, P2P, C2P)&lt;br /&gt;
&lt;br /&gt;
* Search Efficiency:&lt;br /&gt;
** If multiple shortest paths exist, choose the one with the least &amp;quot;cost&amp;quot; (s2s=0, p2p=1, c2p=2)&lt;br /&gt;
&lt;br /&gt;
* Check practicality&lt;br /&gt;
** What kind of data we need initially and periodically&lt;br /&gt;
** How often we need them&lt;br /&gt;
** We implement the algorithm to determine AS relationships&lt;br /&gt;
&lt;br /&gt;
* Large graphs&lt;br /&gt;
** Can we get the US &amp;amp; CA graph?&lt;br /&gt;
** Can we optimize this matrix?&lt;br /&gt;
&lt;br /&gt;
* Usage of ISP + Network / Geolocation Algorithms&lt;br /&gt;
** We reduce the number of IP hops&lt;br /&gt;
&lt;br /&gt;
* Diameter of the graph&lt;br /&gt;
&lt;br /&gt;
== References and Links ==&lt;br /&gt;
&lt;br /&gt;
* IP to ASNum Mapping project from Team Cymru&lt;br /&gt;
** http://www.team-cymru.org/?sec=8&amp;amp;opt=26&lt;br /&gt;
* CAIDA's Inter-AS relationships&lt;br /&gt;
** http://as-rank.caida.org/data/2008/as-rel.20080414.a0.01000.txt&lt;br /&gt;
* On AS-level Path Inference&lt;br /&gt;
** http://www.eecs.umich.edu/~zmao/Papers/routescope.pdf&lt;br /&gt;
* AT&amp;amp;T IP-Range and ASNum mapping&lt;br /&gt;
** http://www.research.att.com/~jiawang/as_traceroute/&lt;/div&gt;</summary>
		<author><name>Nitin.chiluka</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=Private:pCDN:Peer_Matching&amp;diff=1927</id>
		<title>Private:pCDN:Peer Matching</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=Private:pCDN:Peer_Matching&amp;diff=1927"/>
		<updated>2008-04-25T20:17:48Z</updated>

		<summary type="html">&lt;p&gt;Nitin.chiluka: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
== Proposed Algorithm ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Ideas ==&lt;br /&gt;
&lt;br /&gt;
* Verification of AS Shortest Paths&lt;br /&gt;
&lt;br /&gt;
* Experimental Setup&lt;br /&gt;
** # Peers: {100, 300, 500, 1000, 3000, 5000, 8000}&lt;br /&gt;
** # Senders: {2, 5, 10, 15, 20}&lt;br /&gt;
** # Sessions: 1000&lt;br /&gt;
** Algos: Random, Network, Geo, AS_Order(ISP-Friendly)&lt;br /&gt;
** Metrics: Min, Avg, Max, Output&lt;br /&gt;
** Output: Hops (Aggregate, P2P, C2P)&lt;br /&gt;
&lt;br /&gt;
* Search Efficiency:&lt;br /&gt;
** If multiple shortest paths exist, choose the one with the least &amp;quot;cost&amp;quot; (s2s=0, p2p=1, c2p=2)&lt;br /&gt;
&lt;br /&gt;
* Check practicality&lt;br /&gt;
** What kind of data we need initially and periodically&lt;br /&gt;
** How often we need them&lt;br /&gt;
** We implement the algorithm to determine AS relationships&lt;br /&gt;
&lt;br /&gt;
* Large graphs&lt;br /&gt;
** Can we get the US &amp;amp; CA graph?&lt;br /&gt;
** Can we optimize this matrix?&lt;br /&gt;
&lt;br /&gt;
* Usage of ISP + Network / Geolocation Algorithms&lt;br /&gt;
** We reduce the number of IP hops&lt;br /&gt;
&lt;br /&gt;
* Diameter of the graph&lt;br /&gt;
&lt;br /&gt;
== References and Links ==&lt;br /&gt;
&lt;br /&gt;
* IP to ASNum Mapping project from Team Cymru&lt;br /&gt;
** http://www.team-cymru.org/?sec=8&amp;amp;opt=26&lt;br /&gt;
* CAIDA's Inter-AS relationships&lt;br /&gt;
** http://as-rank.caida.org/data/2008/as-rel.20080414.a0.01000.txt&lt;br /&gt;
* On AS-level Path Inference&lt;br /&gt;
** http://www.eecs.umich.edu/~zmao/Papers/routescope.pdf&lt;br /&gt;
* AT&amp;amp;T IP-Range and ASNum mapping&lt;br /&gt;
** http://www.research.att.com/~jiawang/as_traceroute/&lt;/div&gt;</summary>
		<author><name>Nitin.chiluka</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=Private:pCDN:Peer_Matching&amp;diff=1926</id>
		<title>Private:pCDN:Peer Matching</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=Private:pCDN:Peer_Matching&amp;diff=1926"/>
		<updated>2008-04-25T20:13:00Z</updated>

		<summary type="html">&lt;p&gt;Nitin.chiluka: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;some intro &lt;br /&gt;
&lt;br /&gt;
== Proposed Algorithm ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Ideas ==&lt;br /&gt;
&lt;br /&gt;
* Verification of AS Shortest Paths&lt;br /&gt;
&lt;br /&gt;
* Experimental Setup&lt;br /&gt;
** # Peers: {100, 300, 500, 1000, 3000, 5000, 8000}&lt;br /&gt;
** # Senders: {2, 5, 10, 15, 20}&lt;br /&gt;
** # Sessions: 1000&lt;br /&gt;
** Algos: Random, Network, Geo, AS_Order(ISP-Friendly)&lt;br /&gt;
** Metrics: Min, Avg, Max, Output&lt;br /&gt;
** Output: Hops (Aggregate, P2P, C2P)&lt;br /&gt;
&lt;br /&gt;
* Search Efficiency:&lt;br /&gt;
** If multiple shortest paths exist, choose the one with the least &amp;quot;cost&amp;quot; (s2s=0, p2p=1, c2p=2)&lt;br /&gt;
&lt;br /&gt;
* Check practicality&lt;br /&gt;
** What kind of data we need initially and periodically&lt;br /&gt;
** How often we need them&lt;br /&gt;
** We implement the algorithm to determine AS relationships&lt;br /&gt;
&lt;br /&gt;
* Large graphs&lt;br /&gt;
** Can we get the US &amp;amp; CA graph?&lt;br /&gt;
** Can we optimize this matrix?&lt;br /&gt;
&lt;br /&gt;
* Usage of ISP + Network / Geolocation Algorithms&lt;br /&gt;
** We reduce the number of IP hops&lt;br /&gt;
&lt;br /&gt;
* Diameter of the graph&lt;br /&gt;
&lt;br /&gt;
== References and Links ==&lt;br /&gt;
&lt;br /&gt;
* links to various resources&lt;/div&gt;</summary>
		<author><name>Nitin.chiluka</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=pCDN:Faq&amp;diff=1665</id>
		<title>pCDN:Faq</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=pCDN:Faq&amp;diff=1665"/>
		<updated>2008-03-11T19:03:48Z</updated>

		<summary type="html">&lt;p&gt;Nitin.chiluka: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Frequently Asked Questions for Users =&lt;br /&gt;
&lt;br /&gt;
* '''Q:'''  Does Sun Java 1.5.09 work with the pCDN client?&lt;br /&gt;
** '''A:''' No, Java runtime 1.6 is required by the pCDN client.&lt;br /&gt;
&lt;br /&gt;
= Frequently Asked Questions for Developers =&lt;br /&gt;
&lt;br /&gt;
* '''Q:''' What is the CBC's Intranet topology?&lt;br /&gt;
** '''A:''' The internal CBC network may contain both public addresses (from 1.0.0.0/8, which is not allocated to CBC) and private addresses, but there is no NAT between them. The server is presently on a 1.0.0.0/8 address, and there is no firewall between it and the rest of the users inside CBC. The server is connected to a central router/switch where all the other subnets in the same city connect, and intercity links interconnect those routers/switches in the other locations. Addresses are allocated in blocks to different cities: here in Montreal we have 1.176.0.0/16, in Toronto they have 1.151.0.0/16, and so on. I would have to check, but I believe you can assume that two addresses in the same /16 are in the same building or city, connected by high-speed links.&lt;br /&gt;
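The same-/16 assumption above can be sketched as a small check (a hypothetical Python helper for illustration, not part of pCDN):&lt;br /&gt;

```python
# Hypothetical helper reflecting the assumption above: two addresses in
# the same /16 are treated as being in the same building or city.
import ipaddress

def same_slash16(a, b):
    # Widen address "a" to its covering /16 network and test membership.
    net_a = ipaddress.ip_network(a + "/16", strict=False)
    return ipaddress.ip_address(b) in net_a
```

For example, 1.176.0.1 and 1.176.200.9 would be treated as co-located, while 1.151.0.1 would not.&lt;br /&gt;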
&lt;br /&gt;
&lt;br /&gt;
= Frequently Asked Questions for Administrators =&lt;br /&gt;
* '''Q:''' How to import new content into the database?&lt;br /&gt;
** '''A:''' &lt;br /&gt;
*** Step 1: Open the Admin application by going to Start &amp;gt; All Programs &amp;gt; pCDN Server &amp;gt; pCDN Admin, and press the &amp;quot;Connect&amp;quot; button in the Login screen that appears.&lt;br /&gt;
*** Step 2: If authentication succeeds, the &amp;quot;Server Manager&amp;quot; window opens. Click Manage on the menubar and choose Content Manager &amp;gt; New Policy.&lt;br /&gt;
*** Step 3: Enter a group name and, if you wish to restrict access to the content, configure the settings (Subnet, Geo-location, and time period) for this group. Then press &amp;quot;Save&amp;quot; to save the group's parameters into the database.&lt;br /&gt;
*** Step 4: From the menubar, go to Manage &amp;gt; Content Manager &amp;gt; Open and switch to the &amp;quot;Import Content&amp;quot; tab. Select the desired group from the combobox, enter a feed URL along with the output feed filename, and press &amp;quot;Import and Convert&amp;quot; to import all the files in the feed into the database. To import a non-feed URL, enter it in the corresponding section and press the &amp;quot;Import&amp;quot; button. The list of imported files appears in the &amp;quot;Media Files&amp;quot; tab, and every newly created group appears in the &amp;quot;Groups&amp;quot; tab.&lt;br /&gt;
&lt;br /&gt;
* '''Q:''' How to update/delete groups?&lt;br /&gt;
** '''A:''' &lt;br /&gt;
*** Update a group: Go to the Groups tab and double-click the group whose policies you would like to update. This opens a Policy window; make the desired changes and press &amp;quot;Save&amp;quot; to save the new settings, or press &amp;quot;Cancel&amp;quot; to discard your changes and leave the group's policies unchanged.&lt;br /&gt;
*** Delete a group: Go to the Groups tab, select the desired group, and press the DEL key to delete the record. Note that this also deletes all the files that belong to this group. The corresponding changes are visible in both the Media Files tab's table and the Groups tab's table.&lt;br /&gt;
&lt;br /&gt;
* '''Q:''' How to update/delete a file?&lt;br /&gt;
** '''A:''' &lt;br /&gt;
*** Update a file: To move a file from one group to another, select the corresponding row in the Media Files tab and, from the combobox in the third column, choose the target group.&lt;br /&gt;
*** Delete a file: Select the desired file and press the DEL key to delete it.&lt;/div&gt;</summary>
		<author><name>Nitin.chiluka</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=pCDN:Faq&amp;diff=1664</id>
		<title>pCDN:Faq</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=pCDN:Faq&amp;diff=1664"/>
		<updated>2008-03-11T19:02:08Z</updated>

		<summary type="html">&lt;p&gt;Nitin.chiluka: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Frequently Asked Questions for Users =&lt;br /&gt;
&lt;br /&gt;
* '''Q:'''  Does Sun Java 1.5.09 work with the pCDN client?&lt;br /&gt;
** '''A:''' No, Java runtime 1.6 is required by the pCDN client.&lt;br /&gt;
&lt;br /&gt;
= Frequently Asked Questions for Developers =&lt;br /&gt;
&lt;br /&gt;
* '''Q:''' What is the CBC's Intranet topology?&lt;br /&gt;
** '''A:''' The internal CBC network may contain both public addresses (from 1.0.0.0/8, which is not allocated to CBC) and private addresses, but there is no NAT between them. The server is presently on a 1.0.0.0/8 address, and there is no firewall between it and the rest of the users inside CBC. The server is connected to a central router/switch where all the other subnets in the same city connect, and intercity links interconnect those routers/switches in the other locations. Addresses are allocated in blocks to different cities: here in Montreal we have 1.176.0.0/16, in Toronto they have 1.151.0.0/16, and so on. I would have to check, but I believe you can assume that two addresses in the same /16 are in the same building or city, connected by high-speed links.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Frequently Asked Questions for Administrators =&lt;br /&gt;
* '''Q:''' How to import new content into the database?&lt;br /&gt;
** '''A:''' &lt;br /&gt;
*** Step 1: Open the Admin application by going to Start &amp;gt; All Programs &amp;gt; pCDN Server &amp;gt; pCDN Admin, and press the &amp;quot;Connect&amp;quot; button in the Login screen that appears.&lt;br /&gt;
*** Step 2: If authentication succeeds, the &amp;quot;Server Manager&amp;quot; window opens. Click Manage on the menubar and choose Content Manager &amp;gt; New Policy.&lt;br /&gt;
*** Step 3: Enter a group name and, if you wish to restrict access to the content, configure the settings (Subnet, Geo-location, and time period) for this group. Then press &amp;quot;Save&amp;quot; to save the group's parameters into the database.&lt;br /&gt;
*** Step 4: From the menubar, go to Manage &amp;gt; Content Manager &amp;gt; Open and switch to the &amp;quot;Import Content&amp;quot; tab. Select the desired group from the combobox, enter a feed URL along with the output feed filename, and press &amp;quot;Import and Convert&amp;quot; to import all the files in the feed into the database. To import a non-feed URL, enter it in the corresponding section and press the &amp;quot;Import&amp;quot; button. The list of imported files appears in the &amp;quot;Media Files&amp;quot; tab, and every newly created group appears in the &amp;quot;Groups&amp;quot; tab.&lt;br /&gt;
&lt;br /&gt;
* '''Q:''' How to update/delete groups?&lt;br /&gt;
** '''A:''' &lt;br /&gt;
*** Update a group: Go to the Groups tab and double-click the group whose policies you would like to update. This opens a Policy window; make the desired changes and press &amp;quot;Save&amp;quot; to save the new settings, or press &amp;quot;Cancel&amp;quot; to discard your changes and leave the group's policies unchanged.&lt;br /&gt;
*** Delete a group: Go to the Groups tab, select the desired group, and press the DEL key to delete the record. Note that this also deletes all the files that belong to this group. The corresponding changes are visible in both the Media Files tab's table and the Groups tab's table.&lt;br /&gt;
&lt;br /&gt;
* '''Q:''' How to update/delete a file?&lt;br /&gt;
** '''A:''' &lt;br /&gt;
*** '''Update a file:''' To move a file from one group to another, select the corresponding row in the Media Files tab and, from the combobox in the third column, choose the target group.&lt;br /&gt;
*** '''Delete a file:''' Select the desired file and press the DEL key to delete it.&lt;/div&gt;</summary>
		<author><name>Nitin.chiluka</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=pCDN:Faq&amp;diff=1663</id>
		<title>pCDN:Faq</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=pCDN:Faq&amp;diff=1663"/>
		<updated>2008-03-11T19:01:53Z</updated>

		<summary type="html">&lt;p&gt;Nitin.chiluka: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Frequently Asked Questions for Users =&lt;br /&gt;
&lt;br /&gt;
* '''Q:'''  Does Sun Java 1.5.09 work with the pCDN client?&lt;br /&gt;
** '''A:''' No, Java runtime 1.6 is required by the pCDN client.&lt;br /&gt;
&lt;br /&gt;
= Frequently Asked Questions for Developers =&lt;br /&gt;
&lt;br /&gt;
* '''Q:''' What is the CBC's Intranet topology?&lt;br /&gt;
** '''A:''' The internal CBC network may contain both public addresses (from 1.0.0.0/8, which is not allocated to CBC) and private addresses, but there is no NAT between them. The server is presently on a 1.0.0.0/8 address, and there is no firewall between it and the rest of the users inside CBC. The server is connected to a central router/switch where all the other subnets in the same city connect, and intercity links interconnect those routers/switches in the other locations. Addresses are allocated in blocks to different cities: here in Montreal we have 1.176.0.0/16, in Toronto they have 1.151.0.0/16, and so on. I would have to check, but I believe you can assume that two addresses in the same /16 are in the same building or city, connected by high-speed links.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Frequently Asked Questions for Administrators =&lt;br /&gt;
* '''Q:''' How to import new content into the database?&lt;br /&gt;
** '''A:''' &lt;br /&gt;
*** Step 1: Open the Admin application by going to Start &amp;gt; All Programs &amp;gt; pCDN Server &amp;gt; pCDN Admin, and press the &amp;quot;Connect&amp;quot; button in the Login screen that appears.&lt;br /&gt;
*** Step 2: If authentication succeeds, the &amp;quot;Server Manager&amp;quot; window opens. Click Manage on the menubar and choose Content Manager &amp;gt; New Policy.&lt;br /&gt;
*** Step 3: Enter a group name and, if you wish to restrict access to the content, configure the settings (Subnet, Geo-location, and time period) for this group. Then press &amp;quot;Save&amp;quot; to save the group's parameters into the database.&lt;br /&gt;
*** Step 4: From the menubar, go to Manage &amp;gt; Content Manager &amp;gt; Open and switch to the &amp;quot;Import Content&amp;quot; tab. Select the desired group from the combobox, enter a feed URL along with the output feed filename, and press &amp;quot;Import and Convert&amp;quot; to import all the files in the feed into the database. To import a non-feed URL, enter it in the corresponding section and press the &amp;quot;Import&amp;quot; button. The list of imported files appears in the &amp;quot;Media Files&amp;quot; tab, and every newly created group appears in the &amp;quot;Groups&amp;quot; tab.&lt;br /&gt;
&lt;br /&gt;
* '''Q:''' How to update/delete groups?&lt;br /&gt;
** '''A:''' &lt;br /&gt;
*** Update a group: Go to the Groups tab and double-click the group whose policies you would like to update. This opens a Policy window; make the desired changes and press &amp;quot;Save&amp;quot; to save the new settings, or press &amp;quot;Cancel&amp;quot; to discard your changes and leave the group's policies unchanged.&lt;br /&gt;
*** Delete a group: Go to the Groups tab, select the desired group, and press the DEL key to delete the record. Note that this also deletes all the files that belong to this group. The corresponding changes are visible in both the Media Files tab's table and the Groups tab's table.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* '''Q:''' How to update/delete a file?&lt;br /&gt;
** '''A:''' &lt;br /&gt;
*** '''Update a file:''' To move a file from one group to another, select the corresponding row in the Media Files tab and, from the combobox in the third column, choose the target group.&lt;br /&gt;
*** '''Delete a file:''' Select the desired file and press the DEL key to delete it.&lt;/div&gt;</summary>
		<author><name>Nitin.chiluka</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=pCDN:Faq&amp;diff=1662</id>
		<title>pCDN:Faq</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=pCDN:Faq&amp;diff=1662"/>
		<updated>2008-03-11T18:50:26Z</updated>

		<summary type="html">&lt;p&gt;Nitin.chiluka: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Frequently Asked Questions for Users =&lt;br /&gt;
&lt;br /&gt;
* '''Q:'''  Does Sun Java 1.5.09 work with the pCDN client?&lt;br /&gt;
** '''A:''' No, Java runtime 1.6 is required by the pCDN client.&lt;br /&gt;
&lt;br /&gt;
= Frequently Asked Questions for Developers =&lt;br /&gt;
&lt;br /&gt;
* '''Q:''' What is the CBC's Intranet topology?&lt;br /&gt;
** '''A:''' The internal CBC network may contain both public addresses (from 1.0.0.0/8, which is not allocated to CBC) and private addresses, but there is no NAT between them. The server is presently on a 1.0.0.0/8 address, and there is no firewall between it and the rest of the users inside CBC. The server is connected to a central router/switch where all the other subnets in the same city connect, and intercity links interconnect those routers/switches in the other locations. Addresses are allocated in blocks to different cities: here in Montreal we have 1.176.0.0/16, in Toronto they have 1.151.0.0/16, and so on. I would have to check, but I believe you can assume that two addresses in the same /16 are in the same building or city, connected by high-speed links.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Frequently Asked Questions for Administrators =&lt;br /&gt;
* &amp;quot;&amp;quot;Q:&amp;quot;&amp;quot; How to import new content into the database?&lt;br /&gt;
** &amp;quot;&amp;quot;A:&amp;quot;&amp;quot; &lt;br /&gt;
*** Step 1: Open the Admin application by going to Start &amp;gt; All Programs &amp;gt; pCDN Server &amp;gt; pCDN Admin, and press the &amp;quot;Connect&amp;quot; button in the Login screen that appears.&lt;br /&gt;
*** Step 2: If authentication succeeds, the &amp;quot;Server Manager&amp;quot; window opens. Click Manage on the menubar and choose Content Manager &amp;gt; New Policy.&lt;br /&gt;
*** Step 3: Enter a group name and, if you wish to restrict access to the content, configure the settings (Subnet, Geo-location, and time period) for this group. Then press &amp;quot;Save&amp;quot; to save the group's parameters into the database.&lt;br /&gt;
*** Step 4: From the menubar, go to Manage &amp;gt; Content Manager &amp;gt; Open and switch to the &amp;quot;Import Content&amp;quot; tab. Select the desired group from the combobox, enter a feed URL along with the output feed filename, and press &amp;quot;Import and Convert&amp;quot; to import all the files in the feed into the database. To import a non-feed URL, enter it in the corresponding section and press the &amp;quot;Import&amp;quot; button. The list of imported files appears in the &amp;quot;Media Files&amp;quot; tab, and every newly created group appears in the &amp;quot;Groups&amp;quot; tab.&lt;br /&gt;
&lt;br /&gt;
* &amp;quot;&amp;quot;Q:&amp;quot;&amp;quot; How to update/delete groups?&lt;br /&gt;
** &amp;quot;&amp;quot;A:&amp;quot;&amp;quot; &lt;br /&gt;
*** Update a group: Go to the Groups tab and double-click the group whose policies you would like to update. This opens a Policy window; make the desired changes and press &amp;quot;Save&amp;quot; to save the new settings, or press &amp;quot;Cancel&amp;quot; to discard your changes and leave the group's policies unchanged.&lt;br /&gt;
*** Delete a group: Go to the Groups tab, select the desired group, and press the DEL key to delete the record. Note that this also deletes all the files that belong to this group. The corresponding changes are visible in both the Media Files tab's table and the Groups tab's table.&lt;/div&gt;</summary>
		<author><name>Nitin.chiluka</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=pCDN:Faq&amp;diff=1661</id>
		<title>pCDN:Faq</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=pCDN:Faq&amp;diff=1661"/>
		<updated>2008-03-11T18:41:05Z</updated>

		<summary type="html">&lt;p&gt;Nitin.chiluka: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Frequently Asked Questions for Users =&lt;br /&gt;
&lt;br /&gt;
* '''Q:'''  Does Sun Java 1.5.09 work with the pCDN client?&lt;br /&gt;
** '''A:''' No, Java runtime 1.6 is required by the pCDN client.&lt;br /&gt;
&lt;br /&gt;
= Frequently Asked Questions for Developers =&lt;br /&gt;
&lt;br /&gt;
* '''Q:''' What is the CBC's Intranet topology?&lt;br /&gt;
** '''A:''' The internal CBC network may contain both public addresses (from 1.0.0.0/8, which is not allocated to CBC) and private addresses, but there is no NAT between them. The server is presently on a 1.0.0.0/8 address, and there is no firewall between it and the rest of the users inside CBC. The server is connected to a central router/switch where all the other subnets in the same city connect, and intercity links interconnect those routers/switches in the other locations. Addresses are allocated in blocks to different cities: here in Montreal we have 1.176.0.0/16, in Toronto they have 1.151.0.0/16, and so on. I would have to check, but I believe you can assume that two addresses in the same /16 are in the same building or city, connected by high-speed links.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Frequently Asked Questions for Administrators =&lt;br /&gt;
* &amp;quot;&amp;quot;Q:&amp;quot;&amp;quot; How to import new content into the database?&lt;br /&gt;
** &amp;quot;&amp;quot;A:&amp;quot;&amp;quot; &lt;br /&gt;
*** Step 1: Open the Admin application by going to Start &amp;gt; All Programs &amp;gt; pCDN Server &amp;gt; pCDN Admin, and press the &amp;quot;Connect&amp;quot; button in the Login screen that appears.&lt;br /&gt;
*** Step 2: If authentication succeeds, the &amp;quot;Server Manager&amp;quot; window opens. Click Manage on the menubar and choose Content Manager &amp;gt; New Policy.&lt;br /&gt;
*** Step 3: Enter a group name and, if you wish to restrict access to the content, configure the settings (Subnet, Geo-location, and time period) for this group. Then press &amp;quot;Save&amp;quot; to save the group's parameters into the database.&lt;br /&gt;
*** Step 4: From the menubar, go to Manage &amp;gt; Content Manager &amp;gt; Open and switch to the &amp;quot;Import Content&amp;quot; tab. Select the desired group from the combobox, enter a feed URL along with the output feed filename, and press &amp;quot;Import and Convert&amp;quot; to import all the files in the feed into the database. To import a non-feed URL, enter it in the corresponding section and press the &amp;quot;Import&amp;quot; button. The list of imported files appears in the &amp;quot;Media Files&amp;quot; tab, and every newly created group appears in the &amp;quot;Groups&amp;quot; tab.&lt;/div&gt;</summary>
		<author><name>Nitin.chiluka</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=pCDN:Faq&amp;diff=1660</id>
		<title>pCDN:Faq</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=pCDN:Faq&amp;diff=1660"/>
		<updated>2008-03-11T18:40:00Z</updated>

		<summary type="html">&lt;p&gt;Nitin.chiluka: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Frequently Asked Questions for Users =&lt;br /&gt;
&lt;br /&gt;
* '''Q:'''  Does Sun Java 1.5.09 work with the pCDN client?&lt;br /&gt;
** '''A:''' No, Java runtime 1.6 is required by the pCDN client.&lt;br /&gt;
&lt;br /&gt;
= Frequently Asked Questions for Developers =&lt;br /&gt;
&lt;br /&gt;
* '''Q:''' What is the CBC's Intranet topology?&lt;br /&gt;
** '''A:''' The internal CBC network may contain both public addresses (from 1.0.0.0/8, which is not allocated to CBC) and private addresses, but there is no NAT between them. The server is presently on a 1.0.0.0/8 address, and there is no firewall between it and the rest of the users inside CBC. The server is connected to a central router/switch where all the other subnets in the same city connect, and intercity links interconnect those routers/switches in the other locations. Addresses are allocated in blocks to different cities: here in Montreal we have 1.176.0.0/16, in Toronto they have 1.151.0.0/16, and so on. I would have to check, but I believe you can assume that two addresses in the same /16 are in the same building or city, connected by high-speed links.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Frequently Asked Questions for Administrators =&lt;br /&gt;
* &amp;quot;&amp;quot;Q:&amp;quot;&amp;quot; How to import new content into the database?&lt;br /&gt;
** &amp;quot;&amp;quot;A:&amp;quot;&amp;quot; &lt;br /&gt;
*** Step 1: Open the Admin application by going to Start &amp;gt; All Programs &amp;gt; pCDN Server &amp;gt; pCDN Admin, and press the &amp;quot;Connect&amp;quot; button in the Login screen that appears.&lt;br /&gt;
*** Step 2: If authentication succeeds, the &amp;quot;Server Manager&amp;quot; window opens. Click Manage on the menubar and choose Content Manager &amp;gt; New Policy.&lt;br /&gt;
*** Step 3: Enter a group name and, if you wish to restrict access to the content, configure the settings (Subnet, Geo-location, and time period) for this group. Then press &amp;quot;Save&amp;quot; to save the group's parameters into the database.&lt;br /&gt;
*** Step 4: From the menubar, go to Manage &amp;gt; Content Manager &amp;gt; Open and switch to the &amp;quot;Import Content&amp;quot; tab. Select the desired group from the combobox, enter a feed URL along with the output feed filename, and press &amp;quot;Import and Convert&amp;quot; to import all the files in the feed into the database. To import a non-feed URL, enter it in the corresponding section and press the &amp;quot;Import&amp;quot; button. The list of imported files appears in the &amp;quot;Media Files&amp;quot; tab.&lt;/div&gt;</summary>
		<author><name>Nitin.chiluka</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=pCDN:Faq&amp;diff=1659</id>
		<title>pCDN:Faq</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=pCDN:Faq&amp;diff=1659"/>
		<updated>2008-03-11T18:39:29Z</updated>

		<summary type="html">&lt;p&gt;Nitin.chiluka: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Frequently Asked Questions for Users =&lt;br /&gt;
&lt;br /&gt;
* '''Q:'''  Does Sun Java 1.5.09 work with the pCDN client?&lt;br /&gt;
** '''A:''' No, Java runtime 1.6 is required by the pCDN client.&lt;br /&gt;
&lt;br /&gt;
= Frequently Asked Questions for Developers =&lt;br /&gt;
&lt;br /&gt;
* '''Q:''' What is the CBC's Intranet topology?&lt;br /&gt;
** '''A:''' The internal CBC network may contain public addresses (from 1.0.0.0/8, which is not actually allocated to CBC) as well as private addresses, but there is no NAT between them. The server is presently on a 1.0.0.0/8 address, and there is no firewall between it and the rest of the users inside CBC. The server is connected to a central router/switch where all the other subnets in the same city connect. Intercity links interconnect those routers/switches across all other locations. Addresses are allocated in blocks to different cities: here in Montreal we have 1.176.0.0/16, in Toronto they have 1.151.0.0/16, and so on. I would have to check, but I believe you can assume that if two addresses are in the same /16, they are in the same building or the same city, connected by high-speed links.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Frequently Asked Questions for Administrators =&lt;br /&gt;
* '''Q:''' How do I import new content into the database?&lt;br /&gt;
** '''A:''' &lt;br /&gt;
Step 1: Open the Admin application by going to Start &amp;gt; All Programs &amp;gt; pCDN Server &amp;gt; pCDN Admin. Press the &amp;quot;Connect&amp;quot; button in the Login screen that appears. &lt;br /&gt;
Step 2: If authentication succeeds, the &amp;quot;Server Manager&amp;quot; window opens. Click Manage on the menu bar and choose Content Manager &amp;gt; New Policy. &lt;br /&gt;
Step 3: Enter a group name and, if you wish to impose restrictions on content access, configure the settings (Subnet, Geo-location, and time period) for this group. Then press &amp;quot;Save&amp;quot; to save the group's parameters to the database. &lt;br /&gt;
Step 4: From the menu bar, go to Manage &amp;gt; Content Manager &amp;gt; Open, then open the &amp;quot;Import Content&amp;quot; tab. Select the desired group from the combo box and enter a feed URL along with the output feed filename. Press &amp;quot;Import and Convert&amp;quot; to import all the files in this feed into the database. To import a non-feed URL into the database, enter the URL in the corresponding section and press the &amp;quot;Import&amp;quot; button. The list of imported files appears in the &amp;quot;Media Files&amp;quot; tab.&lt;/div&gt;</summary>
		<author><name>Nitin.chiluka</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=pCDN:Faq&amp;diff=1658</id>
		<title>pCDN:Faq</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=pCDN:Faq&amp;diff=1658"/>
		<updated>2008-03-11T18:37:26Z</updated>

		<summary type="html">&lt;p&gt;Nitin.chiluka: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Frequently Asked Questions for Users =&lt;br /&gt;
&lt;br /&gt;
* '''Q:'''  Does Sun Java 1.5.09 work with the pCDN client?&lt;br /&gt;
** '''A:''' No, Java runtime 1.6 is required by the pCDN client.&lt;br /&gt;
&lt;br /&gt;
= Frequently Asked Questions for Developers =&lt;br /&gt;
&lt;br /&gt;
* '''Q:''' What is the CBC's Intranet topology?&lt;br /&gt;
** '''A:''' The internal CBC network may contain public addresses (from 1.0.0.0/8, which is not actually allocated to CBC) as well as private addresses, but there is no NAT between them. The server is presently on a 1.0.0.0/8 address, and there is no firewall between it and the rest of the users inside CBC. The server is connected to a central router/switch where all the other subnets in the same city connect. Intercity links interconnect those routers/switches across all other locations. Addresses are allocated in blocks to different cities: here in Montreal we have 1.176.0.0/16, in Toronto they have 1.151.0.0/16, and so on. I would have to check, but I believe you can assume that if two addresses are in the same /16, they are in the same building or the same city, connected by high-speed links.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Frequently Asked Questions for Administrators =&lt;br /&gt;
* '''Q:''' How do I import new content into the database?&lt;br /&gt;
** '''A:''' Open the Admin application by going to Start &amp;gt; All Programs &amp;gt; pCDN Server &amp;gt; pCDN Admin. Press the &amp;quot;Connect&amp;quot; button in the Login screen that appears. This opens the &amp;quot;Server Manager&amp;quot; window. Click Manage on the menu bar and choose Content Manager &amp;gt; New Policy. Enter a group name and, if you wish to impose restrictions on content access, configure the settings (Subnet, Geo-location, and time period) for this group. Then press &amp;quot;Save&amp;quot; to save the group's parameters to the database. Next, go to Manage &amp;gt; Content Manager &amp;gt; Open, and open the &amp;quot;Import Content&amp;quot; tab. Select the desired group and enter a feed URL along with the output feed filename. Press &amp;quot;Import and Convert&amp;quot; to import all the files in this feed into the database. To import a non-feed URL into the database, enter the URL in the corresponding section and press the &amp;quot;Import&amp;quot; button. The list of imported files appears in the &amp;quot;Media Files&amp;quot; tab.&lt;/div&gt;</summary>
		<author><name>Nitin.chiluka</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=pCDN:Testplan&amp;diff=1578</id>
		<title>pCDN:Testplan</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=pCDN:Testplan&amp;diff=1578"/>
		<updated>2008-03-04T20:55:31Z</updated>

		<summary type="html">&lt;p&gt;Nitin.chiluka: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Client ==&lt;br /&gt;
# Download several new podcasts from the web servers (make sure the media/ directory contains no media files before this test). &lt;br /&gt;
# Download new podcasts from several pCDN senders.&lt;br /&gt;
# Download podcasts that were downloaded before (and still in media/ directory).&lt;br /&gt;
# Reduce the hard-disk quota to 50 MB using the user interface, and download several podcasts.&lt;br /&gt;
# Reduce the memory quota to 4 MB, and download several podcasts.&lt;br /&gt;
# Download new podcasts from several senders and unplug the Ethernet cable from one of the senders.&lt;br /&gt;
# Download new podcasts from several senders and unplug the Ethernet cables from all of the senders.&lt;br /&gt;
# Download new podcasts from several senders and unplug the Ethernet cable from the receiver. &lt;br /&gt;
# Manually truncate a media file in the media/ directory, and download that file from the podcast client.&lt;br /&gt;
# Manually truncate a media file in the media/ directory, and request that file from another pCDN client.&lt;br /&gt;
# Download several new podcasts using a pCDN client behind a NAT box. Check whether the IP/port are correctly reported at the pCDN server. Also try to use this pCDN client as the sender.&lt;br /&gt;
# Download several new podcasts using a pCDN client on a machine with multiple IPs. Use the user interface to select an outgoing IP. Then, change the NICs' IP addresses, and see whether the pCDN client continues to work after reboots.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Server ==&lt;br /&gt;
Note: In case a tracker gets disconnected, re-connection suffices (e.g., re-plugging the cable if the cable was unplugged). There is no need to re-start the tracker.&lt;br /&gt;
&lt;br /&gt;
# When trackers are up and connected to each other:&lt;br /&gt;
#*peers download files from each other as expected.&lt;br /&gt;
# Have a peer download podcasts, and then bring the current primary tracker down (or unplug its network cable):&lt;br /&gt;
#*No problem. The peer communicates with the next tracker after a few seconds.&lt;br /&gt;
#(i) Have a peer join the network when some trackers are down, or (ii) After the peer joined the network (primary tracker is T1 for example), bring T1 down and have the peer request some files:&lt;br /&gt;
#*No problem. Peer connects to the next tracker to join/request.&lt;br /&gt;
#Re-plug the network cable of a tracker that was the primary and has been disconnected from the network since we unplugged its cable:&lt;br /&gt;
#*The tracker starts working properly, becomes the primary, and receives the most recent database file from the tracker that acted as the primary during its absence (the cable-unplugged period).&lt;br /&gt;
#A tracker T1, which has priority over T2 for being the primary, has been down/disconnected and has just been started/re-connected. When a peer (currently connected to T2) requests to download a file:&lt;br /&gt;
#*T2 leads the peer to T1 and the peer immediately connects to T1 to ask where to download the file from.&lt;br /&gt;
#Have a peer P1 download a file F1, wait until the backup moment (when the primary tracker T1 pushes its recent database to the others), and bring the primary tracker down after that:&lt;br /&gt;
#*Tracker T2 becomes the primary, and another peer P2 downloads file F1 from peer P1 (not from the web server).&lt;br /&gt;
#Tracker T1 was the primary, but is down/disconnected now and tracker T2 is the current primary. Peer P1 downloads file F1. T1 comes back and becomes the primary, so T2 immediately pushes the most recent database to T1. Then, peer P2 requests to download F1:&lt;br /&gt;
#*P2 downloads F1 from P1.&lt;br /&gt;
#When a peer is connected to the network and suddenly all trackers go down, or when the peer's Internet connection is cut:&lt;br /&gt;
#*The peer keeps re-trying to connect. When some tracker comes back up, or when the peer is re-connected to the Internet, the peer connects to a tracker.&lt;br /&gt;
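The failover behaviour exercised by the server tests above can be sketched as follows. This is a hedged illustration of the expected peer behaviour, not pCDN code; connect and is_up are hypothetical names.&lt;br /&gt;

```python
import itertools

def connect(trackers, is_up):
    # Failover sketch (assumed behaviour per the test plan): try trackers
    # in priority order and keep retrying until one of them responds.
    for tracker in itertools.cycle(trackers):
        if is_up(tracker):
            return tracker  # the peer talks to the first live tracker

# T1 is down, so the peer ends up on the secondary tracker T2:
up = {"T1": False, "T2": True}
print(connect(["T1", "T2"], lambda t: up[t]))  # T2
```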
&lt;br /&gt;
&lt;br /&gt;
== Admin Tools ==&lt;br /&gt;
=== Server Statistics === &lt;br /&gt;
# When a peer joins the network, there are (max) 3 updates that happen in the UI:&lt;br /&gt;
#* Bottom of the screen: '#Peers' increases by 1 after the MainData Update interval, which is set at the time of Login.&lt;br /&gt;
#* Connected Peers tab: The country the peer belongs to is inserted into the left table if it is not already there. When that country is clicked, all the peers that belong to that country, along with city and region details, are shown in the right table. Also, note that the peer count for that country is incremented.&lt;br /&gt;
#* Content tab: If that peer has files, these files are inserted into the left table (if these are not already there) and the peer info is added to the corresponding file. When one clicks on a particular file, all those peers containing that file are shown in the right table.&lt;br /&gt;
# When a peer leaves the network, there are (max) 3 updates that happen in the UI:&lt;br /&gt;
#* Bottom of the screen: '#Peers' decreases by 1 after the MainData Update interval, which is set at the time of Login.&lt;br /&gt;
#* Connected Peers tab: The peer is deleted from the respective country's right table, and if that country has no remaining peers, the country is deleted from the left table. Also, note that the peer count for that country is decremented.&lt;br /&gt;
#* Content tab: This particular peer's record is coloured grey in the right table for every file (present in the left table) it contains.&lt;br /&gt;
# When a peer downloads a file, the 'Content' tab is updated: the file info is inserted into the left table if it is not already there, and the peer info is added to that file's right table.&lt;br /&gt;
# When a backup file is transferred to the tracker (that we are currently connected to) and restored, the entire 'Content' tab and 'Connected Peers' tab info is cleared and repopulated with the latest data. &lt;br /&gt;
&lt;br /&gt;
=== Geo-Fencing ===&lt;br /&gt;
# Groups Tab:&lt;br /&gt;
#* Select a row and press DEL key. Check from the database that the selected group is deleted and also all the files corresponding to the group from both 'media' table and 'groups' table (which contains the pair fileid-groupid). Also, note that this selected row is deleted from the table in the UI and also all the files corresponding to the group are deleted from the 'Media Files' tab's table. In addition, the combo box corresponding to 'groupname' in 'Content Importer' tab, is updated respectively, i.e., the deleted group is not present in the combo box list.&lt;br /&gt;
#* Select a row and double-click it. Check that it opens a Policy Window in which the group's policy is displayed correctly in the respective fields. After editing, when the 'Save' button is pressed, the corresponding group info is updated in the database and is also reflected in the 'Groups' tab. We must also be able to see the change reflected in the combo box of the 'Content Importer' tab and in the 'Media Files' tab, especially when the group's name is altered.&lt;br /&gt;
# Media Files Tab:&lt;br /&gt;
#* Select a row and press DEL key. Check from the database that the selected file is deleted from the 'media' table and also groups table, the pair groupid-fileid is to be deleted. Also, the file is removed from the table in the UI.&lt;br /&gt;
#* Select a row and change the group of a particular file by selecting any other group from the combo box in the third column of the table. That change should not only be reflected in the UI, but the database must also be updated with the new group. Check the 'groups' database table to see whether the groupid corresponding to the fileid has changed to the new one. &lt;br /&gt;
# New Policy:&lt;br /&gt;
#* Fill all respective fields in the Policy Window and press &amp;quot;Save&amp;quot; Button. Check if the database's Policy table is filled with a new entry that has just been inserted. Also, note that the group is inserted in &amp;quot;Groups&amp;quot; Tab. In addition, the combo box in the 'Content Importer' tab is also updated with the new entry. The combo box in the third column of 'Media Files' tab's table is also updated.&lt;br /&gt;
&lt;br /&gt;
=== Content Importer ===&lt;br /&gt;
# Select a group from the combo box and enter a feed URL and output feed filename and press &amp;quot;Import and Convert&amp;quot; button. &lt;br /&gt;
#* Success: A dialog pops up saying that these files are imported into the database. Check in the database's 'Media' table that these files are imported and also 'Groups' table being populated with the corresponding fileid and groupid information. Also, these files are inserted into the 'Media Files' tab's table pointing to respective group. Check the output file is created with http://localhost being inserted for every URL within that feed file.&lt;br /&gt;
#* Failure: A dialog pops up saying that these files could not be imported into the database. Check if database is the same as before.&lt;br /&gt;
# Select a group from the combo box and enter a non-feed URL and press &amp;quot;Import&amp;quot;.&lt;br /&gt;
#* Success: A dialog pops up saying that these files are imported into the database along with output URL. Check in the database's 'Media' table that this file is imported and also 'Groups' table being populated with the corresponding fileid and groupid information.&lt;br /&gt;
#* Failure: A dialog pops up saying that these files could not be imported into the database. Check if database is the same as before.&lt;br /&gt;
# Try pressing the &amp;quot;Import and Convert&amp;quot; button with at least one of the Feed-URL fields empty; a dialog box is shown with a message that both fields must be filled.&lt;br /&gt;
# Try pressing the &amp;quot;Import&amp;quot; button with the non-feed URL field empty; a dialog box is shown with a message that the field must be filled.&lt;br /&gt;
&lt;br /&gt;
=== User Administration ===&lt;br /&gt;
# Add a new user and check if it is inserted into the 'users' table in database and also in the table shown in the UI.&lt;br /&gt;
# Change the privilege level of a user using the combo box in the table and check if it is reflected in the database.&lt;br /&gt;
# Select a row in the table and press DEL key. The row must get deleted from the database and also the table.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Known Issues ==&lt;br /&gt;
# Server: the keep-alive interval must not be smaller than the time it takes to transmit the database file from one tracker to another. For example, with a value of &amp;quot;1 minute&amp;quot; for that interval, if the database file grows to hundreds of megabytes or several gigabytes (i.e., several million users), the interval must be increased to &amp;quot;10 minutes&amp;quot; or so.&lt;br /&gt;
# Client: The ''preferences'' frame silently rejects settings that are out of supported ranges. For example, max memory usage below 4 MB will be rejected without giving any warning. Due to the limitation of Java class and display space, we plan to address this in the new GUI design.&lt;br /&gt;
# Client: When a receiver loses its Internet connection, the pCDN client does not return a proper error code to podcast clients, such as iTunes. Instead, we wait until iTunes times out.&lt;br /&gt;
# Server: The server would crash (XXX need to validate this) if the RDBMS goes off-line.&lt;br /&gt;
# Monitor: If a group / file is added / edited / deleted using one application, it is not updated on all the other running monitoring applications.&lt;br /&gt;
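The keep-alive sizing rule in issue 1 above can be checked with a back-of-the-envelope calculation. This is a sketch with assumed numbers; min_keepalive_seconds and the 2x safety factor are illustrative, not part of pCDN.&lt;br /&gt;

```python
def min_keepalive_seconds(db_bytes, link_bytes_per_sec, safety=2.0):
    # The keep-alive interval must exceed the tracker-to-tracker database
    # transfer time; 'safety' adds headroom (an assumed factor).
    return safety * db_bytes / link_bytes_per_sec

# Example: a 500 MB database over a 12.5 MB/s (about 100 Mbit/s) link.
secs = min_keepalive_seconds(500 * 1024 * 1024, 12.5 * 1024 * 1024)
print(round(secs))  # 80 -> a 1-minute keep-alive would be too short
```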
&lt;br /&gt;
&lt;br /&gt;
*The following are also posted to Bugzilla (can be removed from here, but not suggested):&lt;br /&gt;
&lt;br /&gt;
#Client: When we stop downloading a file in iTunes, the pCDN client still continues downloading.&lt;br /&gt;
#*A user might have stopped the download in order to save his bandwidth for some other applications, but the pCDN client prevents that saving.&lt;br /&gt;
#Server: Each tracker has a settings.ini file, which must be the same for all trackers. Keeping this file in a centralized location, such as on a file server, may be helpful, because modifying the file on one tracker may cause the entire system to malfunction. &lt;br /&gt;
#The directory containing the media files a peer has stored grows larger over time. There should be a mechanism to delete unnecessary files. [Cheng 08/03/01: There is a max disk usage setting in the client GUI, which removes the least recently received files. Note, however, that the recency is not persistent, so the received time of each file is lost after the pCDN client reboots.]&lt;/div&gt;</summary>
		<author><name>Nitin.chiluka</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=pCDN:Testplan&amp;diff=1577</id>
		<title>pCDN:Testplan</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=pCDN:Testplan&amp;diff=1577"/>
		<updated>2008-03-04T20:54:15Z</updated>

		<summary type="html">&lt;p&gt;Nitin.chiluka: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Client ==&lt;br /&gt;
# Download several new podcasts from the web servers (make sure the media/ directory contains no media files before this test). &lt;br /&gt;
# Download new podcasts from several pCDN senders.&lt;br /&gt;
# Download podcasts that were downloaded before (and still in media/ directory).&lt;br /&gt;
# Reduce the hard-disk quota to 50 MB using the user interface, and download several podcasts.&lt;br /&gt;
# Reduce the memory quota to 4 MB, and download several podcasts.&lt;br /&gt;
# Download new podcasts from several senders and unplug the Ethernet cable from one of the senders.&lt;br /&gt;
# Download new podcasts from several senders and unplug the Ethernet cables from all of the senders.&lt;br /&gt;
# Download new podcasts from several senders and unplug the Ethernet cable from the receiver. &lt;br /&gt;
# Manually truncate a media file in the media/ directory, and download that file from the podcast client.&lt;br /&gt;
# Manually truncate a media file in the media/ directory, and request that file from another pCDN client.&lt;br /&gt;
# Download several new podcasts using a pCDN client behind a NAT box. Check whether the IP/port are correctly reported at the pCDN server. Also try to use this pCDN client as the sender.&lt;br /&gt;
# Download several new podcasts using a pCDN client on a machine with multiple IPs. Use the user interface to select an outgoing IP. Then, change the NICs' IP addresses, and see whether the pCDN client continues to work after reboots.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Server ==&lt;br /&gt;
Note: In case a tracker gets disconnected, re-connection suffices (e.g., re-plugging the cable if the cable was unplugged). There is no need to re-start the tracker.&lt;br /&gt;
&lt;br /&gt;
# When trackers are up and connected to each other:&lt;br /&gt;
#*peers download files from each other as expected.&lt;br /&gt;
# Have a peer download podcasts, and then bring the current primary tracker down (or unplug its network cable):&lt;br /&gt;
#*No problem. The peer communicates with the next tracker after a few seconds.&lt;br /&gt;
#(i) Have a peer join the network when some trackers are down, or (ii) After the peer joined the network (primary tracker is T1 for example), bring T1 down and have the peer request some files:&lt;br /&gt;
#*No problem. Peer connects to the next tracker to join/request.&lt;br /&gt;
#Re-plug the network cable of a tracker that was the primary and has been disconnected from the network since we unplugged its cable:&lt;br /&gt;
#*The tracker starts working properly, becomes the primary, and receives the most recent database file from the tracker that acted as the primary during its absence (the cable-unplugged period).&lt;br /&gt;
#A tracker T1, which has priority over T2 for being the primary, has been down/disconnected and has just been started/re-connected. When a peer (currently connected to T2) requests to download a file:&lt;br /&gt;
#*T2 leads the peer to T1 and the peer immediately connects to T1 to ask where to download the file from.&lt;br /&gt;
#Have a peer P1 download a file F1, wait until the backup moment (when the primary tracker T1 pushes its recent database to the others), and bring the primary tracker down after that:&lt;br /&gt;
#*Tracker T2 becomes the primary, and another peer P2 downloads file F1 from peer P1 (not from the web server).&lt;br /&gt;
#Tracker T1 was the primary, but is down/disconnected now and tracker T2 is the current primary. Peer P1 downloads file F1. T1 comes back and becomes the primary, so T2 immediately pushes the most recent database to T1. Then, peer P2 requests to download F1:&lt;br /&gt;
#*P2 downloads F1 from P1.&lt;br /&gt;
#When a peer is connected to the network and suddenly all trackers go down, or when the peer's Internet connection is cut:&lt;br /&gt;
#*The peer keeps re-trying to connect. When some tracker comes back up, or when the peer is re-connected to the Internet, the peer connects to a tracker.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Admin Tools ==&lt;br /&gt;
=== Server Statistics === &lt;br /&gt;
# When a peer joins the network, there are (max) 3 updates that happen in the UI:&lt;br /&gt;
#* Bottom of the screen: '#Peers' increases by 1 after the MainData Update interval, which is set at the time of Login.&lt;br /&gt;
#* Connected Peers tab: The country the peer belongs to is inserted into the left table if it is not already there. When that country is clicked, all the peers that belong to that country, along with city and region details, are shown in the right table. Also, note that the peer count for that country is incremented.&lt;br /&gt;
#* Content tab: If that peer has files, these files are inserted into the left table (if these are not already there) and the peer info is added to the corresponding file. When one clicks on a particular file, all those peers containing that file are shown in the right table.&lt;br /&gt;
# When a peer leaves the network, there are (max) 3 updates that happen in the UI:&lt;br /&gt;
#* Bottom of the screen: '#Peers' decreases by 1 after the MainData Update interval, which is set at the time of Login.&lt;br /&gt;
#* Connected Peers tab: The peer is deleted from the respective country's right table, and if that country has no remaining peers, the country is deleted from the left table. Also, note that the peer count for that country is decremented.&lt;br /&gt;
#* Content tab: This particular peer's record is coloured grey in the right table for every file (present in the left table) it contains.&lt;br /&gt;
# When a peer downloads a file, the 'Content' tab is updated: the file info is inserted into the left table if it is not already there, and the peer info is added to that file's right table.&lt;br /&gt;
# When a backup file is transferred to the tracker (that we are currently connected to) and restored, the entire 'Content' tab and 'Connected Peers' tab info is cleared and repopulated with the latest data. &lt;br /&gt;
&lt;br /&gt;
=== Geo-Fencing ===&lt;br /&gt;
# Groups Tab:&lt;br /&gt;
#* Select a row and press DEL key. Check from the database that the selected group is deleted and also all the files corresponding to the group from both 'media' table and 'groups' table (which contains the pair fileid-groupid). Also, note that this selected row is deleted from the table in the UI and also all the files corresponding to the group are deleted from the 'Media Files' tab's table. In addition, the combo box corresponding to 'groupname' in 'Content Importer' tab, is updated respectively, i.e., the deleted group is not present in the combo box list.&lt;br /&gt;
#* Select a row and double-click it. Check that it opens a Policy Window in which the group's policy is displayed correctly in the respective fields. After editing, when the 'Save' button is pressed, the corresponding group info is updated in the database and is also reflected in the 'Groups' tab. We must also be able to see the change reflected in the combo box of the 'Content Importer' tab and in the 'Media Files' tab, especially when the group's name is altered.&lt;br /&gt;
# Media Files Tab:&lt;br /&gt;
#* Select a row and press DEL key. Check from the database that the selected file is deleted from the 'media' table and also groups table, the pair groupid-fileid is to be deleted. Also, the file is removed from the table in the UI.&lt;br /&gt;
#* Select a row and change the group of a particular file by selecting any other group from the combo box in the third column of the table. That change should not only be reflected in the UI, but the database must also be updated with the new group. Check the 'groups' database table to see whether the groupid corresponding to the fileid has changed to the new one. &lt;br /&gt;
# New Policy:&lt;br /&gt;
#* Fill all respective fields in the Policy Window and press &amp;quot;Save&amp;quot; Button. Check if the database's Policy table is filled with a new entry that has just been inserted. Also, note that the group is inserted in &amp;quot;Groups&amp;quot; Tab. In addition, the combo box in the 'Content Importer' tab is also updated with the new entry. The combo box in the third column of 'Media Files' tab's table is also updated.&lt;br /&gt;
&lt;br /&gt;
=== Content Importer ===&lt;br /&gt;
# Select a group from the combo box and enter a feed URL and output feed filename and press &amp;quot;Import and Convert&amp;quot; button. &lt;br /&gt;
#* Success: A dialog pops up saying that these files are imported into the database. Check in the database's 'Media' table that these files are imported and also 'Groups' table being populated with the corresponding fileid and groupid information. Also, these files are inserted into the 'Media Files' tab's table pointing to respective group. Check the output file is created with http://localhost being inserted for every URL within that feed file.&lt;br /&gt;
#* Failure: A dialog pops up saying that these files could not be imported into the database. Check if database is the same as before.&lt;br /&gt;
# Select a group from the combo box and enter a non-feed URL and press &amp;quot;Import&amp;quot;.&lt;br /&gt;
#* Success: A dialog pops up saying that these files are imported into the database along with output URL. Check in the database's 'Media' table that this file is imported and also 'Groups' table being populated with the corresponding fileid and groupid information.&lt;br /&gt;
#* Failure: A dialog pops up saying that these files could not be imported into the database. Check if database is the same as before.&lt;br /&gt;
# Try pressing the &amp;quot;Import and Convert&amp;quot; button with at least one of the Feed-URL fields empty; a dialog box is shown with a message that both fields must be filled.&lt;br /&gt;
# Try pressing the &amp;quot;Import&amp;quot; button with the non-feed URL field empty; a dialog box is shown with a message that the field must be filled.&lt;br /&gt;
&lt;br /&gt;
=== User Administration ===&lt;br /&gt;
# Add a new user and check if it is inserted into the 'users' table in database and also in the table shown in the UI.&lt;br /&gt;
# Change the privilege level of a user using the combo box in the table and check if it is reflected in the database.&lt;br /&gt;
# Select a row in the table and press DEL key. The row must get deleted from the database and also the table.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Known Issues ==&lt;br /&gt;
# Server: the keep-alive interval must not be smaller than the time it takes to transmit the database file from one tracker to another. For example, with a value of &amp;quot;1 minute&amp;quot; for that interval, if the database file grows to hundreds of megabytes or several gigabytes (i.e., several million users), the interval must be increased to &amp;quot;10 minutes&amp;quot; or so.&lt;br /&gt;
# Client: The ''preferences'' frame silently rejects settings that are out of supported ranges. For example, max memory usage below 4 MB will be rejected without giving any warning. Due to the limitation of Java class and display space, we plan to address this in the new GUI design.&lt;br /&gt;
# Client: When a receiver loses its Internet connection, the pCDN client does not return a proper error code to podcast clients, such as iTunes. Instead, we wait until iTunes times out.&lt;br /&gt;
# Server: The server would crash (XXX need to validate this) if the RDBMS goes off-line.&lt;br /&gt;
# Monitor: If a group / file is added / edited / deleted using Content Manager, it is not updated on all the other running monitoring applications.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*The following are also posted to Bugzilla (they could be removed from here, but we suggest keeping them):&lt;br /&gt;
&lt;br /&gt;
#Client: When we stop downloading a file in iTunes, the pCDN client still continues downloading.&lt;br /&gt;
#*A user might have stopped the download to save bandwidth for other applications, but the pCDN client defeats that saving.&lt;br /&gt;
#Server: Each tracker has a file settings.ini, which must be the same for all trackers. Keeping this file in a central location, such as on a file server, may be helpful, because modifying the file on one tracker can cause the entire system to malfunction.&lt;br /&gt;
#The directory of media files a peer has stored grows over time. There should be a mechanism to delete unneeded files. [Cheng 08/03/01, There is a max disk usage setting in the client GUI, which removes the least recently received files. Note, however, that the recency is not persistent, so the received time of each file is lost after restarting pCDN clients.]&lt;/div&gt;</summary>
		<author><name>Nitin.chiluka</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=pCDN:Testplan&amp;diff=1575</id>
		<title>pCDN:Testplan</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=pCDN:Testplan&amp;diff=1575"/>
		<updated>2008-03-04T20:52:58Z</updated>

		<summary type="html">&lt;p&gt;Nitin.chiluka: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Client ==&lt;br /&gt;
# Download several new podcasts from the web servers (make sure the media/ directory contains no media files before this test). &lt;br /&gt;
# Download new podcasts from several pCDN senders.&lt;br /&gt;
# Download podcasts that were downloaded before (and still in media/ directory).&lt;br /&gt;
# Reduce the hard-disk quota to 50 MB using the user interface, and download several podcasts.&lt;br /&gt;
# Reduce the memory quota to 4 MB, and download several podcasts.&lt;br /&gt;
# Download new podcasts from several senders and unplug the Ethernet cable from one of the senders.&lt;br /&gt;
# Download new podcasts from several senders and unplug the Ethernet cables from all of the senders.&lt;br /&gt;
# Download new podcasts from several senders and unplug the Ethernet cable from the receiver. &lt;br /&gt;
# Manually truncate a media file in the media/ directory, and download that file from the podcast client.&lt;br /&gt;
# Manually truncate a media file in the media/ directory, and request that file from another pCDN client.&lt;br /&gt;
# Download several new podcasts using a pCDN client behind a NAT box. Check whether the IP/port are correctly reported at the pCDN server. Also try to use this pCDN client as the sender.&lt;br /&gt;
# Download several new podcasts using a pCDN client on a machine with multiple IPs. Use the user interface to select an outgoing IP. Then, change the NICs' IP addresses, and see whether the pCDN client continues to work after a reboot.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Server ==&lt;br /&gt;
Note: In case a tracker gets disconnected, re-connection suffices (e.g., re-plugging the cable if the cable was unplugged). There is no need to re-start the tracker.&lt;br /&gt;
&lt;br /&gt;
# When trackers are up and connected to each other:&lt;br /&gt;
#*peers download files from each other as expected.&lt;br /&gt;
# Have a peer download podcasts, and then bring the current primary tracker down (or unplug its network cable):&lt;br /&gt;
#*No problem. The peer communicates with the next tracker after a few seconds.&lt;br /&gt;
#(i) Have a peer join the network when some trackers are down, or (ii) After the peer joined the network (primary tracker is T1 for example), bring T1 down and have the peer request some files:&lt;br /&gt;
#*No problem. Peer connects to the next tracker to join/request.&lt;br /&gt;
#Re-plug the network cable of a tracker that was the primary and has been disconnected from the network since we unplugged its cable:&lt;br /&gt;
#*The tracker resumes working properly, becomes the primary, and receives the most recent database file from the tracker that was the primary during the absence (cable-unplugged) period.&lt;br /&gt;
#A tracker T1, which precedes T2 in the priority order for being the primary, has been down/disconnected and has just been started/re-connected. When a peer (currently connected to T2) requests to download a file:&lt;br /&gt;
#*T2 leads the peer to T1 and the peer immediately connects to T1 to ask where to download the file from.&lt;br /&gt;
#Have a peer P1 download a file F1, wait until the backup moment (when the primary tracker T1 pushes its recent database to the others), and bring the primary tracker down after that:&lt;br /&gt;
#*Tracker T2 becomes the primary and P2 downloads file F1 from peer P1 (not from the web server).&lt;br /&gt;
#Tracker T1 was the primary, but is down/disconnected now and tracker T2 is the current primary. Peer P1 downloads file F1. T1 comes back and becomes the primary, so T2 immediately pushes the most recent database to T1. Then, peer P2 requests to download F1:&lt;br /&gt;
#*P2 downloads F1 from P1.&lt;br /&gt;
#When a peer is connected to the network and suddenly all trackers die, or the peer's Internet connection is cut:&lt;br /&gt;
#*The peer keeps retrying to connect. When a tracker comes up, or when the peer is re-connected to the Internet, the peer connects to a tracker.&lt;br /&gt;
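The failover behaviour exercised in the tests above can be summarised in a small sketch. The function name and the priority-ordered tracker list are hypothetical illustrations, not taken from the pCDN code:&lt;br /&gt;

```python
def pick_tracker(trackers, down):
    # Hypothetical sketch of the peer's failover rule: trackers are
    # tried in priority order (T1 before T2, ...), and the first one
    # not known to be down becomes the peer's current tracker.
    for t in trackers:
        if t not in down:
            return t
    # All trackers are down: the peer keeps retrying from the top.
    return None
```

For example, with trackers T1 and T2 and T1 down, the peer talks to T2; once T1 comes back and is detected, it takes over again as the primary.&lt;br /&gt;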
&lt;br /&gt;
&lt;br /&gt;
== Admin Tools ==&lt;br /&gt;
=== Server Statistics === &lt;br /&gt;
# When a peer joins the network, there are (max) 3 updates that happen in the UI:&lt;br /&gt;
#* Bottom of the screen: '#Peers' increases by 1 after the MainData update interval, which is set at login time.&lt;br /&gt;
#* Connected Peers tab: The country the peer belongs to is inserted into the left table if it is not already there. When that country is clicked, all the peers belonging to that country, along with city and region details, are shown in the right table. Also note that the peer count for that country increments.&lt;br /&gt;
#* Content tab: If that peer has files, these files are inserted into the left table (if these are not already there) and the peer info is added to the corresponding file. When one clicks on a particular file, all those peers containing that file are shown in the right table.&lt;br /&gt;
# When a peer leaves the network, there are (max) 3 updates that happen in the UI:&lt;br /&gt;
#* Bottom of the screen: '#Peers' decreases by 1 after the MainData update interval, which is set at login time.&lt;br /&gt;
#* Connected Peers tab: The peer is deleted from the respective country's right table, and if the country has no peers left, the country is deleted from the left table. Also note that the peer count for that country decrements.&lt;br /&gt;
#* Content tab: This particular peer's record is coloured grey in the right table for every file (present in the left table) it contains.&lt;br /&gt;
# When a peer downloads a file, the 'Content' tab is updated: the file info is inserted into the left table if it is not already there, and the peer info is added to the corresponding file's right table.&lt;br /&gt;
# When the backup file is transferred to the tracker we are currently connected to and restored, the entire 'Content' tab and 'Connected Peers' tab info is cleared and repopulated with the latest data.&lt;br /&gt;
&lt;br /&gt;
=== Geo-Fencing ===&lt;br /&gt;
# Groups Tab:&lt;br /&gt;
#* Select a row and press the DEL key. Check in the database that the selected group is deleted, and that all the files belonging to the group are deleted from both the 'media' table and the 'groups' table (which contains the fileid-groupid pairs). Also note that the selected row is deleted from the table in the UI, and that all the files belonging to the group are deleted from the 'Media Files' tab's table. In addition, the 'groupname' combo box in the 'Content Importer' tab is updated accordingly, i.e., the deleted group is no longer present in the combo box list.&lt;br /&gt;
#* Select a row and double-click it. Check that it opens a Policy Window in which the group's policy is displayed correctly in the respective fields. After editing, when the 'Save' button is pressed, the corresponding group info is updated in the database and is also reflected in the 'Groups' tab. We must also be able to see the change reflected in the combo box of the 'Content Importer' tab and in the 'Media Files' tab, especially when the group name is altered.&lt;br /&gt;
# Media Files Tab:&lt;br /&gt;
#* Select a row and press the DEL key. Check in the database that the selected file is deleted from the 'media' table, and that the corresponding groupid-fileid pair is deleted from the 'groups' table. Also, the file is removed from the table in the UI.&lt;br /&gt;
#* Select a row and change the group of a particular file by selecting another group from the combo box in the third column of the table. The change should not only be reflected in the UI; the database must also be updated with the new group. Check the 'groups' database table to see that the groupid corresponding to the fileid has changed to the new one.&lt;br /&gt;
# New Policy:&lt;br /&gt;
#* Fill in all the respective fields in the Policy Window and press the &amp;quot;Save&amp;quot; button. Check that a new entry has been inserted into the database's Policy table. Also note that the group is inserted into the &amp;quot;Groups&amp;quot; tab. In addition, the combo box in the 'Content Importer' tab is updated with the new entry, as is the combo box in the third column of the 'Media Files' tab's table.&lt;br /&gt;
&lt;br /&gt;
=== Content Importer ===&lt;br /&gt;
# Select a group from the combo box and enter a feed URL and output feed filename and press &amp;quot;Import and Convert&amp;quot; button. &lt;br /&gt;
#* Success: A dialog pops up saying that the files were imported into the database. Check in the database's 'Media' table that these files were imported, and that the 'Groups' table is populated with the corresponding fileid and groupid information. Also, these files are inserted into the 'Media Files' tab's table, pointing to the respective group. Check that the output file is created with http://localhost substituted into every URL within that feed file.&lt;br /&gt;
#* Failure: A dialog pops up saying that the files could not be imported into the database. Check that the database is unchanged.&lt;br /&gt;
# Select a group from the combo box and enter a non-feed URL and press &amp;quot;Import&amp;quot;.&lt;br /&gt;
#* Success: A dialog pops up saying that the file was imported into the database, along with the output URL. Check in the database's 'Media' table that the file was imported, and that the 'Groups' table is populated with the corresponding fileid and groupid information.&lt;br /&gt;
#* Failure: A dialog pops up saying that the file could not be imported into the database. Check that the database is unchanged.&lt;br /&gt;
# Try pressing the &amp;quot;Import and Convert&amp;quot; button with at least one of the feed-URL fields left empty; a dialog box should appear with a message that both fields must be filled.&lt;br /&gt;
# Try pressing the &amp;quot;Import&amp;quot; button with the non-feed URL field left empty; a dialog box should appear with a message that the field must be filled.&lt;br /&gt;
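The &amp;quot;Import and Convert&amp;quot; check above (every URL in the output feed pointing at http://localhost) can be approximated with a one-line rewrite. This regex-based sketch is our illustration, not the actual importer code:&lt;br /&gt;

```python
import re

def rewrite_feed(feed_text):
    # Hypothetical sketch: replace the host part of every http URL
    # with localhost, keeping the path so the pCDN client can map
    # the request back to the original file.
    return re.sub(r"http://[^/\s]+/", "http://localhost/", feed_text)
```
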
&lt;br /&gt;
=== User Administration ===&lt;br /&gt;
# Add a new user and check that it is inserted into the database and also appears in the table shown below.&lt;br /&gt;
# Change the privilege level of a user using the combo box in the table and check if it is reflected in the database.&lt;br /&gt;
# Select a row in the table and press DEL key. The row must get deleted from the database and also the table.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Known Issues ==&lt;br /&gt;
# Server: the keep-alive interval must not be smaller than the time it takes to transmit the database file from one tracker to another. For example, with a value of &amp;quot;1 minute&amp;quot; for that interval, if the database file grows to hundreds of megabytes or a few gigabytes (i.e., millions of users), the interval must be increased to &amp;quot;10 minutes&amp;quot; or so.&lt;br /&gt;
# Client: The ''preferences'' frame silently rejects settings that are out of the supported ranges. For example, a max memory usage below 4 MB will be rejected without any warning. Due to limitations of the Java class and display space, we plan to address this in the new GUI design.&lt;br /&gt;
# Client: When a receiver loses its Internet connection, the pCDN client does not return a proper error code to podcast clients, such as iTunes. Instead, we wait until iTunes times out.&lt;br /&gt;
# Server: Server would crash (XXX need to validate this), if the RDBMS goes off-line.&lt;br /&gt;
# Monitor: If a group / file is added / edited / deleted using Content Manager, it is not updated on all the other running monitoring applications.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*The following are also posted to Bugzilla (they could be removed from here, but we suggest keeping them):&lt;br /&gt;
&lt;br /&gt;
#Client: When we stop downloading a file in iTunes, the pCDN client still continues downloading.&lt;br /&gt;
#*A user might have stopped the download to save bandwidth for other applications, but the pCDN client defeats that saving.&lt;br /&gt;
#Server: Each tracker has a file settings.ini, which must be the same for all trackers. Keeping this file in a central location, such as on a file server, may be helpful, because modifying the file on one tracker can cause the entire system to malfunction.&lt;br /&gt;
#The directory of media files a peer has stored grows over time. There should be a mechanism to delete unneeded files. [Cheng 08/03/01, There is a max disk usage setting in the client GUI, which removes the least recently received files. Note, however, that the recency is not persistent, so the received time of each file is lost after restarting pCDN clients.]&lt;/div&gt;</summary>
		<author><name>Nitin.chiluka</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=pCDN:Testplan&amp;diff=1573</id>
		<title>pCDN:Testplan</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=pCDN:Testplan&amp;diff=1573"/>
		<updated>2008-03-04T20:37:26Z</updated>

		<summary type="html">&lt;p&gt;Nitin.chiluka: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Client ==&lt;br /&gt;
# Download several new podcasts from the web servers (make sure the media/ directory contains no media files before this test). &lt;br /&gt;
# Download new podcasts from several pCDN senders.&lt;br /&gt;
# Download podcasts that were downloaded before (and still in media/ directory).&lt;br /&gt;
# Reduce the hard-disk quota to 50 MB using the user interface, and download several podcasts.&lt;br /&gt;
# Reduce the memory quota to 4 MB, and download several podcasts.&lt;br /&gt;
# Download new podcasts from several senders and unplug the Ethernet cable from one of the senders.&lt;br /&gt;
# Download new podcasts from several senders and unplug the Ethernet cables from all of the senders.&lt;br /&gt;
# Download new podcasts from several senders and unplug the Ethernet cable from the receiver. &lt;br /&gt;
# Manually truncate a media file in the media/ directory, and download that file from the podcast client.&lt;br /&gt;
# Manually truncate a media file in the media/ directory, and request that file from another pCDN client.&lt;br /&gt;
# Download several new podcasts using a pCDN client behind a NAT box. Check whether the IP/port are correctly reported at the pCDN server. Also try to use this pCDN client as the sender.&lt;br /&gt;
# Download several new podcasts using a pCDN client on a machine with multiple IPs. Use the user interface to select an outgoing IP. Then, change the NICs' IP addresses, and see whether the pCDN client continues to work after a reboot.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Server ==&lt;br /&gt;
Note: In case a tracker gets disconnected, re-connection suffices (e.g., re-plugging the cable if the cable was unplugged). There is no need to re-start the tracker.&lt;br /&gt;
&lt;br /&gt;
# When trackers are up and connected to each other:&lt;br /&gt;
#*peers download files from each other as expected.&lt;br /&gt;
# Have a peer download podcasts, and then bring the current primary tracker down (or unplug its network cable):&lt;br /&gt;
#*No problem. The peer communicates with the next tracker after a few seconds.&lt;br /&gt;
#(i) Have a peer join the network when some trackers are down, or (ii) After the peer joined the network (primary tracker is T1 for example), bring T1 down and have the peer request some files:&lt;br /&gt;
#*No problem. Peer connects to the next tracker to join/request.&lt;br /&gt;
#Re-plug the network cable of a tracker that was the primary and has been disconnected from the network since we unplugged its cable:&lt;br /&gt;
#*The tracker resumes working properly, becomes the primary, and receives the most recent database file from the tracker that was the primary during the absence (cable-unplugged) period.&lt;br /&gt;
#A tracker T1, which precedes T2 in the priority order for being the primary, has been down/disconnected and has just been started/re-connected. When a peer (currently connected to T2) requests to download a file:&lt;br /&gt;
#*T2 leads the peer to T1 and the peer immediately connects to T1 to ask where to download the file from.&lt;br /&gt;
#Have a peer P1 download a file F1, wait until the backup moment (when the primary tracker T1 pushes its recent database to the others), and bring the primary tracker down after that:&lt;br /&gt;
#*Tracker T2 becomes the primary and P2 downloads file F1 from peer P1 (not from the web server).&lt;br /&gt;
#Tracker T1 was the primary, but is down/disconnected now and tracker T2 is the current primary. Peer P1 downloads file F1. T1 comes back and becomes the primary, so T2 immediately pushes the most recent database to T1. Then, peer P2 requests to download F1:&lt;br /&gt;
#*If T1 was brought down by Ctrl+C and is started again now: P2 downloads F1 from P1.&lt;br /&gt;
#*If T1 was disconnected by unplugging its network cable: when P2 now requests the file, the balloon “Sorry, you are not permitted to download the file” is shown (XXX).&lt;br /&gt;
#When a peer is connected to the network and suddenly all trackers die:&lt;br /&gt;
#*The peer keeps retrying to connect; when a tracker comes up, the peer connects to it.&lt;br /&gt;
#Run trackers T1 and T2, where T1 is the primary, and a peer is connected to the network. Stop T1, so T2 becomes the primary, and the peer connects to T2 after a short time. Start T1 again. Now, for a few seconds T2 still thinks it is the primary. Immediately unplug T2’s cable.&lt;br /&gt;
#*T1 considers itself the primary, and the peer downloads files through T1.&lt;br /&gt;
#*After some seconds, T2 learns that it is the primary. The peer is able to download files through T2.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Admin Tools ==&lt;br /&gt;
=== Server Statistics === &lt;br /&gt;
# When a peer joins the network, there are (max) 3 updates that happen in the UI:&lt;br /&gt;
#* Bottom of the screen: '#Peers' increases by 1 after the MainData update interval, which is set at login time.&lt;br /&gt;
#* Connected Peers tab: The country the peer belongs to is inserted into the left table if it is not already there. When that country is clicked, all the peers belonging to that country, along with city and region details, are shown in the right table. Also note that the peer count for that country increments.&lt;br /&gt;
#* Content tab: If that peer has files, these files are inserted into the left table (if these are not already there) and the peer info is added to the corresponding file. When one clicks on a particular file, all those peers containing that file are shown in the right table.&lt;br /&gt;
# When a peer leaves the network, there are (max) 3 updates that happen in the UI:&lt;br /&gt;
#* Bottom of the screen: '#Peers' decreases by 1 after the MainData update interval, which is set at login time.&lt;br /&gt;
#* Connected Peers tab: The peer is deleted from the respective country's right table, and if the country has no peers left, the country is deleted from the left table. Also note that the peer count for that country decrements.&lt;br /&gt;
#* Content tab: This particular peer's record is coloured grey in the right table for every file (present in the left table) it contains.&lt;br /&gt;
# When a peer downloads a file, the 'Content' tab is updated: the file info is inserted into the left table if it is not already there, and the peer info is added to the corresponding file's right table.&lt;br /&gt;
# When the backup file is transferred to the tracker we are currently connected to and restored, the entire 'Content' tab and 'Connected Peers' tab info is cleared and repopulated with the latest data.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Geo-Fencing ===&lt;br /&gt;
# Groups Tab:&lt;br /&gt;
#* Select a row and press the DEL key. Check in the database that the selected group is deleted, and that all the files belonging to the group are deleted from both the 'media' table and the 'groups' table (which contains the fileid-groupid pairs). Also note that the selected row is deleted from the table in the UI, and that all the files belonging to the group are deleted from the 'Media Files' tab's table. In addition, the 'groupname' combo box in the 'Content Importer' tab is updated accordingly, i.e., the deleted group is no longer present in the combo box list.&lt;br /&gt;
#* Select a row and double-click it. Check that it opens a Policy Window in which the group's policy is displayed correctly in the respective fields. After editing, when the 'Save' button is pressed, the corresponding group info is updated in the database and is also reflected in the 'Groups' tab. We must also be able to see the change reflected in the combo box of the 'Content Importer' tab and in the 'Media Files' tab, especially when the group name is altered.&lt;br /&gt;
# Media Files Tab:&lt;br /&gt;
#* Select a row and press the DEL key. Check in the database that the selected file is deleted from the 'media' table, and that the corresponding groupid-fileid pair is deleted from the 'groups' table. Also, the file is removed from the table in the UI.&lt;br /&gt;
#* Select a row and change the group of a particular file by selecting another group from the combo box in the third column of the table. The change should not only be reflected in the UI; the database must also be updated with the new group. Check the 'groups' database table to see that the groupid corresponding to the fileid has changed to the new one.&lt;br /&gt;
# New Policy:&lt;br /&gt;
#* Fill in all the respective fields in the Policy Window and press the &amp;quot;Save&amp;quot; button. Check that a new entry has been inserted into the database's Policy table. Also note that the group is inserted into the &amp;quot;Groups&amp;quot; tab. In addition, the combo box in the 'Content Importer' tab is updated with the new entry, as is the combo box in the third column of the 'Media Files' tab's table.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Content Importer ===&lt;br /&gt;
# Select a group from the combo box and enter a feed URL and output feed filename and press &amp;quot;Import and Convert&amp;quot; button. &lt;br /&gt;
#* Success: A dialog pops up saying that the files were imported into the database. Check in the database's 'Media' table that these files were imported, and that the 'Groups' table is populated with the corresponding fileid and groupid information. Also, these files are inserted into the 'Media Files' tab's table, pointing to the respective group. Check that the output file is created with http://localhost substituted into every URL within that feed file.&lt;br /&gt;
#* Failure: A dialog pops up saying that the files could not be imported into the database. Check that the database is unchanged.&lt;br /&gt;
# Select a group from the combo box and enter a non-feed URL and press &amp;quot;Import&amp;quot;.&lt;br /&gt;
#* Success: A dialog pops up saying that the file was imported into the database, along with the output URL. Check in the database's 'Media' table that the file was imported, and that the 'Groups' table is populated with the corresponding fileid and groupid information.&lt;br /&gt;
#* Failure: A dialog pops up saying that the file could not be imported into the database. Check that the database is unchanged.&lt;br /&gt;
# Try pressing the &amp;quot;Import and Convert&amp;quot; button with at least one of the feed-URL fields left empty; a dialog box should appear with a message that both fields must be filled.&lt;br /&gt;
# Try pressing the &amp;quot;Import&amp;quot; button with the non-feed URL field left empty; a dialog box should appear with a message that the field must be filled.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== User Administration ===&lt;br /&gt;
# Add a new user and check that it is inserted into the database and also appears in the table shown below.&lt;br /&gt;
# Change the privilege level of a user using the combo box in the table and check if it is reflected in the database.&lt;br /&gt;
# Select a row in the table and press DEL key. The row must get deleted from the database and also the table.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Known Issues ==&lt;br /&gt;
# Server: the keep-alive interval must not be smaller than the time it takes to transmit the database file from one tracker to another. For example, with a value of &amp;quot;1 minute&amp;quot; for that interval, if the database file grows to hundreds of megabytes or a few gigabytes (i.e., millions of users), the interval must be increased to &amp;quot;10 minutes&amp;quot; or so.&lt;br /&gt;
# Client: The ''preferences'' frame silently rejects settings that are out of the supported ranges. For example, a max memory usage below 4 MB will be rejected without any warning. Due to limitations of the Java class and display space, we plan to address this in the new GUI design.&lt;br /&gt;
# Client: When a receiver loses its Internet connection, the pCDN client does not return a proper error code to podcast clients, such as iTunes. Instead, we wait until iTunes times out.&lt;br /&gt;
# Server: Server would crash (XXX need to validate this), if the RDBMS goes off-line.&lt;br /&gt;
# Monitor: If a group / file is added / edited / deleted using Content Manager, it is not updated on all the other running monitoring applications.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*The following are also posted to Bugzilla (they could be removed from here, but we suggest keeping them):&lt;br /&gt;
&lt;br /&gt;
#Client: When we stop downloading a file in iTunes, the pCDN client still continues downloading.&lt;br /&gt;
#*A user might have stopped the download to save bandwidth for other applications, but the pCDN client defeats that saving.&lt;br /&gt;
#Server: Each tracker has a file settings.ini, which must be the same for all trackers. Keeping this file in a central location, such as on a file server, may be helpful, because modifying the file on one tracker can cause the entire system to malfunction.&lt;br /&gt;
#The directory of media files a peer has stored grows over time. There should be a mechanism to delete unneeded files. [Cheng 08/03/01, There is a max disk usage setting in the client GUI, which removes the least recently received files. Note, however, that the recency is not persistent, so the received time of each file is lost after restarting pCDN clients.]&lt;/div&gt;</summary>
		<author><name>Nitin.chiluka</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=pCDN:Testplan&amp;diff=1572</id>
		<title>pCDN:Testplan</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=pCDN:Testplan&amp;diff=1572"/>
		<updated>2008-03-04T20:33:30Z</updated>

		<summary type="html">&lt;p&gt;Nitin.chiluka: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Client ==&lt;br /&gt;
# Download several new podcasts from the web servers (make sure the media/ directory contains no media files before this test). &lt;br /&gt;
# Download new podcasts from several pCDN senders.&lt;br /&gt;
# Download podcasts that were downloaded before (and still in media/ directory).&lt;br /&gt;
# Reduce the hard-disk quota to 50 MB using the user interface, and download several podcasts.&lt;br /&gt;
# Reduce the memory quota to 4 MB, and download several podcasts.&lt;br /&gt;
# Download new podcasts from several senders and unplug the Ethernet cable from one of the senders.&lt;br /&gt;
# Download new podcasts from several senders and unplug the Ethernet cables from all of the senders.&lt;br /&gt;
# Download new podcasts from several senders and unplug the Ethernet cable from the receiver. &lt;br /&gt;
# Manually truncate a media file in the media/ directory, and download that file from the podcast client.&lt;br /&gt;
# Manually truncate a media file in the media/ directory, and request that file from another pCDN client.&lt;br /&gt;
# Download several new podcasts using a pCDN client behind a NAT box. Check whether the IP/port are correctly reported at the pCDN server. Also try to use this pCDN client as the sender.&lt;br /&gt;
# Download several new podcasts using a pCDN client on a machine with multiple IPs. Use the user interface to select an outgoing IP. Then, change the NICs' IP addresses, and see whether the pCDN client continues to work after a reboot.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Server ==&lt;br /&gt;
Note: In case a tracker gets disconnected, re-connection suffices (e.g., re-plugging the cable if the cable was unplugged). There is no need to re-start the tracker.&lt;br /&gt;
&lt;br /&gt;
# When trackers are up and connected to each other:&lt;br /&gt;
#*peers download files from each other as expected.&lt;br /&gt;
# Have a peer download podcasts, and then bring the current primary tracker down (or unplug its network cable):&lt;br /&gt;
#*No problem. The peer communicates with the next tracker after a few seconds.&lt;br /&gt;
#(i) Have a peer join the network when some trackers are down, or (ii) After the peer joined the network (primary tracker is T1 for example), bring T1 down and have the peer request some files:&lt;br /&gt;
#*No problem. Peer connects to the next tracker to join/request.&lt;br /&gt;
#Re-plug the network cable of a tracker that was the primary and has been disconnected from the network since we unplugged its cable:&lt;br /&gt;
#*The tracker resumes working properly, becomes the primary, and receives the most recent database file from the tracker that was the primary during the absence (cable-unplugged) period.&lt;br /&gt;
#A tracker T1, which precedes T2 in the priority order for being the primary, has been down/disconnected and has just been started/re-connected. When a peer (currently connected to T2) requests to download a file:&lt;br /&gt;
#*T2 leads the peer to T1 and the peer immediately connects to T1 to ask where to download the file from.&lt;br /&gt;
#Have a peer P1 download a file F1, wait until the backup moment (when the primary tracker T1 pushes its recent database to the others), and bring the primary tracker down after that:&lt;br /&gt;
#*Tracker T2 becomes the primary and P2 downloads file F1 from peer P1 (not from the web server).&lt;br /&gt;
#Tracker T1 was the primary, but is down/disconnected now and tracker T2 is the current primary. Peer P1 downloads file F1. T1 comes back and becomes the primary, so T2 immediately pushes the most recent database to T1. Then, peer P2 requests to download F1:&lt;br /&gt;
#*If T1 was brought down by Ctrl+C and is started again now: P2 downloads F1 from P1.&lt;br /&gt;
#*If T1 was disconnected by unplugging its network cable: when P2 now requests the file, the balloon “Sorry, you are not permitted to download the file” is shown (XXX).&lt;br /&gt;
#When a peer is connected to the network and suddenly all trackers go down:&lt;br /&gt;
#*The peer keeps re-trying to connect. Thus, when some tracker comes back up, the peer connects to it.&lt;br /&gt;
#Run trackers T1 and T2, where T1 is the primary, and a peer is connected to the network. Stop T1, so T2 becomes the primary, and the peer connects to T2 after a short time. Start T1 again. Now, for a few seconds, T2 still thinks it is the primary. Immediately unplug T2’s cable.&lt;br /&gt;
#*T1 knows it is the primary, and the peer downloads files through T1.&lt;br /&gt;
#*After some seconds, T2 knows that it is the primary. The peer is able to download files through T2.&lt;br /&gt;
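The failover behaviour exercised by the tests above boils down to: trackers are tried in priority order, and a peer that finds no reachable tracker keeps retrying until one comes back up. A minimal sketch of that loop, assuming an ordered tracker list and a reachability probe (the names and timing are illustrative, not the actual pCDN code):&lt;br /&gt;

```python
import time

def pick_tracker(trackers, is_reachable, retry_delay=1.0, max_attempts=None):
    """Return the highest-priority reachable tracker.

    `trackers` is ordered by priority (T1 before T2, ...). If every
    tracker is down, keep retrying until one comes back up, as the
    tests above expect. `is_reachable` stands in for a real
    connection attempt.
    """
    attempts = 0
    while max_attempts is None or attempts < max_attempts:
        for t in trackers:
            if is_reachable(t):
                return t  # this tracker becomes the peer's primary
        attempts += 1
        time.sleep(retry_delay)
    return None

# Example: T1 is down, so the peer falls back to T2.
up = {"T1": False, "T2": True}
print(pick_tracker(["T1", "T2"], lambda t: up[t]))  # -> T2
```

When a restarted tracker re-appears at the head of the list (the re-plug test above), the next attempt finds it first, matching the expectation that it becomes the primary again.&lt;br /&gt;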
&lt;br /&gt;
&lt;br /&gt;
== Admin Tools ==&lt;br /&gt;
=== Server Statistics === &lt;br /&gt;
# When a peer joins the network, there are (max) 3 updates that happen in the UI:&lt;br /&gt;
#* Bottom of the screen: '#Peers' increases by 1 after the MainData Update interval, which is set at the time of login.&lt;br /&gt;
#* Connected Peers tab: The peer's country is inserted into the left table if it is not already there. When that country is clicked, all the peers belonging to that country, along with city and region details, are shown in the right table. Also, note that the peer count for that country is incremented.&lt;br /&gt;
#* Content tab: If that peer has files, these files are inserted into the left table (if these are not already there) and the peer info is added to the corresponding file. When one clicks on a particular file, all those peers containing that file are shown in the right table.&lt;br /&gt;
# When a peer leaves the network, there are (max) 3 updates that happen in the UI:&lt;br /&gt;
#* Bottom of the screen: '#Peers' decreases by 1 after the MainData Update interval, which is set at the time of login.&lt;br /&gt;
#* Connected Peers tab: The peer is deleted from its country's right table, and if that country has no peers left, the country is deleted from the left table. Also, note that the peer count for that country is decremented.&lt;br /&gt;
#* Content tab: This particular peer's record is coloured grey in the right table for every file (present in the left table) it contains.&lt;br /&gt;
# When a peer downloads a file, the 'Content' tab is updated: the file info is inserted into the left table if it is not already there, and the peer info is added to the corresponding file's right table.&lt;br /&gt;
# When a backup file is transferred to the tracker (that we are currently connected to) and restored, the entire 'Content' tab and 'Connected Peers' tab are cleared and repopulated with the latest info. &lt;br /&gt;
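A minimal model of the bookkeeping these checks imply for the 'Connected Peers' tab (country inserted on its first peer, per-country count incremented and decremented, country removed when empty); the class and method names are assumptions, not the admin tool's actual code:&lt;br /&gt;

```python
from collections import defaultdict

class ConnectedPeers:
    """Left table = countries; right table = peers per country."""

    def __init__(self):
        self.by_country = defaultdict(list)

    def peer_joined(self, peer, country):
        # Inserts the country on its first peer; increments its count.
        self.by_country[country].append(peer)

    def peer_left(self, peer, country):
        # Decrements the count; drops the country when it has no peers.
        self.by_country[country].remove(peer)
        if not self.by_country[country]:
            del self.by_country[country]

    def countries(self):
        return sorted(self.by_country)

    def count(self, country):
        return len(self.by_country.get(country, []))
```

The 'Content' tab checks follow the same shape, with files in place of countries and peers listed per file.&lt;br /&gt;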
&lt;br /&gt;
&lt;br /&gt;
=== Geo-Fencing ===&lt;br /&gt;
# Groups Tab:&lt;br /&gt;
#* Select a row and press the DEL key. Verify in the database that the selected group is deleted, and that all files corresponding to the group are deleted from both the 'media' table and the 'groups' table (which contains fileid-groupid pairs). Also, note that the selected row is removed from the table in the UI, and that all files corresponding to the group are removed from the 'Media Files' tab's table. In addition, the 'groupname' combo box in the 'Content Importer' tab is updated accordingly, i.e., the deleted group no longer appears in its list.&lt;br /&gt;
#* Select a row and double-click it. Check that a Policy Window opens in which the group's policy is displayed correctly in the respective fields. After editing, when the 'Save' button is pressed, the corresponding group info is updated in the database and reflected in the 'Groups' tab. We must also be able to see the change reflected in the combo box of the 'Content Importer' tab and in the 'Media Files' tab, especially when the group name is altered.&lt;br /&gt;
# Media Files Tab:&lt;br /&gt;
#* Select a row and press the DEL key. Verify in the database that the selected file is deleted from the 'media' table and that its groupid-fileid pair is deleted from the 'groups' table. Also, the file is removed from the table in the UI.&lt;br /&gt;
#* Select a row and change the file's group by selecting another group from the combo box in the third column of the table. The change should not only be reflected in the UI; the database must also be updated with the new group. Check the 'groups' database table to see that the groupid corresponding to the fileid has changed to the new one. &lt;br /&gt;
# New Policy:&lt;br /&gt;
#* Fill in all the respective fields in the Policy Window and press the &amp;quot;Save&amp;quot; button. Check that a new entry has been inserted into the database's Policy table. Also, note that the group is inserted into the &amp;quot;Groups&amp;quot; tab. In addition, the combo box in the 'Content Importer' tab is updated with the new entry, as is the combo box in the third column of the 'Media Files' tab's table.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Content Importer ===&lt;br /&gt;
# Select a group from the combo box and enter a feed URL and output feed filename and press &amp;quot;Import and Convert&amp;quot; button. &lt;br /&gt;
#* Success: A dialog pops up saying that the files have been imported into the database. Check in the database's 'Media' table that these files are imported, and that the 'Groups' table is populated with the corresponding fileid and groupid information. Also, these files are inserted into the 'Media Files' tab's table, pointing to the respective group. Check that the output file is created with http://localhost inserted for every URL within that feed file.&lt;br /&gt;
#* Failure: A dialog pops up saying that the files could not be imported into the database. Check that the database is unchanged.&lt;br /&gt;
# Select a group from the combo box and enter a non-feed URL and press &amp;quot;Import&amp;quot;.&lt;br /&gt;
#* Success: A dialog pops up saying that the file has been imported into the database, along with the output URL. Check in the database's 'Media' table that the file is imported, and that the 'Groups' table is populated with the corresponding fileid and groupid information.&lt;br /&gt;
#* Failure: A dialog pops up saying that the file could not be imported into the database. Check that the database is unchanged.&lt;br /&gt;
# Try pressing the &amp;quot;Import and Convert&amp;quot; button with at least one of the Feed-URL fields empty; a dialog box is shown stating that both fields must be filled.&lt;br /&gt;
# Try pressing the &amp;quot;Import&amp;quot; button with the non-feed URL field empty; a dialog box is shown stating that the field must be filled.&lt;br /&gt;
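The conversion step above rewrites every media URL in the feed to point at http://localhost. A rough sketch of that substitution, assuming the importer already has the list of media URLs (the function name and plain string replacement are illustrative, not the importer's actual implementation):&lt;br /&gt;

```python
from urllib.parse import urlsplit, urlunsplit

def rewrite_to_localhost(feed_text, urls):
    """Replace the host of each media URL in `feed_text` with localhost,
    keeping the path so the pCDN client can still identify the file."""
    for url in urls:
        parts = urlsplit(url)
        local = urlunsplit(("http", "localhost", parts.path, parts.query, ""))
        feed_text = feed_text.replace(url, local)
    return feed_text

feed = '<enclosure url="http://example.com/ep1.mp3"/>'
print(rewrite_to_localhost(feed, ["http://example.com/ep1.mp3"]))
# -> <enclosure url="http://localhost/ep1.mp3"/>
```

The success check above then amounts to verifying that every URL in the output file carries the localhost host while its path is preserved.&lt;br /&gt;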
&lt;br /&gt;
&lt;br /&gt;
=== User Administration ===&lt;br /&gt;
# Add a new user and check if it is inserted into the database and also in the table shown below.&lt;br /&gt;
# Change the privilege level of a user using the combo box in the table and check if it is reflected in the database.&lt;br /&gt;
# Select a row in the table and press DEL key. The row must get deleted from the database and also the table.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Known Issues ==&lt;br /&gt;
# Server: the keep-alive interval must not be smaller than the time it takes to transmit the database file from one tracker to another. For example, with a value of &amp;quot;1 minute&amp;quot; for that interval, if the database file grows to hundreds of megabytes or a few gigabytes (i.e., a few million users), the interval must be increased to &amp;quot;10 minutes&amp;quot; or so.&lt;br /&gt;
# Client: The ''preferences'' frame silently rejects settings that are out of the supported ranges. For example, a max memory usage below 4 MB will be rejected without any warning. Due to limitations of the Java classes and display space, we plan to address this in the new GUI design.&lt;br /&gt;
# Client: When a receiver loses its Internet connection, the pCDN client does not return a proper error code to podcast clients, such as iTunes. Instead, we wait until iTunes times out.&lt;br /&gt;
# Server: The server may crash (XXX need to validate this) if the RDBMS goes off-line.&lt;br /&gt;
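The keep-alive constraint in the first issue can be checked with simple arithmetic: the interval must exceed the database size divided by the inter-tracker bandwidth. A back-of-the-envelope helper (the 100 Mbit/s link speed is an assumption, not a measured value):&lt;br /&gt;

```python
def min_keepalive_seconds(db_bytes, link_bits_per_sec):
    """Lower bound on the keep-alive interval: the time to push the
    database file from one tracker to another."""
    return db_bytes * 8 / link_bits_per_sec

# A 1 GB database over an assumed 100 Mbit/s link needs ~80 s,
# so a 1-minute keep-alive is too short.
print(min_keepalive_seconds(1_000_000_000, 100_000_000))  # -> 80.0
```

At that size the transfer already exceeds a 1-minute interval, which is why the issue suggests raising it to 10 minutes or so.&lt;br /&gt;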
&lt;br /&gt;
&lt;br /&gt;
*The following are also posted to Bugzilla (they can be removed from here, but this is not suggested):&lt;br /&gt;
&lt;br /&gt;
#Client: When we stop downloading a file in iTunes, the pCDN client still continues downloading it.&lt;br /&gt;
#*A user might have stopped the download to save bandwidth for other applications, but the pCDN client defeats that saving.&lt;br /&gt;
#Server: Each tracker has a settings.ini file, which must be the same for all trackers. Keeping this file in a central place, such as on a file server, may be helpful, because modifying the file on only some trackers may cause the entire system to malfunction. &lt;br /&gt;
#The directory of media files a peer has stored grows over time. There should be a mechanism to delete unnecessary ones. [Cheng 08/03/01: There is a max disk usage setting in the client GUI, which removes the least recently received files. Note, however, that the recency is not persistent, so the received time of each file is lost after the pCDN client restarts.]&lt;/div&gt;</summary>
		<author><name>Nitin.chiluka</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=pCDN:Testplan&amp;diff=1571</id>
		<title>pCDN:Testplan</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=pCDN:Testplan&amp;diff=1571"/>
		<updated>2008-03-04T20:17:46Z</updated>

		<summary type="html">&lt;p&gt;Nitin.chiluka: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Client ==&lt;br /&gt;
# Download several new podcasts from the web servers (make sure the media/ directory contains no media files before this test). &lt;br /&gt;
# Download new podcasts from several pCDN senders.&lt;br /&gt;
# Download podcasts that were downloaded before (and still in media/ directory).&lt;br /&gt;
# Reduce the hard disk quota to 50 MB using the user interface, and download several podcasts.&lt;br /&gt;
# Reduce the memory quota to 4 MB, and download several podcasts.&lt;br /&gt;
# Download new podcasts from several senders and unplug the Ethernet cable from one of the senders.&lt;br /&gt;
# Download new podcasts from several senders and unplug the Ethernet cables from all of the senders.&lt;br /&gt;
# Download new podcasts from several senders and unplug the Ethernet cable from the receiver. &lt;br /&gt;
# Manually truncate a media file in the media/ directory, and download that file from the podcast client.&lt;br /&gt;
# Manually truncate a media file in the media/ directory, and request that file from another pCDN client.&lt;br /&gt;
# Download several new podcasts using a pCDN client behind a NAT box. Check whether the IP/port are correctly reported at the pCDN server. Also try to use this pCDN client as the sender.&lt;br /&gt;
# Download several new podcasts using a pCDN client on a machine with multiple IPs. Use the user interface to select an outgoing IP. Then, change the NICs' IP addresses, and see whether the pCDN client continues to work after reboots.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Server ==&lt;br /&gt;
Note: In case a tracker gets disconnected, re-connection suffices (e.g., re-plugging the cable if the cable was unplugged). There is no need to re-start the tracker.&lt;br /&gt;
&lt;br /&gt;
# When trackers are up and connected to each other:&lt;br /&gt;
#*peers download files from each other as expected.&lt;br /&gt;
# Have a peer download podcasts, and then bring the current primary tracker down (or unplug its network cable):&lt;br /&gt;
#*No problem. The peer communicates with the next tracker after a few seconds.&lt;br /&gt;
#(i) Have a peer join the network when some trackers are down, or (ii) after the peer has joined the network (with T1 as the primary tracker, for example), bring T1 down and have the peer request some files:&lt;br /&gt;
#*No problem. Peer connects to the next tracker to join/request.&lt;br /&gt;
#Re-plug the network cable of a tracker that was the primary and has been disconnected from the network since we unplugged its cable:&lt;br /&gt;
#*The tracker starts working properly, becomes the primary, and receives the most recent database file from the tracker that was the primary during the disconnected (cable-unplugged) period.&lt;br /&gt;
#A tracker T1, which has priority over T2 for being the primary, has been down/disconnected and has just been restarted/re-connected. When a peer (currently connected to T2) requests to download a file:&lt;br /&gt;
#*T2 leads the peer to T1 and the peer immediately connects to T1 to ask where to download the file from.&lt;br /&gt;
#Have a peer P1 download a file F1, wait until the backup moment (when the primary tracker T1 pushes its recent database to the others), and bring the primary tracker down after that:&lt;br /&gt;
#*Tracker T2 becomes the primary and P2 downloads file F1 from peer P1 (not from the web server).&lt;br /&gt;
#Tracker T1 was the primary, but is down/disconnected now and tracker T2 is the current primary. Peer P1 downloads file F1. T1 comes back and becomes the primary, so T2 immediately pushes the most recent database to T1. Then, peer P2 requests to download F1:&lt;br /&gt;
#*If T1 was brought down by Ctrl+C and is started again now: P2 downloads F1 from P1.&lt;br /&gt;
#*If T1 was disconnected by unplugging its network cable: when P2 now requests the file, the balloon “Sorry, you are not permitted to download the file” is shown (XXX).&lt;br /&gt;
#When a peer is connected to the network and suddenly all trackers go down:&lt;br /&gt;
#*The peer keeps re-trying to connect. Thus, when some tracker comes back up, the peer connects to it.&lt;br /&gt;
#Run trackers T1 and T2, where T1 is the primary, and a peer is connected to the network. Stop T1, so T2 becomes the primary, and the peer connects to T2 after a short time. Start T1 again. Now, for a few seconds, T2 still thinks it is the primary. Immediately unplug T2’s cable.&lt;br /&gt;
#*T1 knows it is the primary, and the peer downloads files through T1.&lt;br /&gt;
#*After some seconds, T2 knows that it is the primary. The peer is able to download files through T2.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Admin Tools ==&lt;br /&gt;
=== Server Statistics === &lt;br /&gt;
# When a peer joins the network, there are (max) 3 updates that happen in the UI:&lt;br /&gt;
#* Bottom of the screen: '#Peers' increases by 1 after the MainData Update interval, which is set at the time of login.&lt;br /&gt;
#* Connected Peers tab: The peer's country is inserted into the left table if it is not already there. When that country is clicked, all the peers belonging to that country, along with city and region details, are shown in the right table. Also, note that the peer count for that country is incremented.&lt;br /&gt;
#* Content tab: If that peer has files, these files are inserted into the left table (if these are not already there) and the peer info is added to the corresponding file. When one clicks on a particular file, all those peers containing that file are shown in the right table.&lt;br /&gt;
# When a peer leaves the network, there are (max) 3 updates that happen in the UI:&lt;br /&gt;
#* Bottom of the screen: '#Peers' decreases by 1 after the MainData Update interval, which is set at the time of login.&lt;br /&gt;
#* Connected Peers tab: The peer is deleted from its country's right table, and if that country has no peers left, the country is deleted from the left table. Also, note that the peer count for that country is decremented.&lt;br /&gt;
#* Content tab: This particular peer's record is coloured grey in the right table for every file (present in the left table) it contains.&lt;br /&gt;
# When a peer downloads a file, the 'Content' tab is updated: the file info is inserted into the left table if it is not already there, and the peer info is added to the corresponding file's right table.&lt;br /&gt;
# When a backup file is transferred to the tracker (that we are currently connected to) and restored, the entire 'Content' tab and 'Connected Peers' tab are cleared and repopulated with the latest info. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Geo-Fencing ===&lt;br /&gt;
# Groups Tab:&lt;br /&gt;
#* Select a row and press the DEL key. Verify in the database that the selected group is deleted, and that all files corresponding to the group are deleted from both the 'media' table and the 'groups' table (which contains fileid-groupid pairs). Also, note that the selected row is removed from the table in the UI, and that all files corresponding to the group are removed from the 'Media Files' tab's table. In addition, the 'groupname' combo box in the 'Content Importer' tab is updated accordingly, i.e., the deleted group no longer appears in its list.&lt;br /&gt;
#* Select a row and double-click it. Check that a Policy Window opens in which the group's policy is displayed correctly in the respective fields. After editing, when the 'Save' button is pressed, the corresponding group info is updated in the database and reflected in the 'Groups' tab. We must also be able to see the change reflected in the combo box of the 'Content Importer' tab and in the 'Media Files' tab, especially when the group name is altered.&lt;br /&gt;
# Media Files Tab:&lt;br /&gt;
#* Select a row and press the DEL key. Verify in the database that the selected file is deleted from the 'media' table and that its groupid-fileid pair is deleted from the 'groups' table. Also, the file is removed from the table in the UI.&lt;br /&gt;
#* Select a row and change the file's group by selecting another group from the combo box in the third column of the table. The change should not only be reflected in the UI; the database must also be updated with the new group. Check the 'groups' database table to see that the groupid corresponding to the fileid has changed to the new one. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Content Importer ===&lt;br /&gt;
# Select a group from the combo box and enter a feed URL and output feed filename and press &amp;quot;Import and Convert&amp;quot; button. &lt;br /&gt;
#* Success: A dialog pops up saying that the files have been imported into the database. Check in the database's 'Media' table that these files are imported, and that the 'Groups' table is populated with the corresponding fileid and groupid information. Also, these files are inserted into the 'Media Files' tab's table, pointing to the respective group. Check that the output file is created with http://localhost inserted for every URL within that feed file.&lt;br /&gt;
#* Failure: A dialog pops up saying that the files could not be imported into the database. Check that the database is unchanged.&lt;br /&gt;
# Select a group from the combo box and enter a non-feed URL and press &amp;quot;Import&amp;quot;.&lt;br /&gt;
#* Success: A dialog pops up saying that the file has been imported into the database, along with the output URL. Check in the database's 'Media' table that the file is imported, and that the 'Groups' table is populated with the corresponding fileid and groupid information.&lt;br /&gt;
#* Failure: A dialog pops up saying that the file could not be imported into the database. Check that the database is unchanged.&lt;br /&gt;
# Try pressing the &amp;quot;Import and Convert&amp;quot; button with at least one of the Feed-URL fields empty; a dialog box is shown stating that both fields must be filled.&lt;br /&gt;
# Try pressing the &amp;quot;Import&amp;quot; button with the non-feed URL field empty; a dialog box is shown stating that the field must be filled.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== User Administration ===&lt;br /&gt;
# Add a new user and check if it is inserted into the database and also in the table shown below.&lt;br /&gt;
# Change the privilege level of a user using the combo box in the table and check if it is reflected in the database.&lt;br /&gt;
# Select a row in the table and press DEL key. The row must get deleted from the database and also the table.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Known Issues ==&lt;br /&gt;
# Server: the keep-alive interval must not be smaller than the time it takes to transmit the database file from one tracker to another. For example, with a value of &amp;quot;1 minute&amp;quot; for that interval, if the database file grows to hundreds of megabytes or a few gigabytes (i.e., a few million users), the interval must be increased to &amp;quot;10 minutes&amp;quot; or so.&lt;br /&gt;
# Client: The ''preferences'' frame silently rejects settings that are out of the supported ranges. For example, a max memory usage below 4 MB will be rejected without any warning. Due to limitations of the Java classes and display space, we plan to address this in the new GUI design.&lt;br /&gt;
# Client: When a receiver loses its Internet connection, the pCDN client does not return a proper error code to podcast clients, such as iTunes. Instead, we wait until iTunes times out.&lt;br /&gt;
# Server: The server may crash (XXX need to validate this) if the RDBMS goes off-line.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*The following are also posted to Bugzilla (they can be removed from here, but this is not suggested):&lt;br /&gt;
&lt;br /&gt;
#Client: When we stop downloading a file in iTunes, the pCDN client still continues downloading it.&lt;br /&gt;
#*A user might have stopped the download to save bandwidth for other applications, but the pCDN client defeats that saving.&lt;br /&gt;
#Server: Each tracker has a settings.ini file, which must be the same for all trackers. Keeping this file in a central place, such as on a file server, may be helpful, because modifying the file on only some trackers may cause the entire system to malfunction. &lt;br /&gt;
#The directory of media files a peer has stored grows over time. There should be a mechanism to delete unnecessary ones. [Cheng 08/03/01: There is a max disk usage setting in the client GUI, which removes the least recently received files. Note, however, that the recency is not persistent, so the received time of each file is lost after the pCDN client restarts.]&lt;/div&gt;</summary>
		<author><name>Nitin.chiluka</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=pCDN:Testplan&amp;diff=1570</id>
		<title>pCDN:Testplan</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=pCDN:Testplan&amp;diff=1570"/>
		<updated>2008-03-04T20:16:12Z</updated>

		<summary type="html">&lt;p&gt;Nitin.chiluka: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Client ==&lt;br /&gt;
# Download several new podcasts from the web servers (make sure the media/ directory contains no media files before this test). &lt;br /&gt;
# Download new podcasts from several pCDN senders.&lt;br /&gt;
# Download podcasts that were downloaded before (and still in media/ directory).&lt;br /&gt;
# Reduce the hard disk quota to 50 MB using the user interface, and download several podcasts.&lt;br /&gt;
# Reduce the memory quota to 4 MB, and download several podcasts.&lt;br /&gt;
# Download new podcasts from several senders and unplug the Ethernet cable from one of the senders.&lt;br /&gt;
# Download new podcasts from several senders and unplug the Ethernet cables from all of the senders.&lt;br /&gt;
# Download new podcasts from several senders and unplug the Ethernet cable from the receiver. &lt;br /&gt;
# Manually truncate a media file in the media/ directory, and download that file from the podcast client.&lt;br /&gt;
# Manually truncate a media file in the media/ directory, and request that file from another pCDN client.&lt;br /&gt;
# Download several new podcasts using a pCDN client behind a NAT box. Check whether the IP/port are correctly reported at the pCDN server. Also try to use this pCDN client as the sender.&lt;br /&gt;
# Download several new podcasts using a pCDN client on a machine with multiple IPs. Use the user interface to select an outgoing IP. Then, change the NICs' IP addresses, and see whether the pCDN client continues to work after reboots.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Server ==&lt;br /&gt;
Note: In case a tracker gets disconnected, re-connection suffices (e.g., re-plugging the cable if the cable was unplugged). There is no need to re-start the tracker.&lt;br /&gt;
&lt;br /&gt;
# When trackers are up and connected to each other:&lt;br /&gt;
#*peers download files from each other as expected.&lt;br /&gt;
# Have a peer download podcasts, and then bring the current primary tracker down (or unplug its network cable):&lt;br /&gt;
#*No problem. The peer communicates with the next tracker after a few seconds.&lt;br /&gt;
#(i) Have a peer join the network when some trackers are down, or (ii) after the peer has joined the network (with T1 as the primary tracker, for example), bring T1 down and have the peer request some files:&lt;br /&gt;
#*No problem. Peer connects to the next tracker to join/request.&lt;br /&gt;
#Re-plug the network cable of a tracker that was the primary and has been disconnected from the network since we unplugged its cable:&lt;br /&gt;
#*The tracker starts working properly, becomes the primary, and receives the most recent database file from the tracker that was the primary during the disconnected (cable-unplugged) period.&lt;br /&gt;
#A tracker T1, which has priority over T2 for being the primary, has been down/disconnected and has just been restarted/re-connected. When a peer (currently connected to T2) requests to download a file:&lt;br /&gt;
#*T2 leads the peer to T1 and the peer immediately connects to T1 to ask where to download the file from.&lt;br /&gt;
#Have a peer P1 download a file F1, wait until the backup moment (when the primary tracker T1 pushes its recent database to the others), and bring the primary tracker down after that:&lt;br /&gt;
#*Tracker T2 becomes the primary and P2 downloads file F1 from peer P1 (not from the web server).&lt;br /&gt;
#Tracker T1 was the primary, but is down/disconnected now and tracker T2 is the current primary. Peer P1 downloads file F1. T1 comes back and becomes the primary, so T2 immediately pushes the most recent database to T1. Then, peer P2 requests to download F1:&lt;br /&gt;
#*If T1 was brought down by Ctrl+C and is started again now: P2 downloads F1 from P1.&lt;br /&gt;
#*If T1 was disconnected by unplugging its network cable: when P2 now requests the file, the balloon “Sorry, you are not permitted to download the file” is shown (XXX).&lt;br /&gt;
#When a peer is connected to the network and suddenly all trackers go down:&lt;br /&gt;
#*The peer keeps re-trying to connect. Thus, when some tracker comes back up, the peer connects to it.&lt;br /&gt;
#Run trackers T1 and T2, where T1 is the primary, and a peer is connected to the network. Stop T1, so T2 becomes the primary, and the peer connects to T2 after a short time. Start T1 again. Now, for a few seconds, T2 still thinks it is the primary. Immediately unplug T2’s cable.&lt;br /&gt;
#*T1 knows it is the primary, and the peer downloads files through T1.&lt;br /&gt;
#*After some seconds, T2 knows that it is the primary. The peer is able to download files through T2.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Admin Tools ==&lt;br /&gt;
=== Server Statistics === &lt;br /&gt;
# When a peer joins the network, there are (max) 3 updates that happen in the UI:&lt;br /&gt;
#* Bottom of the screen: '#Peers' increases by 1 after the MainData Update interval, which is set at the time of login.&lt;br /&gt;
#* Connected Peers tab: The peer's country is inserted into the left table if it is not already there. When that country is clicked, all the peers belonging to that country, along with city and region details, are shown in the right table. Also, note that the peer count for that country is incremented.&lt;br /&gt;
#* Content tab: If that peer has files, these files are inserted into the left table (if these are not already there) and the peer info is added to the corresponding file. When one clicks on a particular file, all those peers containing that file are shown in the right table.&lt;br /&gt;
# When a peer leaves the network, there are (max) 3 updates that happen in the UI:&lt;br /&gt;
#* Bottom of the screen: '#Peers' decreases by 1 after the MainData Update interval, which is set at the time of login.&lt;br /&gt;
#* Connected Peers tab: The peer is deleted from its country's right table, and if that country has no peers left, the country is deleted from the left table. Also, note that the peer count for that country is decremented.&lt;br /&gt;
#* Content tab: This particular peer's record is coloured grey in the right table for every file (present in the left table) it contains.&lt;br /&gt;
# When a peer downloads a file, the 'Content' tab is updated: the file info is inserted into the left table if it is not already there, and the peer info is added to the corresponding file's right table.&lt;br /&gt;
# When a backup file is transferred to the tracker (that we are currently connected to) and restored, the entire 'Content' tab and 'Connected Peers' tab are cleared and repopulated with the latest info. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Geo-Fencing ===&lt;br /&gt;
# Groups Tab:&lt;br /&gt;
#* Select a row and press the DEL key. Verify in the database that the selected group is deleted, and that all files corresponding to the group are deleted from both the 'media' table and the 'groups' table (which contains fileid-groupid pairs). Also, note that the selected row is removed from the table in the UI, and that all files corresponding to the group are removed from the 'Media Files' tab's table. In addition, the 'groupname' combo box in the 'Content Importer' tab is updated accordingly, i.e., the deleted group no longer appears in its list.&lt;br /&gt;
#* Select a row and double-click it. Check that a Policy Window opens in which the group's policy is displayed correctly in the respective fields. After editing, when the 'Save' button is pressed, the corresponding group info is updated in the database and reflected in the 'Groups' tab. We must also be able to see the change reflected in the combo box of the 'Content Importer' tab and in the 'Media Files' tab, especially when the group name is altered.&lt;br /&gt;
# Media Files Tab:&lt;br /&gt;
#* Select a row and press the DEL key. Verify in the database that the selected file is deleted from the 'media' table and that its groupid-fileid pair is deleted from the 'groups' table. Also, the file is removed from the table in the UI.&lt;br /&gt;
#* Select a row and change the file's group by selecting another group from the combo box in the third column of the table. The change should not only be reflected in the UI; the database must also be updated with the new group. Check the 'groups' database table to see that the groupid corresponding to the fileid has changed to the new one. &lt;br /&gt;
&lt;br /&gt;
=== Content Importer ===&lt;br /&gt;
# Select a group from the combo box and enter a feed URL and output feed filename and press &amp;quot;Import and Convert&amp;quot; button. &lt;br /&gt;
#* Success: A dialog pops up saying that the files have been imported into the database. Check in the database's 'Media' table that these files are imported, and that the 'Groups' table is populated with the corresponding fileid and groupid information. Also, these files are inserted into the 'Media Files' tab's table, pointing to the respective group. Check that the output file is created with http://localhost inserted for every URL within that feed file.&lt;br /&gt;
#* Failure: A dialog pops up saying that the files could not be imported into the database. Check that the database is unchanged.&lt;br /&gt;
# Select a group from the combo box and enter a non-feed URL and press &amp;quot;Import&amp;quot;.&lt;br /&gt;
#* Success: A dialog pops up saying that the file was imported into the database, along with the output URL. Check in the database's 'Media' table that the file is present, and that the 'Groups' table is populated with the corresponding fileid and groupid information.&lt;br /&gt;
#* Failure: A dialog pops up saying that the file could not be imported into the database. Check that the database is unchanged.&lt;br /&gt;
# Press the &amp;quot;Import and Convert&amp;quot; button with at least one of the feed-URL fields left empty; a dialog box should appear stating that both fields must be filled.&lt;br /&gt;
# Press the &amp;quot;Import&amp;quot; button with the non-feed URL field left empty; a dialog box should appear stating that the field must be filled.&lt;br /&gt;
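The URL-rewriting step that the &amp;quot;Import and Convert&amp;quot; test checks can be sketched as follows. This is an illustration of the expected behaviour, not pCDN's actual importer code; the example URLs are made up.

```python
# Illustrative sketch (not pCDN's actual importer) of the check in the
# "Import and Convert" test: every media URL in the output feed should
# point at http://localhost instead of the origin server.
import re

input_feed_urls = [
    "http://example.com/podcasts/ep1.mp3",
    "http://example.com/podcasts/ep2.mp3",
]

def rewrite_to_localhost(url):
    # Replace the scheme-and-host prefix with http://localhost,
    # keeping the path so the file can still be located.
    return re.sub(r"^https?://[^/]+", "http://localhost", url)

rewritten = [rewrite_to_localhost(u) for u in input_feed_urls]
for url in rewritten:
    assert url.startswith("http://localhost/"), url
print(rewritten)
```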
&lt;br /&gt;
=== User Administration ===&lt;br /&gt;
# Add a new user and check that it is inserted into the database and into the table shown below.&lt;br /&gt;
# Change the privilege level of a user using the combo box in the table and check that the change is reflected in the database.&lt;br /&gt;
# Select a row in the table and press the DEL key. The row must be deleted from both the database and the table.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Known Issues ==&lt;br /&gt;
# Server: the keep-alive interval must not be smaller than the time it takes to transmit the database file from one tracker to another. For example, with a value of &amp;quot;1 minute&amp;quot; for that interval, if the database file grows to hundreds of megabytes or several gigabytes (i.e., millions of users), the interval must be increased to &amp;quot;10 minutes&amp;quot; or so.&lt;br /&gt;
# Client: The ''preferences'' frame silently rejects settings that are outside the supported ranges. For example, a max memory usage below 4 MB is rejected without any warning. Due to limitations of the Java class and display space, we plan to address this in the new GUI design.&lt;br /&gt;
# Client: When a receiver loses its Internet connection, the pCDN client does not return a proper error code to podcast clients, such as iTunes. Instead, we wait until iTunes times out.&lt;br /&gt;
# Server: The server may crash (XXX need to validate this) if the RDBMS goes off-line.&lt;br /&gt;
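The keep-alive constraint in issue 1 is simple arithmetic: transfer time is roughly size divided by link bandwidth. The numbers below are illustrative, not measured pCDN values.

```python
# Back-of-the-envelope check for known issue 1: the keep-alive interval
# must exceed the tracker-to-tracker database transfer time. The sizes
# and bandwidth below are illustrative, not measured pCDN values.
db_size_mb = 500.0          # e.g., a few million users
link_mbps = 10.0            # tracker-to-tracker link, in megabits/s
transfer_seconds = (db_size_mb * 8.0) / link_mbps
keepalive_seconds = 60.0    # the "1 minute" value from the text

print("transfer takes about %.0f s" % transfer_seconds)
if keepalive_seconds > transfer_seconds:
    print("keep-alive interval is safe")
else:
    print("increase the keep-alive interval (e.g., to 10 minutes)")
```

With these assumed numbers, a 500 MB database takes about 400 s to push, so a 1-minute keep-alive would be too short, matching the issue's recommendation.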
&lt;br /&gt;
&lt;br /&gt;
*The following are also posted to Bugzilla (they can be removed from here, but we suggest keeping them):&lt;br /&gt;
&lt;br /&gt;
#Client: When we stop downloading a file in iTunes, the pCDN client still continues downloading.&lt;br /&gt;
#*A user might have stopped the download to save bandwidth for other applications, but the pCDN client defeats that saving.&lt;br /&gt;
#Server: Each tracker has a settings.ini file, which must be identical across all trackers. Storing this file centrally, e.g., on a file server, may be helpful, because modifying it on only some trackers can cause the entire system to malfunction.&lt;br /&gt;
#The directory of media files a peer has stored grows over time. There should be a mechanism to delete unneeded files. [Cheng 08/03/01, There is a max disk usage setting in the client GUI, which removes the least recently received files. Note, however, that the recency is not persistent, so the received time of each file is lost after rebooting the pCDN client.]&lt;/div&gt;</summary>
		<author><name>Nitin.chiluka</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=pCDN:Testplan&amp;diff=1568</id>
		<title>pCDN:Testplan</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=pCDN:Testplan&amp;diff=1568"/>
		<updated>2008-03-04T19:43:19Z</updated>

		<summary type="html">&lt;p&gt;Nitin.chiluka: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Client ==&lt;br /&gt;
# Download several new podcasts from the web servers (make sure the media/ directory contains no media files before this test). &lt;br /&gt;
# Download new podcasts from several pCDN senders.&lt;br /&gt;
# Download podcasts that were downloaded before (and still in media/ directory).&lt;br /&gt;
# Reduce the hard-disk quota to 50 MB using the user interface, and download several podcasts.&lt;br /&gt;
# Reduce the memory quota to 4 MB, and download several podcasts.&lt;br /&gt;
# Download new podcasts from several senders and unplug the Ethernet cable from one of the senders.&lt;br /&gt;
# Download new podcasts from several senders and unplug the Ethernet cables from all of the senders.&lt;br /&gt;
# Download new podcasts from several senders and unplug the Ethernet cable from the receiver. &lt;br /&gt;
# Manually truncate a media file in the media/ directory, and download that file from the podcast client.&lt;br /&gt;
# Manually truncate a media file in the media/ directory, and request that file from another pCDN client.&lt;br /&gt;
# Download several new podcasts using a pCDN client behind a NAT box. Check whether the IP/port are correctly reported at the pCDN server. Also try to use this pCDN client as the sender.&lt;br /&gt;
# Download several new podcasts using a pCDN client on a machine with multiple IPs. Use the user interface to select an outgoing IP. Then, change the NICs' IP addresses, and see whether the pCDN client continues to work after reboots.&lt;br /&gt;
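Tests 9 and 10 above imply the client can detect a truncated media file. A minimal sketch of such a check, under the assumption that the client knows each file's expected size (the file name and sizes here are made up):

```python
# Sketch of the integrity check implied by tests 9-10: a partially
# stored (truncated) media file is detected by comparing its on-disk
# size with the expected length, and then re-fetched. File names and
# expected sizes are hypothetical.
import os
import tempfile

expected_sizes = {"ep1.mp3": 1024}

def needs_redownload(directory, name):
    path = os.path.join(directory, name)
    if not os.path.exists(path):
        return True
    return os.path.getsize(path) != expected_sizes[name]

with tempfile.TemporaryDirectory() as media_dir:
    path = os.path.join(media_dir, "ep1.mp3")
    with open(path, "wb") as f:
        f.write(b"x" * 512)     # truncated: only half the bytes
    print("redownload needed:", needs_redownload(media_dir, "ep1.mp3"))
```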
&lt;br /&gt;
&lt;br /&gt;
== Server ==&lt;br /&gt;
Note: In case a tracker gets disconnected, re-connection suffices (e.g., re-plugging the cable if the cable was unplugged). There is no need to re-start the tracker.&lt;br /&gt;
&lt;br /&gt;
# When trackers are up and connected to each other:&lt;br /&gt;
#*peers download files from each other as expected.&lt;br /&gt;
# Have a peer download podcasts, and then bring the current primary tracker down (or unplug its network cable):&lt;br /&gt;
#*No problem. The peer communicates with the next tracker after a few seconds.&lt;br /&gt;
#(i) Have a peer join the network when some trackers are down, or (ii) After the peer joined the network (primary tracker is T1 for example), bring T1 down and have the peer request some files:&lt;br /&gt;
#*No problem. Peer connects to the next tracker to join/request.&lt;br /&gt;
#Re-plug the network cable of a tracker that was the primary and has been disconnected from the network since we unplugged its cable:&lt;br /&gt;
#*The tracker resumes working properly, becomes the primary, and receives the most recent database file from the tracker that was the primary during the absence (cable-unplugged) period.&lt;br /&gt;
#A tracker T1, which has priority over T2 for being the primary, has been down/disconnected and has just been started/re-connected. When a peer (currently connected to T2) requests to download a file:&lt;br /&gt;
#*T2 leads the peer to T1 and the peer immediately connects to T1 to ask where to download the file from.&lt;br /&gt;
#Have a peer P1 download a file F1, wait until the backup moment (when the primary tracker T1 pushes its recent database to the others), and bring the primary tracker down after that:&lt;br /&gt;
#*Tracker T2 becomes the primary and another peer, P2, downloads file F1 from peer P1 (not from the web server).&lt;br /&gt;
#Tracker T1 was the primary, but is down/disconnected now and tracker T2 is the current primary. Peer P1 downloads file F1. T1 comes back and becomes the primary, so T2 immediately pushes the most recent database to T1. Then, peer P2 requests to download F1:&lt;br /&gt;
#*If T1 was brought down by Ctrl+C and is started again now: P2 downloads F1 from P1.&lt;br /&gt;
#*If T1 was disconnected by unplugging its network cable: when P2 now requests the file, the balloon “Sorry, you are not permitted to download the file” is shown (XXX).&lt;br /&gt;
#When a peer is connected to the network and suddenly all trackers go down:&lt;br /&gt;
#*The peer keeps re-trying to connect. Thus, when some tracker comes back up, the peer connects to it.&lt;br /&gt;
#Run trackers T1 and T2, where T1 is the primary, and a peer is connected to the network. Stop T1, so T2 becomes the primary, and the peer connects to T2 after a short time. Start T1 again. Now, for a few seconds, T2 still thinks it is the primary. Immediately unplug T2's cable.&lt;br /&gt;
#*T1 knows it is the primary, and the peer downloads files through T1.&lt;br /&gt;
#*After some seconds, T2 realizes it is the primary. The peer is able to download files through T2.&lt;br /&gt;
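The failover behaviour these tests exercise can be summarized as: a peer walks its tracker list in priority order and connects to the first tracker that responds. A minimal sketch (tracker names and the liveness probe are hypothetical, not pCDN's actual protocol):

```python
# Minimal model of the failover behaviour exercised by the tests above:
# the peer tries trackers in priority order and connects to the first
# one that is alive; if all are down, it keeps retrying.
def connect_to_first_alive(trackers, is_alive):
    for tracker in trackers:
        if is_alive(tracker):
            return tracker
    return None  # all trackers down: the caller retries later

alive = {"T1": False, "T2": True}   # T1 is down, so T2 takes over
chosen = connect_to_first_alive(["T1", "T2"], lambda t: alive[t])
assert chosen == "T2"
print("peer connects to", chosen)
```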
&lt;br /&gt;
&lt;br /&gt;
== Admin Tools ==&lt;br /&gt;
=== Server Statistics === &lt;br /&gt;
# When a peer joins the network, there are (max) 3 updates that happen in the UI:&lt;br /&gt;
#* Bottom of the screen: '#Peers' increases by 1 after the MainData update interval, which is set at the time of login.&lt;br /&gt;
#* Connected Peers tab: The peer's country is inserted into the left table if it is not already there. When that country is clicked, all the peers belonging to that country, along with city and region details, are shown in the right table. Also, note that the peer count for that country increments.&lt;br /&gt;
#* Content tab: If that peer has files, these files are inserted into the left table (if not already there) and the peer info is added to the corresponding file. When one clicks on a particular file, all the peers holding that file are shown in the right table.&lt;br /&gt;
# When a peer leaves the network, there are (max) 3 updates that happen in the UI:&lt;br /&gt;
#* Bottom of the screen: '#Peers' decreases by 1 after the MainData update interval, which is set at the time of login.&lt;br /&gt;
#* Connected Peers tab: The peer is deleted from its country's right table, and if that country has no remaining peers, the country is deleted from the left table. Also, note that the peer count for that country decrements.&lt;br /&gt;
#* Content tab: This peer's record is coloured grey in the right table for every file (present in the left table) it contains.&lt;br /&gt;
# When a peer downloads a file, the 'Content' tab is updated: the file info is inserted into the left table if it is not already there, and the peer info is added to that file's right table.&lt;br /&gt;
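The per-country bookkeeping described above (count increments on join, country row dropped when its last peer leaves) can be modelled in a few lines. The data structures are illustrative, not the admin tool's actual implementation:

```python
# Toy model of the 'Connected Peers' tab bookkeeping: per-country peer
# sets grow on join, and a country row disappears when its last peer
# leaves. Peer/country names are made up for illustration.
from collections import defaultdict

peers_by_country = defaultdict(set)

def peer_joined(country, peer_id):
    peers_by_country[country].add(peer_id)

def peer_left(country, peer_id):
    peers_by_country[country].discard(peer_id)
    if not peers_by_country[country]:
        del peers_by_country[country]   # remove the empty country row

peer_joined("Canada", "p1")
peer_joined("Canada", "p2")
peer_left("Canada", "p1")
print(sorted(peers_by_country["Canada"]))   # p2 remains
peer_left("Canada", "p2")
print("Canada" in peers_by_country)         # country row removed
```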
 &lt;br /&gt;
&lt;br /&gt;
=== Geo-Fencing ===&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Content Importer ===&lt;br /&gt;
# Select a group from the combo box and enter a feed URL and output feed filename and press &amp;quot;Import and Convert&amp;quot; button. &lt;br /&gt;
#* Success: A dialog pops up saying that the files were imported into the database. Check in the database's 'Media' table that these files are present, and that the 'Groups' table is populated with the corresponding fileid and groupid information. Also, these files are inserted into the 'Media Files' tab's table, pointing to the respective group. Check that the output file is created with http://localhost inserted for every URL within that feed file.&lt;br /&gt;
#* Failure: A dialog pops up saying that the files could not be imported into the database. Check that the database is unchanged.&lt;br /&gt;
# Select a group from the combo box and enter a non-feed URL and press &amp;quot;Import&amp;quot;.&lt;br /&gt;
#* Success: A dialog pops up saying that the file was imported into the database, along with the output URL. Check in the database's 'Media' table that the file is present, and that the 'Groups' table is populated with the corresponding fileid and groupid information.&lt;br /&gt;
#* Failure: A dialog pops up saying that the file could not be imported into the database. Check that the database is unchanged.&lt;br /&gt;
# Press the &amp;quot;Import and Convert&amp;quot; button with at least one of the feed-URL fields left empty; a dialog box should appear stating that both fields must be filled.&lt;br /&gt;
# Press the &amp;quot;Import&amp;quot; button with the non-feed URL field left empty; a dialog box should appear stating that the field must be filled.&lt;br /&gt;
&lt;br /&gt;
=== User Administration ===&lt;br /&gt;
# Add a new user and check that it is inserted into the database and into the table shown below.&lt;br /&gt;
# Change the privilege level of a user using the combo box in the table and check that the change is reflected in the database.&lt;br /&gt;
# Select a row in the table and press the DEL key. The row must be deleted from both the database and the table.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Known Issues ==&lt;br /&gt;
# Server: the keep-alive interval must not be smaller than the time it takes to transmit the database file from one tracker to another. For example, with a value of &amp;quot;1 minute&amp;quot; for that interval, if the database file grows to hundreds of megabytes or several gigabytes (i.e., millions of users), the interval must be increased to &amp;quot;10 minutes&amp;quot; or so.&lt;br /&gt;
# Client: The ''preferences'' frame silently rejects settings that are outside the supported ranges. For example, a max memory usage below 4 MB is rejected without any warning. Due to limitations of the Java class and display space, we plan to address this in the new GUI design.&lt;br /&gt;
# Client: When a receiver loses its Internet connection, the pCDN client does not return a proper error code to podcast clients, such as iTunes. Instead, we wait until iTunes times out.&lt;br /&gt;
# Server: The server may crash (XXX need to validate this) if the RDBMS goes off-line.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*The following are also posted to Bugzilla (they can be removed from here, but we suggest keeping them):&lt;br /&gt;
&lt;br /&gt;
#Client: When we stop downloading a file in iTunes, the pCDN client still continues downloading.&lt;br /&gt;
#*A user might have stopped the download to save bandwidth for other applications, but the pCDN client defeats that saving.&lt;br /&gt;
#Server: Each tracker has a settings.ini file, which must be identical across all trackers. Storing this file centrally, e.g., on a file server, may be helpful, because modifying it on only some trackers can cause the entire system to malfunction.&lt;br /&gt;
#The directory of media files a peer has stored grows over time. There should be a mechanism to delete unneeded files. [Cheng 08/03/01, There is a max disk usage setting in the client GUI, which removes the least recently received files. Note, however, that the recency is not persistent, so the received time of each file is lost after rebooting the pCDN client.]&lt;/div&gt;</summary>
		<author><name>Nitin.chiluka</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=pCDN:Testplan&amp;diff=1567</id>
		<title>pCDN:Testplan</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=pCDN:Testplan&amp;diff=1567"/>
		<updated>2008-03-04T19:02:49Z</updated>

		<summary type="html">&lt;p&gt;Nitin.chiluka: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Client ==&lt;br /&gt;
# Download several new podcasts from the web servers (make sure the media/ directory contains no media files before this test). &lt;br /&gt;
# Download new podcasts from several pCDN senders.&lt;br /&gt;
# Download podcasts that were downloaded before (and still in media/ directory).&lt;br /&gt;
# Reduce the hard-disk quota to 50 MB using the user interface, and download several podcasts.&lt;br /&gt;
# Reduce the memory quota to 4 MB, and download several podcasts.&lt;br /&gt;
# Download new podcasts from several senders and unplug the Ethernet cable from one of the senders.&lt;br /&gt;
# Download new podcasts from several senders and unplug the Ethernet cables from all of the senders.&lt;br /&gt;
# Download new podcasts from several senders and unplug the Ethernet cable from the receiver. &lt;br /&gt;
# Manually truncate a media file in the media/ directory, and download that file from the podcast client.&lt;br /&gt;
# Manually truncate a media file in the media/ directory, and request that file from another pCDN client.&lt;br /&gt;
# Download several new podcasts using a pCDN client behind a NAT box. Check whether the IP/port are correctly reported at the pCDN server. Also try to use this pCDN client as the sender.&lt;br /&gt;
# Download several new podcasts using a pCDN client on a machine with multiple IPs. Use the user interface to select an outgoing IP. Then, change the NICs' IP addresses, and see whether the pCDN client continues to work after reboots.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Server ==&lt;br /&gt;
Note: In case a tracker gets disconnected, re-connection suffices (e.g., re-plugging the cable if the cable was unplugged). There is no need to re-start the tracker.&lt;br /&gt;
&lt;br /&gt;
# When trackers are up and connected to each other:&lt;br /&gt;
#*peers download files from each other as expected.&lt;br /&gt;
# Have a peer download podcasts, and then bring the current primary tracker down (or unplug its network cable):&lt;br /&gt;
#*No problem. The peer communicates with the next tracker after a few seconds.&lt;br /&gt;
#(i) Have a peer join the network when some trackers are down, or (ii) After the peer joined the network (primary tracker is T1 for example), bring T1 down and have the peer request some files:&lt;br /&gt;
#*No problem. Peer connects to the next tracker to join/request.&lt;br /&gt;
#Re-plug the network cable of a tracker that was the primary and has been disconnected from the network since we unplugged its cable:&lt;br /&gt;
#*The tracker resumes working properly, becomes the primary, and receives the most recent database file from the tracker that was the primary during the absence (cable-unplugged) period.&lt;br /&gt;
#A tracker T1, which has priority over T2 for being the primary, has been down/disconnected and has just been started/re-connected. When a peer (currently connected to T2) requests to download a file:&lt;br /&gt;
#*T2 leads the peer to T1 and the peer immediately connects to T1 to ask where to download the file from.&lt;br /&gt;
#Have a peer P1 download a file F1, wait until the backup moment (when the primary tracker T1 pushes its recent database to the others), and bring the primary tracker down after that:&lt;br /&gt;
#*Tracker T2 becomes the primary and another peer, P2, downloads file F1 from peer P1 (not from the web server).&lt;br /&gt;
#Tracker T1 was the primary, but is down/disconnected now and tracker T2 is the current primary. Peer P1 downloads file F1. T1 comes back and becomes the primary, so T2 immediately pushes the most recent database to T1. Then, peer P2 requests to download F1:&lt;br /&gt;
#*If T1 was brought down by Ctrl+C and is started again now: P2 downloads F1 from P1.&lt;br /&gt;
#*If T1 was disconnected by unplugging its network cable: when P2 now requests the file, the balloon “Sorry, you are not permitted to download the file” is shown (XXX).&lt;br /&gt;
#When a peer is connected to the network and suddenly all trackers go down:&lt;br /&gt;
#*The peer keeps re-trying to connect. Thus, when some tracker comes back up, the peer connects to it.&lt;br /&gt;
#Run trackers T1 and T2, where T1 is the primary, and a peer is connected to the network. Stop T1, so T2 becomes the primary, and the peer connects to T2 after a short time. Start T1 again. Now, for a few seconds, T2 still thinks it is the primary. Immediately unplug T2's cable.&lt;br /&gt;
#*T1 knows it is the primary, and the peer downloads files through T1.&lt;br /&gt;
#*After some seconds, T2 realizes it is the primary. The peer is able to download files through T2.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Admin Tools ==&lt;br /&gt;
=== Server Statistics === &lt;br /&gt;
# When a peer joins the network, &lt;br /&gt;
&lt;br /&gt;
=== Geo-Fencing ===&lt;br /&gt;
&lt;br /&gt;
=== Content Importer ===&lt;br /&gt;
# Select a group from the combo box and enter a feed URL and output feed filename and press &amp;quot;Import and Convert&amp;quot; button. &lt;br /&gt;
#* Success: A dialog pops up saying that the files were imported into the database. Check in the database's 'Media' table that these files are present, and that the 'Groups' table is populated with the corresponding fileid and groupid information. Also, these files are inserted into the 'Media Files' tab's table, pointing to the respective group. Check that the output file is created with http://localhost inserted for every URL within that feed file.&lt;br /&gt;
#* Failure: A dialog pops up saying that the files could not be imported into the database. Check that the database is unchanged.&lt;br /&gt;
# Select a group from the combo box and enter a non-feed URL and press &amp;quot;Import&amp;quot;.&lt;br /&gt;
#* Success: A dialog pops up saying that the file was imported into the database, along with the output URL. Check in the database's 'Media' table that the file is present, and that the 'Groups' table is populated with the corresponding fileid and groupid information.&lt;br /&gt;
#* Failure: A dialog pops up saying that the file could not be imported into the database. Check that the database is unchanged.&lt;br /&gt;
# Press the &amp;quot;Import and Convert&amp;quot; button with at least one of the feed-URL fields left empty; a dialog box should appear stating that both fields must be filled.&lt;br /&gt;
# Press the &amp;quot;Import&amp;quot; button with the non-feed URL field left empty; a dialog box should appear stating that the field must be filled.&lt;br /&gt;
&lt;br /&gt;
=== User Administration ===&lt;br /&gt;
# Add a new user and check that it is inserted into the database and into the table shown below.&lt;br /&gt;
# Change the privilege level of a user using the combo box in the table and check that the change is reflected in the database.&lt;br /&gt;
# Select a row in the table and press the DEL key. The row must be deleted from both the database and the table.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Known Issues ==&lt;br /&gt;
# Server: the keep-alive interval must not be smaller than the time it takes to transmit the database file from one tracker to another. For example, with a value of &amp;quot;1 minute&amp;quot; for that interval, if the database file grows to hundreds of megabytes or several gigabytes (i.e., millions of users), the interval must be increased to &amp;quot;10 minutes&amp;quot; or so.&lt;br /&gt;
# Client: The ''preferences'' frame silently rejects settings that are outside the supported ranges. For example, a max memory usage below 4 MB is rejected without any warning. Due to limitations of the Java class and display space, we plan to address this in the new GUI design.&lt;br /&gt;
# Client: When a receiver loses its Internet connection, the pCDN client does not return a proper error code to podcast clients, such as iTunes. Instead, we wait until iTunes times out.&lt;br /&gt;
# Server: The server may crash (XXX need to validate this) if the RDBMS goes off-line.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*The following are also posted to Bugzilla (they can be removed from here, but we suggest keeping them):&lt;br /&gt;
&lt;br /&gt;
#Client: When we stop downloading a file in iTunes, the pCDN client still continues downloading.&lt;br /&gt;
#*A user might have stopped the download to save bandwidth for other applications, but the pCDN client defeats that saving.&lt;br /&gt;
#Server: Each tracker has a settings.ini file, which must be identical across all trackers. Storing this file centrally, e.g., on a file server, may be helpful, because modifying it on only some trackers can cause the entire system to malfunction.&lt;br /&gt;
#The directory of media files a peer has stored grows over time. There should be a mechanism to delete unneeded files. [Cheng 08/03/01, There is a max disk usage setting in the client GUI, which removes the least recently received files. Note, however, that the recency is not persistent, so the received time of each file is lost after rebooting the pCDN client.]&lt;/div&gt;</summary>
		<author><name>Nitin.chiluka</name></author>
	</entry>
	<entry>
		<id>https://nmsl.cs.sfu.ca/index.php?title=pCDN:Testplan&amp;diff=1566</id>
		<title>pCDN:Testplan</title>
		<link rel="alternate" type="text/html" href="https://nmsl.cs.sfu.ca/index.php?title=pCDN:Testplan&amp;diff=1566"/>
		<updated>2008-03-04T18:59:15Z</updated>

		<summary type="html">&lt;p&gt;Nitin.chiluka: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Client ==&lt;br /&gt;
# Download several new podcasts from the web servers (make sure the media/ directory contains no media files before this test). &lt;br /&gt;
# Download new podcasts from several pCDN senders.&lt;br /&gt;
# Download podcasts that were downloaded before (and still in media/ directory).&lt;br /&gt;
# Reduce the hard-disk quota to 50 MB using the user interface, and download several podcasts.&lt;br /&gt;
# Reduce the memory quota to 4 MB, and download several podcasts.&lt;br /&gt;
# Download new podcasts from several senders and unplug the Ethernet cable from one of the senders.&lt;br /&gt;
# Download new podcasts from several senders and unplug the Ethernet cables from all of the senders.&lt;br /&gt;
# Download new podcasts from several senders and unplug the Ethernet cable from the receiver. &lt;br /&gt;
# Manually truncate a media file in the media/ directory, and download that file from the podcast client.&lt;br /&gt;
# Manually truncate a media file in the media/ directory, and request that file from another pCDN client.&lt;br /&gt;
# Download several new podcasts using a pCDN client behind a NAT box. Check whether the IP/port are correctly reported at the pCDN server. Also try to use this pCDN client as the sender.&lt;br /&gt;
# Download several new podcasts using a pCDN client on a machine with multiple IPs. Use the user interface to select an outgoing IP. Then, change the NICs' IP addresses, and see whether the pCDN client continues to work after reboots.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Server ==&lt;br /&gt;
Note: In case a tracker gets disconnected, re-connection suffices (e.g., re-plugging the cable if the cable was unplugged). There is no need to re-start the tracker.&lt;br /&gt;
&lt;br /&gt;
# When trackers are up and connected to each other:&lt;br /&gt;
#*peers download files from each other as expected.&lt;br /&gt;
# Have a peer download podcasts, and then bring the current primary tracker down (or unplug its network cable):&lt;br /&gt;
#*No problem. The peer communicates with the next tracker after a few seconds.&lt;br /&gt;
#(i) Have a peer join the network when some trackers are down, or (ii) After the peer joined the network (primary tracker is T1 for example), bring T1 down and have the peer request some files:&lt;br /&gt;
#*No problem. Peer connects to the next tracker to join/request.&lt;br /&gt;
#Re-plug the network cable of a tracker that was the primary and has been disconnected from the network since we unplugged its cable:&lt;br /&gt;
#*The tracker resumes working properly, becomes the primary, and receives the most recent database file from the tracker that was the primary during the absence (cable-unplugged) period.&lt;br /&gt;
#A tracker T1, which has priority over T2 for being the primary, has been down/disconnected and has just been started/re-connected. When a peer (currently connected to T2) requests to download a file:&lt;br /&gt;
#*T2 leads the peer to T1 and the peer immediately connects to T1 to ask where to download the file from.&lt;br /&gt;
#Have a peer P1 download a file F1, wait until the backup moment (when the primary tracker T1 pushes its recent database to the others), and bring the primary tracker down after that:&lt;br /&gt;
#*Tracker T2 becomes the primary and another peer, P2, downloads file F1 from peer P1 (not from the web server).&lt;br /&gt;
#Tracker T1 was the primary, but is down/disconnected now and tracker T2 is the current primary. Peer P1 downloads file F1. T1 comes back and becomes the primary, so T2 immediately pushes the most recent database to T1. Then, peer P2 requests to download F1:&lt;br /&gt;
#*If T1 was brought down by Ctrl+C and is started again now: P2 downloads F1 from P1.&lt;br /&gt;
#*If T1 was disconnected by unplugging its network cable: when P2 now requests the file, the balloon “Sorry, you are not permitted to download the file” is shown (XXX).&lt;br /&gt;
#When a peer is connected to the network and suddenly all trackers go down:&lt;br /&gt;
#*The peer keeps re-trying to connect. Thus, when some tracker comes back up, the peer connects to it.&lt;br /&gt;
#Run trackers T1 and T2, where T1 is the primary, and a peer is connected to the network. Stop T1, so T2 becomes the primary, and the peer connects to T2 after a short time. Start T1 again. Now, for a few seconds, T2 still thinks it is the primary. Immediately unplug T2's cable.&lt;br /&gt;
#*T1 knows it is the primary, and the peer downloads files through T1.&lt;br /&gt;
#*After some seconds, T2 realizes it is the primary. The peer is able to download files through T2.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Admin Tools ==&lt;br /&gt;
=== Server Statistics === &lt;br /&gt;
# When a peer joins the network, &lt;br /&gt;
&lt;br /&gt;
=== Geo-Fencing ===&lt;br /&gt;
&lt;br /&gt;
=== Content Importer ===&lt;br /&gt;
# Select a group from the combo box and enter a feed URL and output feed filename and press &amp;quot;Import and Convert&amp;quot; button. &lt;br /&gt;
#* Success: A dialog pops up saying that the files were imported into the database. Check in the database's 'Media' table that these files are present, and that the 'Groups' table is populated with the corresponding fileid and groupid information. Also, these files are inserted into the 'Media Files' tab's table, pointing to the respective group. Check that the output file is created with http://localhost inserted for every URL within that feed file.&lt;br /&gt;
#* Failure: A dialog pops up saying that the files could not be imported into the database. Check that the database is unchanged.&lt;br /&gt;
# Select a group from the combo box and enter a non-feed URL and press &amp;quot;Import&amp;quot;.&lt;br /&gt;
#* Success: A dialog pops up saying that the file was imported into the database, along with the output URL. Check in the database's 'Media' table that the file is present, and that the 'Groups' table is populated with the corresponding fileid and groupid information.&lt;br /&gt;
#* Failure: A dialog pops up saying that the file could not be imported into the database. Check that the database is unchanged.&lt;br /&gt;
# Press either of the two buttons with at least one of the corresponding fields left empty; a dialog box should appear stating that those fields must be filled.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== User Administration ===&lt;br /&gt;
# Add a new user and check that it is inserted into the database and into the table shown below.&lt;br /&gt;
# Change the privilege level of a user using the combo box in the table and check that the change is reflected in the database.&lt;br /&gt;
# Select a row in the table and press the DEL key. The row must be deleted from both the database and the table.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Known Issues ==&lt;br /&gt;
# Server: the keep-alive interval must not be smaller than the time it takes to transmit the database file from one tracker to another. For example, with a value of &amp;quot;1 minute&amp;quot; for that interval, if the database file grows to hundreds of megabytes or several gigabytes (i.e., millions of users), the interval must be increased to &amp;quot;10 minutes&amp;quot; or so.&lt;br /&gt;
# Client: The ''preferences'' frame silently rejects settings that are outside the supported ranges. For example, a max memory usage below 4 MB is rejected without any warning. Due to limitations of the Java class and display space, we plan to address this in the new GUI design.&lt;br /&gt;
# Client: When a receiver loses its Internet connection, the pCDN client does not return a proper error code to podcast clients, such as iTunes. Instead, we wait until iTunes times out.&lt;br /&gt;
# Server: The server may crash (XXX need to validate this) if the RDBMS goes off-line.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*The following are also posted to Bugzilla (they can be removed from here, but we suggest keeping them):&lt;br /&gt;
&lt;br /&gt;
#Client: When we stop downloading a file in iTunes, the pCDN client still continues downloading.&lt;br /&gt;
#*A user might have stopped the download to save bandwidth for other applications, but the pCDN client defeats that saving.&lt;br /&gt;
#Server: Each tracker has a settings.ini file, which must be identical across all trackers. Storing this file centrally, e.g., on a file server, may be helpful, because modifying it on only some trackers can cause the entire system to malfunction.&lt;br /&gt;
#The directory of media files a peer has stored grows over time. There should be a mechanism to delete unneeded files. [Cheng 08/03/01, There is a max disk usage setting in the client GUI, which removes the least recently received files. Note, however, that the recency is not persistent, so the received time of each file is lost after rebooting the pCDN client.]&lt;/div&gt;</summary>
		<author><name>Nitin.chiluka</name></author>
	</entry>
</feed>