This page documents the development of a discrete event simulator for P2P video streaming applications. This simulator captures important features of data-driven video streaming systems. In particular, it is designed to evaluate: (i) the performance of various segment scheduling algorithms; (ii) the potential of network coding in multi-layer P2P video streaming systems.
Class Diagrams
- use long for time/offset in msec (a 32-bit int would roll over after about 24.85 days)
class Layer { // a video layer; we divide each layer into fixed-size blocks and keep track of
// the block size and the number of blocks here
int layer; // layer number it belongs to, set to zero for non-scalable video
int blockSize; // blocks are fixed size, in bytes
int noBlocks; // how many blocks in this layer
int dataSize; // aggregate size of all blocks
}
class Frame { // video frame, read from trace file
int no; // serial number
long deadlineOffset; // deadline offset compared to the start of the video, in msec
int totalSize; // total frame size in bytes, including all layers
}
class Segment { // packetized frames
int no; // serial number
Vector<Frame> frames; // references to the included frames
int totalSize; // aggregate size in bytes
Hashtable<Integer, Layer> layers; // references to the layers of this segment, key is the layer id
long getDeadline(); // returns the worst-case deadline of this segment (the first frame's? see the sketch below)
}
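One possible reading of getDeadline(), as a sketch only: the worst case is the earliest frame deadline in the segment, which for frames in playback order is that of the first frame.
  // Sketch: the worst-case deadline is the smallest deadlineOffset among the contained frames.
  long getDeadline() {
      long deadline = Long.MAX_VALUE;
      for (Frame f : frames)
          deadline = Math.min(deadline, f.deadlineOffset);
      return deadline;
  }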
class Video { // a media file, shared by a group of peers
String filename; // video trace file path and name
Hashtable<Integer, Frame> frameTrace; // frames read from the trace file
Hashtable<Integer, Segment> segmentTrace; // segments generated by prepareSegments(...)
void prepareSegments(int noFrame); // packetize frames into segments (see the sketch below);
// noFrame indicates how many frames we put in one segment;
// we might implement other packetization schemes later.
}
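A minimal sketch of prepareSegments(), assuming frameTrace is keyed by contiguous frame serial numbers starting at 0 and that Segment.frames is initialized to an empty Vector:
  void prepareSegments(int noFrame) {
      segmentTrace = new Hashtable<Integer, Segment>();
      Segment seg = null;
      for (int f = 0; f < frameTrace.size(); f++) {
          if (f % noFrame == 0) {                    // start a new segment every noFrame frames
              seg = new Segment();
              seg.no = f / noFrame;
              segmentTrace.put(seg.no, seg);
          }
          Frame frame = frameTrace.get(f);
          seg.frames.add(frame);                     // reference the included frame
          seg.totalSize += frame.totalSize;          // aggregate size in bytes
      }
  }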
interface Node { // a network node; currently either a tracker or a peer
}
class Tracker implements Node { // holds a set of groups/videos
Vector<Group> groups; // all the video groups
void addPeer(Group group, Peer peer); // insert a peer into a video group
}
class Peer implements Node { // represent a running peer
int id; // sequential peer id
int ingressBW; // incoming bandwidth in bps
int egressBW; // outgoing bandwidth in bps
Hashtable<Video, Vector<Connection>> senders; // peers that we may send requests to
Hashtable<Video, Vector<Connection>> receivers; // peers that we may send data to
Connection tracker; // control connection to the tracker
Hashtable<Video, VideoBufMap> vBufMaps; // points to the video-level buffer map
void join(Video video); // the peer joins a group
void leave(Video video); //the peer leaves the group
// start() should call a scheduling algorithm to do the scheduling
void start(Video video); // start downloading a video
void stop(Video video); // stop downloading
}
class Group { // peers that have downloaded or want to download a Video
Video video; // media shared among peers
Vector<Peer> peers; // peers in this group
Vector<Peer> join(Peer peer); // add a new peer to this group and return a list of candidate senders
void leave(Peer peer); // remove a peer from this group
}
class Connection { // end to end network link between two peers
Peer peer1; // one end
Peer peer2; // the other
long delay; // transmission delay in msec
int e2eBW; // end-to-end bandwidth in bps
int estimateRate; // estimated available rate (possibly based on history)
}
class VideoBufMap {
Hashtable<Integer, SegBufMap> sBufMaps; // points to segment bufmap
}
class SegBufMap {
Hashtable<Integer, LayerBufMap> lBufMaps; // points to layer bufmap
}
class LayerBufMap {
boolean avail; // availability bit for this (video, segment, layer) tuple
NCBuf ncBuf; // network coding structure
}
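To illustrate how the three-level buffer map is meant to be walked, a small hypothetical lookup helper (isAvailable is not part of the design above):
  // Sketch: is (video, segment, layer) available at peer p?  Any missing level means "no".
  boolean isAvailable(Peer p, Video video, int segID, int layerID) {
      VideoBufMap vm = p.vBufMaps.get(video);
      if (vm == null) return false;
      SegBufMap sm = vm.sBufMaps.get(segID);
      if (sm == null) return false;
      LayerBufMap lm = sm.lBufMaps.get(layerID);
      return lm != null && lm.avail;
  }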
class NCBuf {
boolean servCapability; // serving capability of this segment:
// whether this segment is capable of serving other peers (applicable when network coding is used)
int noCodedBl; // number of encoded blocks received so far for this (video, segment, layer)
BufferedMatrix matrix; // coefficients and their corresponding encoded blocks received up to now
}
class BufferedMatrix{ // saves the coefficients and their corresponding encoded blocks received up to now
Vector<Vector<Integer>> coef; // coefficients of the corresponding encoded blocks
Vector<Integer> encodedValue; // encoded blocks
Vector<Integer> decode(); // perform the decoding process and return the original blocks of a segment
void encode(); // perform the encoding process on the blocks of a segment (see the sketch below)
}
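To make encode()/decode() concrete, here is a minimal, self-contained sketch of random linear network coding over GF(2) (plain XOR coding); a production implementation would more likely work over GF(2^8), and the class and method names below are illustrative rather than part of the design above:
  import java.util.Random;

  // Sketch of random linear network coding over GF(2): an encoded block is the XOR of a random
  // subset of the original blocks, and decoding is Gaussian elimination on the 0/1 coefficients.
  class NCSketch {
      static final Random rnd = new Random();

      // Produce one coded block: coef[i] == 1 means original block i is XORed into coded[].
      // coef.length == original.length; coded[] must be zero-initialized and blockSize bytes long.
      static void encode(byte[][] original, byte[] coef, byte[] coded) {
          for (int i = 0; i < original.length; i++) {
              coef[i] = (byte) rnd.nextInt(2);
              if (coef[i] == 1)
                  for (int b = 0; b < coded.length; b++) coded[b] ^= original[i][b];
          }
      }

      // Recover the n original blocks from n coded blocks (coefs is n x n, codes is n x blockSize).
      // Returns null if the coefficient matrix is singular, i.e. the coded blocks were not innovative.
      static byte[][] decode(byte[][] coefs, byte[][] codes) {
          int n = coefs.length;
          for (int col = 0; col < n; col++) {
              int pivot = -1;                       // find a row with a 1 in this column
              for (int r = col; r < n; r++) if (coefs[r][col] == 1) { pivot = r; break; }
              if (pivot < 0) return null;
              byte[] t = coefs[col]; coefs[col] = coefs[pivot]; coefs[pivot] = t;   // swap rows
              t = codes[col]; codes[col] = codes[pivot]; codes[pivot] = t;
              for (int r = 0; r < n; r++)           // clear this column in every other row
                  if (r != col && coefs[r][col] == 1) {
                      for (int c = 0; c < n; c++) coefs[r][c] ^= coefs[col][c];
                      for (int b = 0; b < codes[r].length; b++) codes[r][b] ^= codes[col][b];
                  }
          }
          return codes;   // codes[i] is now original block i
      }
  }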
class SysParam{ // system parameters that can be changed in each iteration of the simulation
float portionServ; // after receiving this portion of coded blocks out of the total number of blocks,
// the segment becomes capable of serving others [in case of network coding]
long skipSec; // time duration skipped from the current playback point for a newly joined peer
// [in case of network coding]
float portionEncoding; // a given peer performs encoding over this portion of the encoded blocks
// received so far; the blocks for encoding are chosen randomly out of all
// received encoded blocks [in case of network coding]
}
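For example, the serving-capability flag in NCBuf could be derived from portionServ roughly as follows (a sketch; isServable is a hypothetical helper, not part of the design above):
  // Sketch: a (segment, layer) becomes capable of serving others once the number of received
  // coded blocks reaches portionServ of the layer's total number of blocks (network coding case).
  boolean isServable(NCBuf buf, Layer layer, SysParam param) {
      return buf.noCodedBl >= (int) Math.ceil(param.portionServ * layer.noBlocks);
  }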
// XXX TODO XXX
class SlidingWindow{ // the sliding window contains all the segments the receiver is currently interested in
// set it to a fixed size, for example 1 minute of video
// the receiver first informs all its neighbors which part of the video it is currently interested in,
// then its neighbors send their bitmaps of that part to the receiver for scheduling
int startOffset; //the offset of the starting segment compared with the start of the video
int endOffset; // the offset of the ending segment compared with the start of the video
double slidingWindowSize(); //get the size of the sliding window
}
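If the start/end offsets are interpreted as time offsets in msec (as elsewhere in the design), slidingWindowSize() could be as simple as the following sketch:
  double slidingWindowSize() {
      return (endOffset - startOffset) / 1000.0;   // window length in seconds
  }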
class exchangeWindow{ // the exchange window is the front part of the sliding window.
// The unavailable segments in the exchange window will be requested by the peer
// usually we set it to a fixed size, for example, 10 seconds of video
// the fields of this class are similar to those of the SlidingWindow class
int startOffset;
int endOffset;
void move(); // move the exchange window forward after a scheduling period and do the next scheduling
double getSlidingWindowSize(); // get the size of the exchange window
}
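A possible sketch of move(), again assuming the offsets are time offsets in msec; the actual policy (how far to advance) may differ:
  void move() {
      int length = endOffset - startOffset;   // keep the window length fixed, e.g. 10 seconds
      startOffset = endOffset;                // advance past the part that was just scheduled
      endOffset = startOffset + length;
  }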
interface Scheduler{ // any class that wants to do scheduling should implement this interface
void schedule() throws SimException; // the scheduling period is specified in the concrete scheduling algorithm class
}
class resultItem{ //this class records a scheduling item
int peerID;
int segID;
long startTime; //start time of the peer to transmit this segment
}
class SchedulingResult{ // this class is used to record scheduling results and do some statistics,
// for example, calculate the average delivery rate
Vector<resultItem> item; // result item: <peerID, segID, startTime> (we may write this vector to a log file later)
long runningTime; //running time of the algorithm
double deliveryRatio; // average delivery ratio
double getDeliveryRatio(); //get the actual average delivery ratio
double getRunningTime(); //get the running time
}
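A sketch of getDeliveryRatio() following the metric defined at the end of this page (on-time scheduled segments over total scheduled segments). The Video parameter and the finishTime() helper, which returns when the transfer of a scheduled item completes, are hypothetical and only for illustration:
  double getDeliveryRatio(Video video) {
      if (item.isEmpty()) return 0.0;
      int onTime = 0;
      for (resultItem it : item) {
          Segment seg = video.segmentTrace.get(it.segID);
          if (finishTime(it) <= seg.getDeadline())   // delivered before the playback deadline
              onTime++;
      }
      return (double) onTime / item.size();
  }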
class SEFF implements Scheduler{ // the Serial Earliest Finish-time First scheduling algorithm
SchedulingResult result; // create a schedulingResult object to store the result
void schedule(); //implement the Scheduler interface
void writeResult(); //write scheduling result to the result object of one scheduling period
//including calculating the average delivery ratio and running time
}
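One plausible reading of the Serial Earliest Finish-time First idea, as a sketch only (the actual algorithm may differ): missing segments are considered serially in playback order, and each one is assigned to the sender that can finish it earliest. All names below are illustrative; estimateRate is assumed positive and deadlines are assumed to share the clock origin with the simulation time.
  // Sketch: "freeAt" tracks when each sender becomes idle again.
  void scheduleSketch(Vector<Integer> missingSegs, Vector<Connection> senders, Video video, long now) {
      Hashtable<Connection, Long> freeAt = new Hashtable<Connection, Long>();
      for (Connection c : senders) freeAt.put(c, now);
      for (int segID : missingSegs) {
          Segment seg = video.segmentTrace.get(segID);
          Connection best = null;
          long bestFinish = Long.MAX_VALUE;
          for (Connection c : senders) {
              long xmit = (long) seg.totalSize * 8 * 1000 / c.estimateRate;   // msec to push the segment
              long finish = freeAt.get(c) + xmit + c.delay;                   // arrival time at the receiver
              if (finish < bestFinish) { bestFinish = finish; best = c; }
          }
          if (best == null || bestFinish > seg.getDeadline()) continue;       // cannot deliver on time, skip
          freeAt.put(best, bestFinish - best.delay);                          // sender busy until it finishes sending
          // record a <peerID, segID, startTime> resultItem in the SchedulingResult here
      }
  }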
class LRF implements Scheduler{ // the Local Rarest First algorithm
SchedulingResult result; // create a schedulingResult object to store the result
void schedule(); //implement the Scheduler interface
void writeResult(); //write scheduling result to the result object of one scheduling period
//including calculating the average delivery ratio and running time
}
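Similarly, a sketch of the Local Rarest First selection rule; has(n, segID) is a hypothetical availability lookup that, in this design, would consult the neighbor's buffer maps:
  // Sketch: among the segments still missing in the exchange window, pick the one held by the fewest neighbors.
  int pickRarest(Vector<Integer> missingSegs, Vector<Peer> neighbors) {
      int rarest = -1, minCount = Integer.MAX_VALUE;
      for (int segID : missingSegs) {
          int count = 0;
          for (Peer n : neighbors)
              if (has(n, segID)) count++;
          if (count > 0 && count < minCount) { minCount = count; rarest = segID; }
      }
      return rarest;   // -1 means no neighbor holds any of the missing segments
  }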
class RndGen { // common random number/event generators, to deal with
// 1. event: add new receiver for a group/video
// 2. event: remove a receiver from a group/video
// 3. number: peer's bandwidth
}
class SimException extends Exception { // thrown by event handlers and schedulers
}
interface EventHandler {
void handle() throws SimException; // callback function
}
class Event {
long time; // when this event happens
Node src; // which peer generates this event
Node dst; // the destination of this event
Vector<EventHandler> handlers; //process an event and update states accordingly
int type; // enum, see below
// 1. connectSent: peer 1 makes a connection to peer 2, and add each other into neighbors
// 2. connectArrived: the connect message is received by peer 2
// 3. acceptSent: the potential sender reply with an accept message
// 4. acceptArrived: the accept message arrives at the connection originator
// 5. requestSent: a receiver sends a high-priority request message to a sender
// 6. requestArrived: a request message gets to a sender
// 7. dataSent: a sender sends a data segment
// 8. dataArrived: a data segment arrives at a receiver
// 9. joinSent: a join message to the tracker
// 10. joinArrived: a join message reaches the tracker
// 11. responseSent: a response message sent by the tracker
// 12. responseArrived: a response message reaches the receiver
// 13. scheduleStarted: a scheduling period started.
// 14. bitmapSyncSent: the receiver informs all its neighbors which part of the video it is
//     currently interested in, in order to get a small bitmap from them for scheduling;
//     this message is sent when the receiver jumps to another part of the video or a new neighbor joins.
// 15. bitmapSyncArrived: the sender receives the bitmap synchronization information
// comment: what should we do when a scheduling period expires?
//There are two situations: 1. all scheduled segments arrived; 2. some segments are still in transmission.
void addHandler(EventHandler handler); // add an additional event handler
void rmvHandler(EventHandler handler); // remove an additional event handler
}
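Usage of the handler callback, as a small sketch (the body of handle() is only an example of the kind of state update a handler would perform):
  Event ev = new Event();
  ev.addHandler(new EventHandler() {
      public void handle() throws SimException {
          // e.g. on a dataArrived event: update the receiver's buffer maps for the delivered
          // segment and, if network coding is used, insert the coded block into its NCBuf
      }
  });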
class EventQueue {
SortedMap<Long, Event> queue; // pending events sorted by their time
void dispatch(Event event); // sequentially invoke the event's handlers
}
class Simulator {
long simTime; // elapsed simulation time in msec
EventQueue events; // pending events that will be processed
void dispatchEvent(); // loop forever, processing the next pending event (see the sketch below)
}
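A sketch of the main dispatch loop using the classes above. Note that a SortedMap keyed only by time can hold a single event per timestamp; a real implementation might key on a (time, sequence number) pair or keep a list of events per timestamp.
  void dispatchEvent() throws SimException {
      while (!events.queue.isEmpty()) {
          Long t = events.queue.firstKey();     // earliest pending event
          Event e = events.queue.remove(t);
          simTime = t;                          // advance the simulation clock
          for (EventHandler h : e.handlers)
              h.handle();                       // handlers update peer/tracker state
      }
  }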
Some comments:
Performance metrics for scheduling
Here are some performance metrics:
1. Average delivery ratio: number of on-time scheduled segments over the total number of segments to be scheduled
2. Load balance among senders
3. Initial buffering time
4. Time and space complexity of the scheduling algorithm
