Showing posts with label protocols.

Friday, May 2, 2014

NFV & SDN part I

In their eternal quest to reduce CAPEX, mobile network operators have been egging on telecom infrastructure manufacturers to adopt more open, cost effective computing capabilities.

You will remember how, close to 15 years ago, all telecom platforms had to be delivered on hardened, NEBS-certified Sun Solaris SPARC servers with a full-fledged Oracle database to be "telecom grade". Little by little, x86 platforms, MySQL databases and Linux OS have penetrated the ecosystem. It was originally a vendor-driven initiative to reduce their third-party costs. The cost reduction was passed on to MNOs who were willing to risk implementing these new platforms. We have seen their implementation grow from greenfield operators in emerging countries to mature markets, first at the periphery of the network, slowly making their way to business-critical infrastructure.

We are seeing today an analogous push to reduce costs further and ban proprietary hardware implementations with NFV. Pushed initially by operators, this initiative sees most network functions first transitioning from hardware to software, then being run in virtualized environments on off-the-shelf hardware.

The first companies to embrace NFV have been startups like Affirmed Networks. First met with scepticism, the company seems to have been able to design from scratch and commercially deploy a virtualized Evolved Packet Core in only 4 years. It certainly helps that the company was funded to the tune of over 100 million dollars by big names such as T-Ventures and Vodafone, providing not only funding but presumably lab capacity at their parent companies to test and fine-tune the new technology.

Since then, vendors have started embracing the trend and are moving more or less enthusiastically towards virtualization of their offerings. We have seen different approaches emerge, from the simple porting of their software to Xen or VMware virtualized environments to more mature OpenStack / OpenFlow platforms.

I am actively investigating the field and I have to say some vendors' strategies are head-scratching. In some cases, moving to a virtualized environment is counter-productive. Some telecom products are highly CPU-intensive or specialized and require dedicated resources to attain high performance and scalability in a cost-effective package. Deep packet inspection and video processing seem to be good examples. Even the vendors who have virtualized their appliance or solution will admit, when pushed, that virtualization comes at a performance cost given the state of the technology today.

I have been reading the specs (OpenFlow, OpenStack) and I have to admit they seem far from the level of detail that we usually see in telco specs to be usable. There is a lot of abstraction dedicated to redefining switching, but not much in terms of call flows, datagrams, semantics, service definition, etc...

How the hell does one go about launching a service in a multi-vendor environment? Well, one doesn't. There is a reason why most NFV initiatives are still at the plumbing level, investigating SDN, SDDC, etc., or taking a single-vendor / single-service approach. I haven't been convinced yet by anyone's implementation of multi-vendor management, let alone "service orchestration". We are witnessing today islands of service virtualization in hybrid environments. We are still far from function virtualization per se.

The challenges are multiple: 
  • Which is better: a dedicated platform with a low footprint and power requirement that might be expensive and centralized, or thousands of virtual instances occupying hundreds of servers that might be cheap (COTS) individually but collectively not very cost or power efficient?
  • Will network operators trade CAPEX for OPEX when they need to manage thousands of applications running virtually on IT platforms? How will their personnel, trained to troubleshoot problems by following the traffic and signalling path, adapt to this fluid, non-descript environment? 
We are still early in this game, but many vendors are starting to purposefully position themselves in this space to capture the next wave of revenue. 

Will the lack of a programmable multi-vendor control environment force network operators to ultimately be virtualized themselves, relinquishing network management to the large IT and telecom equipment manufacturers? This is one of the questions I will attempt to answer going forward as I investigate in depth the state of the technology and compare it with vendors' and MNOs' claims and assertions.
Stay tuned; more to come with a report on the technology, market trends and vendor capabilities in this space later this year.

Tuesday, March 18, 2014

YouTube Sliced Bread: mobile indigestion?



Since 2012, YouTube has been trying to dramatically reduce the time it takes for a video to start from the moment you press play. Flash Networks (Mobixell at the time) was among the first to detect a new proprietary implementation called Sliced Bread.

The matter might seem trivial, but internal research from Google shows that most users find a waiting time exceeding 200ms unacceptable for short videos. 
YouTube has been developing a proprietary protocol, based on DASH (HTTP adaptive streaming), to decrease latency and start time for its videos.

YouTube Sliced Bread essentially compares the DASH ABR manifest with the speed and bandwidth available at the moment you press play and dynamically selects the closest encoding rate. Adjacent stream segments are prepped in real time so that any change in available bit rate dynamically triggers a change in encoding bit rate stream. The sliced bread analogy comes in when you think of pressing play as ordering a pre-sliced loaf of bread. Only instead of getting slices all of the same size, your video player looks at the size of the connection over time and serves you, slice by slice, HD 1080, 720, 360… based on what the network can support.
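The per-segment selection logic can be sketched in a few lines. This is a minimal illustration, not YouTube's actual implementation; the bit rate ladder and throughput samples are hypothetical:

```python
# Hypothetical illustration of per-segment adaptive bit rate selection.
# The available rates would come from a DASH manifest; the measured
# throughput from the player's own download statistics.

AVAILABLE_RATES_KBPS = [235, 560, 1050, 2350, 4300]  # e.g. 360p ... 1080p

def pick_rate(measured_throughput_kbps, rates=AVAILABLE_RATES_KBPS):
    """Pick the highest encoding rate the connection can sustain,
    falling back to the lowest rate on a very poor connection."""
    candidates = [r for r in rates if r <= measured_throughput_kbps]
    return max(candidates) if candidates else min(rates)

# The player re-evaluates before each segment ("slice") request:
throughput_samples = [3000, 1200, 400, 2500]   # kbps, varying over time
chosen = [pick_rate(t) for t in throughput_samples]
print(chosen)  # -> [2350, 1050, 235, 2350]
```

Real players also smooth the throughput estimate and account for buffer levels, but the slice-by-slice re-selection above is the core of the idea.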

YouTube claims that Sliced Bread has reduced video rebuffering by 40% on fixed networks. Additionally, until recently, YouTube used to download the viewing page, the CSS script and the video player for every video you clicked on. The company is now implementing logic to allow the player to persist from video to video, so that it does not have to be downloaded all over again. 

Furthermore, YouTube will soon start pre-loading related video content, so that if you click on a suggested video, it is already there. These “tricks” might work well in a fixed environment, where start time is paramount and video traffic volume is not relevant, but in a congested wireless network, these types of features would have a negative impact on network capacity and ultimately the user experience. I have warned before about content providers' tendency to design services and technology for fixed line first.

The protocol is starting to make its appearance in mobile networks and, while not yet dominating the YouTube experience, it is a perfect example of why a video service designed for the internet, to be viewed on a fixed network, can have catastrophic consequences on a mobile network if not correctly adapted. This is one of the many subjects I analyse in my report "Mobile video monetization and optimization 2014".

Thursday, September 26, 2013

LTE Asia: transition from technology to value... or die

I am just back from LTE Asia in Singapore, where I chaired the track on Network Optimization. The show was well attended with over 900 people by Informa's estimate. 

Once again, I am a bit surprised and disappointed by the gap between operators and vendors' discourse.

By and large, operators who came (SK, KDDI, KT, Chungwha, HKCSL, Telkomsel, Indosat to name but a few) had excellent presentations on their past successes and current challenges, highlighting the need for new revenue models, a new content (particularly video) value chain and better customer engagement.

Vendors of all stripes seem to consistently miss the message and try to push technology when their customers need value. I appreciate that the transition is difficult and, as I was reflecting with a vendor's executive at the show, selling technology feels somewhat safer and easier than selling value.
But, as many operators are finding out on their home turf, their consumers do not care much about technology any more. It is through brand, service, image and value that OTT service providers are winning consumers' mind share. Herein lie the risk and the opportunity. Operators need help to evolve and reinvent the mobile value chain. 

The value proposition of vendors must evolve towards solutions such as intelligent roaming, 2-way business models with content providers, service type prioritization (messaging, social, video, entertainment, sports...), bundling and charging...

At the heart of this necessary revolution is something that makes many uneasy. DPI and traffic classification relying on ports and protocols is the basis of today's traffic management and is rapidly becoming obsolete. A new generation of traffic management engines is needed. The ability to recognize content and service types at a granular level is key. How can the mobile industry evolve in the OTT world if operators are not able to recognize whether content is user-generated or Hollywood-produced? How can operators monetize video if they cannot detect, recognize, prioritize and assure advertising content?

Operators have some key assets, though. Last-mile delivery, accurate customer demographics, the billing relationship and location must be leveraged. YouTube knows whether you are on an iPad or a laptop, but not necessarily whether your cellular interface is 3G, HSPA or LTE... It certainly can't see whether a user's poor connection is the result of network congestion, spectrum interference, distance from the cell tower or throttling because the user exceeded their data allowance... There is value there, if operators are ready to transform themselves and their organization to harvest and sell value, not access...

Opportunities are many. Vendors who continue to sell SIP, IMS, VoLTE, Diameter and their next-generation hip equivalents LTE Advanced, 5G, cloud, NFV... will miss the point. None of these are of interest to the consumer. Even if operators insist on buying or talking about technology, services and value will be key to success... unless you are planning to be an M2M operator, but that is a story for another time.

Monday, May 27, 2013

All bytes are not created equal...



Recent discussions with a number of my clients have brought to light a fundamental misconception. Mobile video is not data. It is not a different use case of data or a particular form of data; it is simply a different service. The sooner network operators understand that they cannot count, measure and control video the same way as browsing data, the sooner they will have a chance to integrate the value chain of video delivery.

Deep packet inspection engines count bytes, categorize traffic per protocol, bearer and URL, and throttle and prioritize data flows based on rules that are video-myopic. Their concern is Quality of Service (QoS), not Quality of Experience (QoE). Policy and charging engines decide, meter and limit traffic in real time based on the incomplete picture painted by DPIs and other network elements.

Not understanding whether traffic is video (or assuming it is video based only on the URL) can prove catastrophic for the user experience and the user's bill. How can traffic management engines instantiate video charging and prioritization rules if they cannot differentiate between download, progressive download and adaptive bit rate? How can they decide what the appropriate bandwidth for a service is if they do not understand how the video is encoded, what bit rates are available, whether it is HD or SD, and what the user expects?
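To make the distinction concrete, here is a deliberately simplified, hypothetical heuristic for telling delivery modes apart from a resource name and response characteristics. Real DPI engines inspect payloads and flow behaviour; this only illustrates the categories discussed above:

```python
# Hypothetical heuristic distinguishing video delivery modes.
# The signals (manifest extensions, Content-Type, Range requests)
# are illustrative, not a production DPI rule set.

def classify_video_delivery(url, content_type, has_range_requests):
    url = url.lower()
    if url.endswith((".m3u8", ".mpd")):        # HLS / DASH manifests
        return "adaptive bit rate"
    if content_type.startswith("video/"):
        # Progressive download plays while fetching, typically via
        # HTTP Range requests; plain download fetches the whole file.
        return "progressive download" if has_range_requests else "download"
    return "not video"

print(classify_video_delivery("movie.mpd", "application/dash+xml", False))
# -> adaptive bit rate
```

Note that a rule set like this still says nothing about encoding rate, HD vs. SD or user expectation, which is exactly the gap described above.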

Content providers naturally push content of the highest quality that the network can afford, while smartphones and tablets try to grab as much network capacity as is available at the establishment of a session to guarantee the user experience, often to the detriment of other connections and devices. It is wrong to assume that the quality of experience in video is the result of a harmonious negotiation between content, device and network.
It is actually quite the opposite, each party pulling in its own direction with conflicting priorities.
User experience suffers as a result, and we have started to see instances of users complaining or churning due to bad video experience.

All bytes are not created equal. Video weighs heavier and carries a larger emotional attachment than email or browsing services when it comes to the user's experience of a network's quality. This is one of the subjects I will be presenting at Informa's Mobile Video Global Summit in Berlin next week.



Friday, September 28, 2012

How to weather signalling storms

I was struck a few months back when I heard an anecdote from Telecom Italia about a signalling storm in their network, bringing unanticipated outages. After investigation, the operator found out that the launch of Angry Birds on Android had a major difference from the iOS version. It was a free app monetized through advertisement. Ads were being requested and served between each level (or retry).
 If you are like me, you can easily go through 10 or more levels (mmmh... retries) in a minute. Each one of these created a request to the ad server, which generated queries to the subscriber database, location and charging engine over Diameter, resulting in +351% Diameter traffic.
The traffic generated by one chatty app brought the network to its knees within days of its launch.
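A back-of-the-envelope calculation shows how quickly this compounds. The per-ad fan-out and user count below are illustrative assumptions, not Telecom Italia's figures:

```python
# Illustrative arithmetic: one ad request between levels fans out into
# several Diameter queries in the core (subscriber DB, location, charging).
levels_per_minute = 10        # levels (or retries) a player goes through
diameter_queries_per_ad = 3   # subscriber database + location + charging
active_players = 100_000      # assumed concurrent players

msgs_per_second = (levels_per_minute * diameter_queries_per_ad
                   * active_players / 60)
print(f"{msgs_per_second:,.0f} extra Diameter messages/s")  # -> 50,000
```

A single chatty app behaviour, multiplied across a subscriber base, easily dwarfs the signalling load the core was dimensioned for.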



As video traffic congestion becomes more prevalent and we see operators starting to measure subscribers' satisfaction in that area, we have seen several solutions emerge (video optimization, RAN optimization, policy management, HSPA+ and LTE upgrades, new pricing models...).
Signalling congestion, by contrast, remains an emerging issue. I sat down yesterday with Tekelec's Director of Strategic Marketing, Joanne Steinberg, to discuss the topic and what operators should do about it.
Tekelec recently (September 2012) released its LTE Diameter Signaling Index. This report projects that Diameter traffic will increase at a 252% CAGR until 2016, from 800k to 46 million messages per second globally. This is due to a radical change in application behavior, as well as the new pricing and business models put in place by operators. Policy management, QoS management, metered charging, two-sided business models and M2M traffic are some of the culprits highlighted in the report.

Diameter is a protocol that was originally invented to replace RADIUS, mainly for the purposes of Authentication, Authorization and Accounting (AAA). Real-time charging and the evolution to IN drove its implementation. The protocol was created to be lighter than RADIUS while remaining extensible, with a variety of proprietary fields that can be added for specific uses. Its extensibility was the main criterion for its adoption as the protocol of choice for policy and charging functions.
A victim of its own success, the protocol is now used in LTE for a variety of tasks, ranging from querying subscriber databases (HSS) to querying user balances and carrying transactional charging and policy traffic.

Tekelec's signaling products, together with its policy product line (inherited from the Camiant acquisition), provide a variety of solutions to handle the increasing load of Diameter signaling traffic, and the company is proposing its Diameter Signaling Router as a means to "manage, throttle, load balance and route" Diameter traffic.

In my opinion, data browsing is less predictable than voice or messaging traffic when it comes to signalling. While in the past one message at the establishment of the session, one at the end and optionally a few interim updates were sufficient, today's sophisticated business models and price plans require a lot of signalling traffic. Additionally, Diameter is starting to be used outside of the core packet network, extending towards the RAN (for RAN optimization) and towards the internet (for OTT two-sided business models). OTT content and app providers do not understand the functioning of mobile networks, and we cannot expect device and app signalling traffic to self-regulate. While some 3GPP effort is being expended to evaluate new architectures and rules such as fast dormancy, the problem is likely to grow faster than the standards' capacity to contain it. I believe that Diameter management and planning is necessary for network operators who are departing from all-you-can-eat data plans towards policy-driven traffic and charging models.

Tuesday, January 3, 2012

For or against Adaptive Bit Rate? part I: what is ABR?

Adaptive Bit Rate streaming (ABR) was invented to enable content providers to offer video streaming services in environments in which bandwidth fluctuates. The benefit is clear: as a connection's capacity changes over time, the video carried over that connection can vary its bit rate, and therefore its size, to adapt to network conditions. The player (or client) and the server exchange discrete information on the control plane throughout the transmission, whereby the server exposes the available bit rates for the video being streamed and the client selects the appropriate version based on its reading of the current connection conditions.

The technology is fundamental to help accommodate the growth of online video delivery over unmanaged (OTT) and wireless networks.
The implementation is as follows: a video file is encoded into different streams, at different bit rates. The player can "jump" from one stream to the other as the condition of the transmission degrades or improves. A manifest document is exchanged between the server and the player at the establishment of the connection so the player can understand the list of versions and bit rates available for delivery.

Unfortunately, the main content delivery technology vendors then started to diverge from the standard implementation to differentiate themselves and better control the user experience and the content provider community. We have reviewed some of these vendor strategies here. Below are the main implementations:

  • Apple HTTP Adaptive (Live) Streaming (HLS) for iPhone and iPad: This version is implemented over HTTP and MPEG2 TS. It uses a proprietary manifest called m3u8. Apple creates different versions of the same stream (2 to 6, usually) and breaks each stream down into little “chunks” to facilitate the client jumping from one stream to the other. This results in thousands of chunks for each stream, identified through timecode. Unfortunately, the content provider has to deal with the pain of managing thousands of fragments for each video stream. A costly implementation.
  • Microsoft IIS Smooth Streaming (Silverlight, Windows Phone 7): Microsoft has implemented fragmented MP4 (fMP4) to enable a stream to be separated into discrete fragments, again to allow the player to jump from one fragment to the other as conditions change. Microsoft uses AAC for audio and AVC/H.264 for video compression. The implementation allows each video and audio stream, with all its fragments, to be grouped in a single file, providing a more cost-effective solution than Apple's.
  • Adobe HTTP Dynamic Streaming (HDS) for Flash: Adobe uses a proprietary format called F4F to allow delivery of Flash videos over RTMP and HTTP. The Flash Media Server creates multiple streams, at different bit rates but also different quality levels. Streams are full length (the duration of the video).

None of the implementations above are interoperable, from a manifest or from a file perspective, which means that a content provider with one 1080p HD video could end up creating one version for each player, multiplied by the number of streams to accommodate the bandwidth variation, multiplied by the number of segments, chunks or files for each version... A simple video can thus result in 18 versions and thousands of fragments to manage. This is the reason why only 4 to 6% of current videos are transmitted using ABR. The rest of the traffic uses good old progressive download, with no capacity to adapt to changes in bandwidth, which in turn explains why wireless network operators (over 60 of them) have elected to implement video optimization systems in their networks. We will look, in my next posts, at the pros and cons of ABR and the complementary and competing technologies that achieve the same goals.
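The multiplication is easy to sketch. The segment duration and per-player bit rate count below are illustrative assumptions of the era, not figures from any one content provider:

```python
# Illustrative count of what one title costs a content provider who
# must serve all three ABR ecosystems.
players = 3            # HLS, Smooth Streaming, HDS
bitrates = 6           # streams per player to cover bandwidth variation
versions = players * bitrates
print(versions)        # -> 18 versions of the same video

# For chunked implementations such as HLS, each version is also split:
duration_s = 7200      # a two-hour movie
chunk_s = 10           # a typical HLS segment length at the time
print(versions * duration_s // chunk_s)  # -> 12960 fragments to manage
```

Every extra player format or bit rate tier multiplies the storage and asset-management burden, which is why so much traffic stayed on plain progressive download.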

Find part II of this post here.

Tuesday, November 29, 2011

Need an IT manager for my connected home!

I am not really an early adopter. I tend to integrate new products and technologies when my needs change.
Until recently, my electronic devices were dumb and mute, just performing what I wanted them to, either working or not.

In this new era of hyper-connected homes, though, everything becomes exponentially more complex as you add more connected devices. Since I started my business, I have also had to use cloud-based apps and services to expand my brick-and-mortar tools.
Now, with two desktops, a laptop, a tablet, two smartphones, a connected PVR, a PS3 and countless accounts and services from Dropbox, YouTube, Netflix, Google Apps, Twitter, Blogger... it does not take much to see how these devices, interacting with all these apps and data points, can quickly start conflicting with each other.
Especially when you layer in the fact that these devices communicate over LAN, Wifi, Bluetooth, RF, IR...
Add surveillance cameras and energy management modules in the future, and complex becomes complicated.

UPnP (Universal Plug and Play) and DLNA (Digital Living Network Alliance) usually do a good job of device discovery. Service and content discovery and priority setting are where it starts to get tricky.
Here are a few of the problems I had to face or am still facing in this hyper connected world.

Authentication and handover:
I use Rogers as a service provider for one of my smartphones. I use their self-help app to manage my bill, my subscription and travel packages. One thing that is truly a problem is that it works only on a cellular network. Most of the time, I need to use it when I am travelling, to add or remove a travel pack for voice, data or text. Because of the expensive roaming data rates, it does not make sense to connect while roaming just to enable a feature that saves me the roaming costs. Obviously, Rogers has not enabled Wifi-cellular authentication and credentials handover.

Authorization and software version control:
I am a Bell subscriber for TV and internet at home. I was excited when I received an email showing off Bell's new mobile TV and companion screen apps for my iPhone / iPad. I was less excited when my iPhone, on the Rogers network, could not use Bell's content, even though I am a Bell customer. Too bad, but I thought at least I could use the PVR remote control with my iPad on Bell's network. That does not work either, because I would have to upgrade my PVR. A PVR I am renting from Bell. You would think it would be possible for them to know what PVR I am using and therefore allow me to reflash the software to avail of new capabilities, or try to upsell me to the latest PVR and features...

Credentials management
At some point, security relents before complexity. When you want to run a secure network across several interfaces and devices, managing credentials with associated permissions becomes tricky. You have to find a way to have credentials that can easily be shared and remain secure, while managing which device has access to which dataset under which conditions.

Connectivity, content discovery  and sharing:
Inevitably, users buy new devices and add capabilities along the way. The flip side of that coin, though, is that it makes for a very heterogeneous environment. When you start having several devices with similar or overlapping capabilities, you want them to function with each other seamlessly. For instance, my old desktop running XP cannot easily join the workgroup of my new desktop and laptop running Windows 7.
There are solutions, but none of them straightforward enough for a regular user. A last example is the fact that my laptop, my iPad, my iPhone, my PVR, my 2 desktops and my PS3 all, to some extent, act as media servers. They all have local content and they all access content from the cloud, the internet or local content stored on other devices. Again, I haven't found a solution yet that would allow me to manage and share content across devices with clear permission management. Additionally, there is no search or recommendation engine that would allow me to perform a meta-search across 1) my local content on several devices, 2) the internet and the OTT content providers and apps I am using, and 3) the electronic programming guide of my set-top box, and present me a choice like: do you want to watch Boardwalk Empire Sunday at 9 pm on HBO, now on HBO Go, buy the entire season on Amazon, or play the episodes from my PVR or media servers?

Compatibility:
Too often, I have to transcode videos or change content formats to ensure that I can see them on all my screens. This leads to multiple versions of the same content, with associated discoverability and version control issues. Another example is contact management. It is incredible that Apple still does not get contact management right. If you enable iCloud and have your contacts synchronized with anything that is not Apple (Google Contacts or LinkedIn), you end up with endless duplicate contacts, with no hope of merging and deleting them without adding new, expensive apps.

Control and management:
It strikes me that, with that many connected devices and apps, I have not yet found a single dashboard giving me visibility, control and management of all my devices, allowing me to allocate bandwidth and permissions for sharing data and content across platforms.

I think, at the end of the day, this field is still emerging and, while it is possible to have a good implementation when purchasing a solution from scratch from a single vendor or service provider, assembling a solution organically as you add new devices is likely to have you spend hours deciphering DNS and DHCP configurations. I think what is needed in the short term is a gateway platform, acting as middleware, indexing and aggregating devices and content, and providing a clear dashboard for permissions management and authorization. That gateway could be the set-top box, if it is powerful enough. It would give back to MSOs the control they are losing to OTT, if they are willing to integrate and provide a cohesive environment.

Tuesday, May 31, 2011

Worst of breed, golden silo

After some 13-odd years in new technology and product introduction, you can't help but look at trends that pop up in this industry with a cynical eye.

One of the catch phrases I hear often is best-of-breed approach. For me, every time I hear it, it is a sure sign that a market segment or a technology is not mature.

It is somewhat counter-intuitive. A best-of-breed, pick-and-choose, componentized approach to service delivery hints at ranges of well-defined components, fungible and interchangeable. You would think that, the interfaces being well defined, each vendor competes on unique differentiators without negatively impacting service delivery.

Conversely, silo has become an increasingly bad word in telecoms, evoking a poorly architected, proprietary daisy chain of components that cannot integrate gracefully in a modern organic network.

Then why is it that best-of-breed always ends up taking longer, being more expensive and less reliable than a fully integrated solution from a single vendor?

In my mind, the standards that have been created to describe the ideal networks, from WAP to MMS, from IMS to LTE, have been the product of too much vendor lobbying. The results in many cases are vaguely defined physical and functional components, with the lowest common denominator in terms of interfaces and call flows.
The service definition being somewhat excluded from standards has left little in terms of best practice for integrating functional components efficiently.

There is a reason in my mind why the Chinese vendors ZTE and Huawei are doing so well. It is not only because of their cost structure; it is because their all in-house technology approach for business-critical components makes sense.
It allows fast, replicable deployment and troubleshooting. There is much less complexity in integration and roll-out, which is the most consuming part of CAPEX.

Whenever you see these vendors using third-party technology, it is because it is either so mature and stable that it is not worth developing in-house, or so specialized that it has not been developed yet.
In any case, we are talking about fringe technologies. Anything that is business-critical is identified long in advance and developed in-house.
Their products and services might not be as sophisticated or differentiating as specialized vendors', but they deliver value by providing the minimum services at the lowest cost, with good-enough reliability.

The companies that will win will either be small niche vendors at the periphery of the larger market opportunities, or companies that are good at providing better value, with stronger benefits, at an equivalent price.

Sunday, May 15, 2011

Mobile video 101: protocols, containers, formats & codecs

Mobile video as a technology and market segment can at times be a little complicated.
Here is a simple syllabus, in no particular order, of what you need to know to be conversant in mobile video. It is not intended to be exhaustive or very detailed, but rather to provide a knowledge base for those interested in better understanding the market dynamics I address in other posts.


Protocols:
There are many protocols used in wireless networks to deliver and control video. You have to differentiate between routing protocols (IP), transmission protocols (TCP & UDP), media transport (RTP), session control (RTSP) and content control protocols (RTCP). I will focus here on session and content control.
These protocols are used to set up, transmit and control video over mobile networks.

Here are the main ones:
  • RTSP (Real Time Streaming Protocol) is an industry protocol that has been created specifically for the purposes of media streaming. It is used to establish and control (play, stop, resume) a streaming session. It is used in many unicast on-deck mobile TV and VOD services.
  • RTCP (Real Time transport Control Protocol) is the content control protocol associated with RTP. It provides the statistics (packet loss, bit transmission, jitter...) necessary to allow a server to perform real-time media quality control on an RTSP stream.
  • HTTP download and progressive download (PD). HTTP is a generic protocol used for the transport of many content formats, including video. Download and progressive download differ in that the former needs the whole content to be delivered and saved to the device before being played asynchronously, while the latter provides, at the beginning of the session, a set of metadata associated with the content which allows it to be played before its download completes.
    • Microsoft Silverlight, Adobe RTMP and Apple progressive streaming. These three variants of progressive download are proprietary. They offer additional capabilities beyond vanilla HTTP PD (pre-encoding and multiple stream delivery, client-side stream selection, chunked delivery...) and are the subject of an intense war between the three companies to occupy the mind share of content developers and owners. This is the reason why you cannot browse a Flash site or view a Flash video on your iPhone.
Containers:
A container in video is a file that is composed of the payload (video, audio, subtitles, programming guide...) and the metadata (codecs, encoding rate, key frames, bit rate...). The metadata is a set of descriptive fields that indicate the nature of the media and its duration in the payload. The most popular are:
  • 3GPP (.3GP): the container used in most mobile devices, recommended for video by 3GPP standards.
  • MPEG-4 Part 14 (.MP4): one of the most popular containers for internet video.
  • Flash Video (FLV, F4V): an Adobe-created container, very popular as the preferred format for BBC, Google Video, Hulu, Metacafe, Reuters, Yahoo Video, YouTube... It requires a Flash player.
  • MPEG-2 TS: MPEG Transport Stream is used for broadcast of audio and video. It is used in on-deck broadcast TV services in mobile and in cable / satellite video delivery.
Formats
Formats are a set of standards that describe how a video file should be played.

  • H.263: an old codec used in legacy devices and applications. It is mandated by ETSI and 3GPP for IMS and MMS but is being replaced by H.264.
  • H.264 (MPEG-4 Part 10, AVC): a family of standards composed of several profiles for different uses, device types, screen sizes... It is the most popular format in mobile video.
  • MPEG-2: a standard for lossy audio and video compression used in DVD and broadcast (digital TV over the air, cable, satellite). MPEG-2 describes two container types: MPEG-2 TS for broadcast and MPEG-2 PS for files.
  • MPEG-4: an evolution of MPEG-2, adding new functionality such as DRM, 3D and error resilience for transmission over lossy channels (wireless, for instance). There are many features in MPEG-4 that are left to the developer to decide whether to implement. The features are grouped by profiles and levels. There are 28 parts in MPEG-4. A codec usually describes which MPEG-4 parts are supported. It is the most popular format on the internet.
Codecs
Codec stands for coder/decoder of a media stream. It is a program that has the ability to decode a video stream and re-encode it. Codecs are used for compression (lossless), optimization (lossy) and encryption of videos. A "raw" video file is usually stored in YCbCr (YUV) format, which provides the full description of every pixel in a video. This format is exhaustive, which requires a lot of space for storage and a lot of processing power for decoding / encoding. This is why a video is usually encoded with a codec, to allow for a smaller size or variable transmission quality. It is important to understand that while a container obeys strict rules and semantics, codecs are not regulated and each vendor decides how to decode and encode a media format.
  • DivX: proprietary MPEG-4 implementation by DivX.
  • WMV (Windows Media Video): Microsoft proprietary.
  • x264: a licensable H.264 encoding software library.
  • VP6, VP7, VP8...: proprietary codecs developed by On2 Technologies, acquired by Google and released as open source.
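To see why raw YCbCr is impractical and why codecs matter, here is a quick size calculation. The frame size, frame rate, 4:2:0 subsampling and the H.264 rate used for comparison are illustrative assumptions:

```python
# Illustrative storage cost of raw 4:2:0 YCbCr video vs. an H.264 stream.
width, height, fps = 1920, 1080, 30
bytes_per_pixel = 1.5            # 4:2:0: 1 byte luma + 0.5 byte chroma
raw_mbps = width * height * bytes_per_pixel * fps * 8 / 1e6
print(round(raw_mbps))           # -> 746 Mbit/s uncompressed

h264_mbps = 4                    # a plausible 1080p H.264 rate of the era
print(round(raw_mbps / h264_mbps))  # -> 187 (roughly 187x compression)
```

Two orders of magnitude of compression is what makes video deliverable over mobile networks at all, and it is why every vendor's encoder tuning choices matter so much.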