
Rich media, Threads & The Boss

Nezer J. Zaidenberg

Today's topics... threads, rich media and the boss.

References
For threads - chapters of APUE. For SDL and FFmpeg see their respective help and documentation. There is a pretty good tutorial called "How to write a video player in less than 1000 lines of code" that can help (but we don't use everything he shows there), and he has a few bugs in his manual (covered here).

Goals
We will discuss the decoding and encoding of rich media (video and audio). This will introduce us to the problem of synching, which we will use to discuss threads and synchronisation. This will all be covered in HW 2. As with networking, we don't care about video/audio compression itself; we just learn how to use this environment.

IMPORTANT

Manual address - http://www.dranger.com/ffmpeg/ YOU HAVE TO USE IT FOR THE HOMEWORK.

Before we begin
Whether rich media handling is part of the OS is open to debate. (Linux says it isn't. Windows says it is. The EU says it isn't. The USA says it is...) The rich media libraries we learn today are common in any Linux distribution but are not part of the Linux kernel. The libraries are very portable, also work on Windows, and are used by players such as VLC. (But Windows has other methods you COULD use.)

Rich media concept Codec


Rich media in its raw form is HUGE. (Just think about it: SACD, 5.1 audio channels at 96kHz sampling rate and 32 bits per sample, for 1 hour: > 8GB.) Even CD-Audio (16 bit, 44kHz, 2 channels) is huge. ... And if you think that is huge, picture 704*576 at 24 bits per pixel per frame, 25 frames per second (PAL), for the same hour: > 100GB. ... Now think about full HD.

The problem
Data quantities are huge, even unworkable. Solution: encode the media (compress it) so that we lose some data but the quality remains good, and provide a decoder (decompressor) to reproduce the media in good quality.

Introducing codecs
The compressor/decompressor pair is called a codec (short for coder/decoder). Normally we will have separate codecs for video and audio. Most codecs are lossy, meaning we lose a little quality in the encoding process; some codecs are lossless.

Trivial codec : VHS video


Drop resolution to 320*240. Drop frame rate to 20 fps. 1 audio channel. Almost 10:1 compression. Further improvement: LP mode got down to 10 fps. Quality was good enough. Modern codecs compress far more than 10:1 (1000:1 and even 10,000:1 is achievable).

Modern codecs
Video: MPEG2, MPEG4 (divx, microsoft, xvid), H.264 - can be compressed by 1:50-1:5000 depending on codec and quality. Audio: MP3, Vorbis, AAC, WMA (good quality audio can be compressed by 1:5-1:50).

Decoding streams.

FFMPEG
We will be using the ffmpeg library to encode and decode. ffmpeg is a very object-oriented C library. We will be using the factory and facade/interface design patterns.

These programs are based on tutorial1.c


decode1.c decode2.c decode3.c

Obtaining video

I used a YouTube downloader. You may install it using sudo apt-get install youtube-dl

Decoding
Our first task will be decoding. We will open an AVI file of Bruce Springsteen's Outlaw Pete performance and save the first frame. By the end of the class we will play the complete video.

Installing ffmpeg (LATER)


Run sudo synaptic to get to the XUbuntu package manager. Choose ffmpeg for install and approve installation of dependencies. Click Apply. Do similar steps for libavcodec-dev, libswscale-dev and libavformat-dev. libavutil-dev should already be installed.

using ffmpeg

ffmpeg is usually used as a library by media players (such as VLC), but we can also use ffplay(1) and ffmpeg(1); these are ffmpeg test utilities.

compiling

gcc decode.c -o decode -lavcodec -lavformat -lavutil (feel free to create a makefile)

An empty program decode1.c

#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>
#include <stdio.h>

int main()
{
    av_register_all();
}

Explaining
The three include files are needed for decoding. They include the ffmpeg file format library (.AVI, .MPG etc.) and the ffmpeg codec library. av_register_all() is the factory initialization method.

ERRATA IN THE MANUAL

The include paths in XUbuntu are as shown in my slides and not as in the manual!!!

Factory : reminder(I hope)


Situation: you want to create multiple objects with an identical interface (say... codecs). You don't want to hardcode anything and you want codecs to be added on the fly. You want old programs to be able to request new codecs without recompiling. Solution: Factory. A DESIGN PATTERN.

DESIGN PATTERN (REMINDER - I HOPE)


Sometimes when we program we encounter a problem of how to construct something (such as a controlled global variable (singleton), an algorithm/implementation decoupling (visitor), a functor (function object) etc.). The solution (the idea) to these problems is generic; countless developers have implemented it. These kinds of solutions are called design patterns.

So what is Factory
A factory object contains two methods:
register_all - registers the constructors of all objects and associates each object with a key.
build(key) - builds the object identified by the given key.
Once register_all has been called, old programs can ask the factory for new objects by key, and new codecs can be added without recompiling.

Warning Lots of API follows


Don't try to understand every parameter.

And don't keep the BOSS waiting.

// decode1.c (based on tutorial1.c in the manual!)
AVFormatContext *pFormatCtx;
int             i, videoStream;
AVCodecContext  *pCodecCtx;
AVCodec         *pCodec;
AVFrame         *pFrame;
AVFrame         *pFrameRGB;
AVPacket        packet;
int             frameFinished;
int             numBytes;
uint8_t         *buffer;

if(argc < 2) {
  printf("Please provide a movie file\n");
  return -1;
}

Don't try to understand it all yet. We will use the variables later. All that matters is that we get the movie filename as the first argument.

// decode1.c based on tutorial1.c
// Open the video file
if(av_open_input_file(&pFormatCtx, argv[1], NULL, 0, NULL)!=0)
  return -1; // Couldn't open file
...
for(i=0; i<pFormatCtx->nb_streams; i++)
  if(pFormatCtx->streams[i]->codec->codec_type==CODEC_TYPE_VIDEO) {
    videoStream=i;
    break;
  }

This is actually important decode1.c


We scanned the file until we found a video (i.e. not audio or text) stream. Now we take the video stream and find a codec. For that we use the CODEC FACTORY.

// decode2.c (based on tutorial1 still)
// Get a pointer to the codec context for the video stream
pCodecCtx=pFormatCtx->streams[videoStream]->codec;

// Find the decoder for the video stream
pCodec=avcodec_find_decoder(pCodecCtx->codec_id);
if(pCodec==NULL) {
  fprintf(stderr, "Unsupported codec!\n");
  return -1; // Codec not found
}
// Open codec
if(avcodec_open(pCodecCtx, pCodec)<0)
  return -1; // Could not open codec

// Allocate video frame
pFrame=avcodec_alloc_frame();
if(pFrame==NULL)
  return -1;

// Allocate an AVFrame structure
pFrameRGB=avcodec_alloc_frame();
if(pFrameRGB==NULL)
  return -1;

Explaining decode2.c

We requested a codec from the factory based on the context (the key). Then we called the codec constructor. Last we allocated two frames.

// Determine required buffer size and allocate buffer
numBytes=avpicture_get_size(PIX_FMT_RGB24, pCodecCtx->width,
                            pCodecCtx->height);
buffer=(uint8_t *)av_malloc(numBytes*sizeof(uint8_t));

// Assign appropriate parts of buffer to image planes in pFrameRGB
// Note that pFrameRGB is an AVFrame, but AVFrame is a superset
// of AVPicture
avpicture_fill((AVPicture *)pFrameRGB, buffer, PIX_FMT_RGB24,
               pCodecCtx->width, pCodecCtx->height);

Decode3.c - decoding video

while(av_read_frame(pFormatCtx, &packet)>=0) {
  // Is this a packet from the video stream?
  if(packet.stream_index==videoStream) {
    // Decode video frame
    avcodec_decode_video(pCodecCtx, pFrame, &frameFinished,
                         packet.data, packet.size);
    // Did we get a video frame?
    if(frameFinished) {
      // Convert the image from its native format to RGB
      img_convert((AVPicture *)pFrameRGB, PIX_FMT_RGB24,
                  (AVPicture*)pFrame, pCodecCtx->pix_fmt,
                  pCodecCtx->width, pCodecCtx->height);
      SaveFrame(pFrameRGB, pCodecCtx->width, pCodecCtx->height);
      goto exit;
    }
  }
  // Free the packet that was allocated by av_read_frame
  av_free_packet(&packet);
}
exit:
av_free(buffer);
av_free(pFrameRGB);
// Free the YUV frame
av_free(pFrame);
// Close the codec
avcodec_close(pCodecCtx);
// Close the video file
av_close_input_file(pFormatCtx);
return 0;
}

Explaining...
We can read audio or video packets from the stream, but with this codec we can only decode video packets. We then check that we got a complete frame and not a partial one. If we did, we convert the frame to raw RGB so that we can save it. Last we call some destructors.

MISTAKE IN THE MANUAL

The manual uses the img_convert function, which doesn't exist today (we use swscale). We will demonstrate how to implement img_convert or use swscale instead.

void SaveFrame(AVFrame *pFrame, int width, int height) {
  FILE *pFile;
  char Filename[32];
  int  y;
  // Open file
  sprintf(Filename, "frame.ppm");
  pFile=fopen(Filename, "wb");
  if(pFile==NULL)
    return;
  // Write header
  fprintf(pFile, "P6\n%d %d\n255\n", width, height);
  // Write pixel data
  for(y=0; y<height; y++)
    fwrite(pFrame->data[0]+y*pFrame->linesize[0], 1, width*3, pFile);
  // Close file
  fclose(pFile);
}

Saving the image

We save the file in PPM format. That's a silly format that contains a simple header followed by the raw pixel data.

Implementing img_convert
This function used to be part of FFMPEG but it was removed due to licensing issues. It was replaced by swscale - a more powerful interface. We will implement img_convert using swscale.

struct SwsContext *img_convert_ctx;
...
img_convert_ctx = sws_getContext(pCodecCtx->width,
    pCodecCtx->height, pCodecCtx->pix_fmt,
    pCodecCtx->width, pCodecCtx->height,
    PIX_FMT_RGB24, SWS_BILINEAR, NULL, NULL, NULL);
if(img_convert_ctx == NULL) {
  fprintf(stderr, "Cannot initialize the conversion context!\n");
  exit(1);
}
...
if(frameFinished) {
  sws_scale(img_convert_ctx, pFrame->data,
            pFrame->linesize, 0, pCodecCtx->height,
            pFrameRGB->data, pFrameRGB->linesize);
  SaveFrame(pFrameRGB, pCodecCtx->width, pCodecCtx->height);

Another solution

void img_convert(AVPicture * target, int targetFmt,
                 AVPicture * source, int sourceFmt, int w, int h)
{
  static struct SwsContext *img_convert_ctx=NULL;
  if(img_convert_ctx == NULL) {
    img_convert_ctx = sws_getContext(w, h, sourceFmt, w, h,
        targetFmt, SWS_BICUBIC, NULL, NULL, NULL);
  }
  sws_scale(img_convert_ctx, source->data, source->linesize,
            0, h, target->data, target->linesize);
}

Display - decode4.c based on tutorial.c

In the beginning there was X


UNIX graphical user interfaces rely on X. X is a reverse client/server environment. The client (or terminal) runs an X server (on the client workstation). The server runs the application (for example Firefox), which includes an X client. The X client requests to open a window on the X server.

Introducing X
X relies on the network to deliver changes, keystrokes and mouse movements. It exists in some form on every UNIX host, and all implementations speak the same X protocol. Some platforms also have their own improved environments; these usually run over UDS (Unix domain sockets). Microsoft has a similar concept with RDP (but it is not their standard interface).

Introducing SDL

X functionality is limited to 2D images. It includes functions to create windows, panels, labels, buttons and all that jazz. It does not deal with video very well. (You could create a bitmap of every image and refresh, but you would have problems with audio sync and refresh times.) SDL is a multi-platform library that is used by most open source video and audio products. It also works on Windows and is used, for example, by VLC.

Installing SDL

Same steps as with ffmpeg. Start synaptic and download all libsdl and libsdl-devel libs. (There will be lots of libs and dependencies.)

Decode4.c (similar to tutorial2.c)

We will now decode and display the stream as fast as we can.

Adding SDL and init

#include <SDL.h>
#include <SDL_thread.h>

if(SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO | SDL_INIT_TIMER)) {
  fprintf(stderr, "Could not initialize SDL - %s\n", SDL_GetError());
  exit(1);
}

SDL_Overlay *bmp;
SDL_Surface *screen;
SDL_Rect    rect;
SDL_Event   event;

Initialising SDL objects

screen = SDL_SetVideoMode(pCodecCtx->width, pCodecCtx->height, 0, 0);
if(!screen) {
  fprintf(stderr, "SDL: could not set video mode - exiting\n");
  exit(1);
}

// Allocate a place to put our YUV image on that screen
bmp = SDL_CreateYUVOverlay(pCodecCtx->width,
                           pCodecCtx->height,
                           SDL_YV12_OVERLAY, screen);

if(frameFinished) {
  SDL_LockYUVOverlay(bmp);

  AVPicture pict;
  pict.data[0] = bmp->pixels[0];
  pict.data[1] = bmp->pixels[2];
  pict.data[2] = bmp->pixels[1];

  pict.linesize[0] = bmp->pitches[0];
  pict.linesize[1] = bmp->pitches[2];
  pict.linesize[2] = bmp->pitches[1];

  // Convert the image into YUV format that SDL uses
  img_convert(&pict, PIX_FMT_YUV420P,
              (AVPicture *)pFrame, pCodecCtx->pix_fmt,
              pCodecCtx->width, pCodecCtx->height);

  SDL_UnlockYUVOverlay(bmp);

  rect.x = 0;
  rect.y = 0;
  rect.w = pCodecCtx->width;
  rect.h = pCodecCtx->height;
  SDL_DisplayYUVOverlay(bmp, &rect);
}

RGB and YUV420P


RGB is the standard colour base. (Each colour is expressed by 3 basis vectors (1,0,0); (0,1,0); (0,0,1) representing RED, GREEN, BLUE.) The colour space can also be expressed in other bases; the most common is the YUV family. The YUV420P and YV12 formats are identical except for the order of the components (hence the swap above); YUV420P is more common in decoders.

Reminder - bugs in tutorials

The include directories differ, and there is no img_convert. Use the following function...

Img_convert (may be easier)

void img_convert(AVPicture * target, int targetFmt,
                 AVPicture * source, int sourceFmt, int w, int h)
{
  static struct SwsContext *img_convert_ctx=NULL;
  if(img_convert_ctx == NULL) {
    img_convert_ctx = sws_getContext(w, h, sourceFmt, w, h,
        targetFmt, SWS_BICUBIC, NULL, NULL, NULL);
  }
  sws_scale(img_convert_ctx, source->data, source->linesize,
            0, h, target->data, target->linesize);
}

Compiling SDL

`sdl-config --cflags --libs` will output the C headers and libs needed to use SDL. The backticks make the output part of the command line. Total compile line: gcc decode4.c -o decode4 -lavcodec -lavformat -lavutil -lswscale `sdl-config --cflags --libs`

Audio

Decoder5.c - corresponding to tutorial3.c


Here our goal is to start decoding audio packets as they come. We will decode some video, some audio. We will not sync the streams, and we will not yet discuss thread creation either...

Decoding stuff... in main()

AVCodecContext *aCodecCtx;
AVCodec        *aCodec;
...
SDL_AudioSpec  wanted_spec, spec;

Change the stream scanner


// Find the first video stream
videoStream=-1;
audioStream=-1;
for(i=0; i<pFormatCtx->nb_streams; i++) {
  if(pFormatCtx->streams[i]->codec->codec_type==CODEC_TYPE_VIDEO
     && videoStream < 0) {
    videoStream=i;
  }
  if(pFormatCtx->streams[i]->codec->codec_type==CODEC_TYPE_AUDIO
     && audioStream < 0) {
    audioStream=i;
  }
}
if(videoStream==-1)
  return -1; // Didn't find a video stream
if(audioStream==-1)
  return -1;

aCodecCtx=pFormatCtx->streams[audioStream]->codec;

// Set audio settings from codec info
wanted_spec.freq = aCodecCtx->sample_rate;
wanted_spec.format = AUDIO_S16SYS;
wanted_spec.channels = aCodecCtx->channels;
wanted_spec.silence = 0;
wanted_spec.samples = SDL_AUDIO_BUFFER_SIZE;
wanted_spec.callback = audio_callback;
wanted_spec.userdata = aCodecCtx;

if(SDL_OpenAudio(&wanted_spec, &spec) < 0) {
  fprintf(stderr, "SDL_OpenAudio: %s\n", SDL_GetError());
  return -1;
}
aCodec = avcodec_find_decoder(aCodecCtx->codec_id);
if(!aCodec) {
  fprintf(stderr, "Unsupported codec!\n");
  return -1;
}
avcodec_open(aCodecCtx, aCodec);

Important

packet_queue_init(&audioq);
SDL_PauseAudio(0);

We are giving a callback function to the audio codec


This function will be called by SDL whenever SDL needs audio. SDL will start a thread for this purpose. (We don't see the thread creation; it is done in the background.) The thread starts with audio paused; when we call SDL_PauseAudio(0) we start playing.

So what is this thread thing


A thread is a mini process (a new program counter) that we share memory with (all the global variables, heap, etc.). There is no OS memory protection between threads. When we have a decoding thread + a playing thread, as we have now, we need to share lots of memory; separate processes become illogical!!! (Think about how much info we would have to send!)

What is all this call back stuff


A callback is a frequently used method in real-life programming and when dealing with threads. One thread is constantly running and calls a callback when it's READY. Example - we have a frame grabber (a TV card) that calls a callback whenever it grabs a new frame, or a tape driver that calls a callback whenever it's ready to receive data. In our case the audio callback is called whenever we are ready to play. (Note that we play at a certain rate, such as 44.1kHz.)

changes to packet read loop


while(av_read_frame(pFormatCtx, &packet)>=0) {
  // Is this a packet from the video stream?
  if(packet.stream_index==videoStream) {
    ...
  } else if(packet.stream_index==audioStream) {
    packet_queue_put(&audioq, &packet);
  } else {
    av_free_packet(&packet);
  }
}

SO ...

Our main thread reads audio packets and puts them in a queue... And SDL starts an audio thread - a separate thread of control that reads from the queue.

The audio callback


Just follow the principles - you don't need all the APIs.

void audio_callback(void *userdata, Uint8 *stream, int len) {
  AVCodecContext *aCodecCtx = (AVCodecContext *)userdata;
  int len1, audio_size;

  static uint8_t audio_buf[(AVCODEC_MAX_AUDIO_FRAME_SIZE * 3) / 2];
  static unsigned int audio_buf_size = 0;
  static unsigned int audio_buf_index = 0;

  while(len > 0) {
    if(audio_buf_index >= audio_buf_size) {
      /* We have already sent all our data; get more */
      audio_size = audio_decode_frame(aCodecCtx, audio_buf,
                                      sizeof(audio_buf));
      if(audio_size < 0) {
        /* If error, output silence */
        audio_buf_size = 1024;
        memset(audio_buf, 0, audio_buf_size);
      } else {
        audio_buf_size = audio_size;
      }
      audio_buf_index = 0;
    }
    len1 = audio_buf_size - audio_buf_index;
    if(len1 > len)
      len1 = len;
    memcpy(stream, (uint8_t *)audio_buf + audio_buf_index, len1);
    len -= len1;
    stream += len1;
    audio_buf_index += len1;
  }
}

int audio_decode_frame(AVCodecContext *aCodecCtx, uint8_t *audio_buf,
                       int buf_size) {
  static AVPacket pkt;
  static uint8_t *audio_pkt_data = NULL;
  static int audio_pkt_size = 0;
  int len1, data_size;

  for(;;) {
    while(audio_pkt_size > 0) {
      data_size = buf_size;
      len1 = avcodec_decode_audio2(aCodecCtx, (int16_t *)audio_buf,
                                   &data_size,
                                   audio_pkt_data, audio_pkt_size);
      if(len1 < 0) {
        /* if error, skip frame */
        audio_pkt_size = 0;
        break;
      }
      audio_pkt_data += len1;
      audio_pkt_size -= len1;
      if(data_size <= 0) {
        /* no data yet, get more frames */
        continue;
      }
      /* We have data, return it and come back for more later */
      return data_size;
    }
    if(pkt.data)
      av_free_packet(&pkt);

    if(quit) {
      return -1;
    }

    if(packet_queue_get(&audioq, &pkt, 1) < 0) {
      return -1;
    }
    audio_pkt_data = pkt.data;
    audio_pkt_size = pkt.size;
  }
}

So what did we see?

In the main thread we put audio packets in a queue. In the audio thread (SDL opened that one for us, but trust me, it's there...) we get audio packets from the queue...

Transfer information - the queue


typedef struct PacketQueue {
  AVPacketList *first_pkt, *last_pkt;
  int nb_packets;
  int size;
  SDL_mutex *mutex;
  SDL_cond *cond;
} PacketQueue;

PacketQueue audioq;

int quit = 0;

void packet_queue_init(PacketQueue *q) {
  memset(q, 0, sizeof(PacketQueue));
  q->mutex = SDL_CreateMutex();
  q->cond = SDL_CreateCond();
}

SDL MUTEX and COND


These are just platform-independent wrappers around the POSIX functions. By calling the SDL functions instead of the POSIX ones you sometimes add minor overhead but gain portability! MUTEX and COND are the basic synchronization primitives!

MUTEX & COND


MUTEX (mutual exclusion) - protects a critical section: a thread that enters the critical section locks the mutex, and any other thread that tries to enter waits until the mutex is released.
COND - a condition variable: a thread can wait on a cond until another thread signals it, which releases the waiting thread.

putting packet in queue...

int packet_queue_put(PacketQueue *q, AVPacket *pkt) {
  AVPacketList *pkt1;
  if(av_dup_packet(pkt) < 0) {
    return -1;
  }
  pkt1 = av_malloc(sizeof(AVPacketList));
  if (!pkt1)
    return -1;
  pkt1->pkt = *pkt;
  pkt1->next = NULL;

SDL - lock mutex and signal

  SDL_LockMutex(q->mutex);

  if (!q->last_pkt)
    q->first_pkt = pkt1;
  else
    q->last_pkt->next = pkt1;
  q->last_pkt = pkt1;
  q->nb_packets++;
  q->size += pkt1->pkt.size;
  SDL_CondSignal(q->cond);

  SDL_UnlockMutex(q->mutex);
  return 0;
}

GET FROM SIGNAL


static int packet_queue_get(PacketQueue *q, AVPacket *pkt, int block)
{
  AVPacketList *pkt1;
  int ret;

  SDL_LockMutex(q->mutex);

  for(;;) {
    if(quit) {
      ret = -1;
      break;
    }

    pkt1 = q->first_pkt;
    if (pkt1) {
      q->first_pkt = pkt1->next;
      if (!q->first_pkt)
        q->last_pkt = NULL;
      q->nb_packets--;
      q->size -= pkt1->pkt.size;
      *pkt = pkt1->pkt;
      av_free(pkt1);
      ret = 1;
      break;
    } else if (!block) {
      ret = 0;
      break;
    } else {
      SDL_CondWait(q->cond, q->mutex);
    }
  }
  SDL_UnlockMutex(q->mutex);
  return ret;
}

The situation we have is the classic producer/consumer.


We have a consumer - that takes packets out of the queue and decodes them - and a producer - that reads the packets and puts them in shared memory. Situation - we want to limit access to the shared memory while another thread may want to access it. The consumer may not read a packet until the producer informs it that the packet is finished and is there (by signalling the cond).

Let's go over this one more time.


In some cases I want to make sure that only I (I am a thread) access a memory section. In this case I need to use a mutex. Mutex - mutual exclusion. In other cases I want to make sure that I access a memory section only after another thread prepared it for me. In that case I use a cond.

Cond - why do I need mutex.


The cond is a facility allowing me to block myself until another thread releases the lock (signals me), but my waiting and the other thread signalling the cond are not atomic operations. So to avoid deadlock, even when we need a cond we protect it with a mutex!

Threads protection and sync


We call mutex_lock() - that guarantees that only one thread will put packets in the queue or check the queue at any one time. We call cond_wait() - that will block us until we get a packet; it also unlocks the relevant mutex while we wait. Please note that mutex_lock and cond_wait do not physically lock anything. The lock is completely advisory, for threads that play the game.

Starting threads decode6.c (corresponding to tutorial4.c)


Now we start opening threads to make sure video quality improves. We will also get limited syncing.

Threads - decoder6.c (tutorial4.c)


Creating threads: one is reading the disk and putting samples in a queue; one is playing audio (internal); <we still have one more for video>; and the main thread waits for events.

ONE MORE PROBLEM IN GUIDE


The guide is using pstrcpy, which does not come with this version of Ubuntu and ffmpeg. Use strncat instead, and swap the order of the 2nd and 3rd parameters.

First thing we do
is to create global variables (shared memory for the video thread). We do it in struct VideoState.

int main(int argc, char *argv[]) {
  SDL_Event   event;

  VideoState  *is;

  is = av_mallocz(sizeof(VideoState));

Creating the thread.


is->parse_tid = SDL_CreateThread(decode_thread, is);
if(!is->parse_tid) {
  av_free(is);
  return -1;
}

Explaining SDL_createThread()
Same as before with mutexes - we create the thread and use it via this function; it's a wrapper around the POSIX function. Note that here we have created the thread explicitly (for audio we relied on SDL's internal audio thread). The SDL function gets a pointer to a function - this is the entry point (the "main") of the new thread.

Initial thread structure...


Our main thread is now only used to wait for events (such as closing the stream etc.). The first thing we start is our decode thread, which does most of our old main loop (puts things in queues for the other threads). Now we have a queue for video and a queue for audio.

decode thread.
Examine lines 500-580 in decode6. We are now reading packets using only this thread and putting what we read in two separate queues (one for video and one for audio, with a mutex and cond for each).

Video thread.
Examine the function stream_component_open. This function replaces the messy stream inspection we used in main. In this function we also explicitly open the video playback thread. (With audio, thread creation was implicit.)

Video thread creation


case CODEC_TYPE_VIDEO:
  is->videoStream = stream_index;
  is->video_st = pFormatCtx->streams[stream_index];

  packet_queue_init(&is->videoq);
  is->video_tid = SDL_CreateThread(video_thread, is);

What's left...


Check decode7 (tutorial5) and decode8 (tutorial6) - they work nicer! Check with a stream with lip sync. READ THE CODE (THAT MEANS WRITE IT). DON'T WAIT FOR THE LAST MINUTE.

Bonus and multi-platform programming


In modern environments we use libraries (such as SDL, ACE, NSPR etc.) to ensure our program runs on multiple platforms, instead of calling OS APIs directly. In this case you can get SDL and ffmpeg to run on Windows or Mac. Try getting your client to run on one more platform (bonus). If you can, send a makefile (or sln etc.) for both platforms.

Further reading
Read the tutorial at least up to chapter 6 (needed for the homework). I have created decode7.c and decode8.c for chapters 5 and 6. Check the quality difference, especially in songs performed live.
