
Creating Threads with FFmpeg and SDL


Spawning Threads

Overview

Last time we added audio support by taking advantage of SDL's audio functions. SDL started a thread that made callbacks to a function we defined every time it needed audio. Now we're going to do the same sort of thing with the video display. This makes the code more modular and easier to work with - especially when we want to add syncing. So where do we start?

First we notice that our main function is handling an awful lot: it's running the event loop, reading in packets, and decoding the video. So we're going to split all that apart: we'll have a thread responsible for decoding the packets; those packets will then be added to the queues and read by the corresponding audio and video threads. We've already set up the audio thread the way we want it; the video thread will be a little more complicated, since we have to display the video ourselves. We will add the actual display code to the main loop, but instead of displaying video every time we loop, we will integrate the video display into the event loop. The idea is to decode the video, save the resulting frame in another queue, then create a custom event (FF_REFRESH_EVENT) that we add to the event system; when our event loop sees this event, it displays the next frame in the queue. Here's a handy ASCII-art illustration of what is going on:

      
 ________ audio  _______      _____
|        | pkts |       |    |     | to spkr
| DECODE |----->| AUDIO |--->| SDL |-->
|________|      |_______|    |_____|
    |  video     _______
    |   pkts    |       |
    +---------->| VIDEO |
 ________       |_______|   _______
|       |          |       |       |
| EVENT |          +------>| VIDEO | to mon.
| LOOP  |----------------->| DISP. |-->
|_______|<---FF_REFRESH----|_______|

The main benefit of controlling the video display through the event loop is that, with a timer thread (built on SDL's delay facilities), we can control exactly when the next video frame shows up on the screen. When we finally sync the video in the next tutorial, it will be a simple matter to add the code that schedules the next video refresh so that the right picture is shown on the screen at the right time.

Simplifying Code

We're also going to clean up the code a bit. We have all this audio and video codec information, and we're going to be adding queues and buffers and who knows what else. All this stuff belongs to one logical unit, viz. the movie. So we're going to make a large struct, called VideoState, that holds all of that information.

typedef struct VideoState {

  AVFormatContext *pFormatCtx;
  int             videoStream, audioStream;
  AVStream        *audio_st;
  PacketQueue     audioq;
  uint8_t         audio_buf[(AVCODEC_MAX_AUDIO_FRAME_SIZE * 3) / 2];
  unsigned int    audio_buf_size;
  unsigned int    audio_buf_index;
  AVPacket        audio_pkt;
  uint8_t         *audio_pkt_data;
  int             audio_pkt_size;
  AVStream        *video_st;
  PacketQueue     videoq;

  VideoPicture    pictq[VIDEO_PICTURE_QUEUE_SIZE];
  int             pictq_size, pictq_rindex, pictq_windex;
  SDL_mutex       *pictq_mutex;
  SDL_cond        *pictq_cond;

  SDL_Thread      *parse_tid;
  SDL_Thread      *video_tid;

  char            filename[1024];
  int             quit;
} VideoState;

Here we see a glimpse of what we're going to get to. First comes the basic information: the format context, the indices of the audio and video streams, and the corresponding AVStream objects. Then we can see that we've moved some of the audio buffers into this structure; these (audio_buf, audio_buf_size, etc.) hold information about audio that is still lying around (or the lack thereof). We've added another queue for the video, and a buffer (which will be used as a queue; we don't need any fancy queueing for this) for the decoded frames, saved as overlays. The VideoPicture struct is of our own creation (we'll see what's in it when we get to it). We've also allocated pointers for the two extra threads we will create, along with the quit flag and the filename of the movie.

So now we take it all the way back to the main function to see how this changes our program. Let's set up our VideoState struct:

int main(int argc, char *argv[]) {

  SDL_Event event;
  VideoState *is;

  is = av_mallocz(sizeof(VideoState));

av_mallocz() is a nice function that will allocate memory for us and zero it out.
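Roughly speaking, it is equivalent to the following sketch (illustrative only, not ffmpeg's actual source):

/* Illustrative equivalent of av_mallocz() */
void *my_mallocz(size_t size) {
  void *ptr = av_malloc(size);  /* ffmpeg's allocator */
  if (ptr)
    memset(ptr, 0, size);       /* zero the block */
  return ptr;
}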

Then we'll initialize our locks for the display buffer (pictq). The event loop calls our display function, which will be pulling pre-decoded frames off pictq; at the same time, our video decoder will be putting information into it, and we don't know who will get there first. Hopefully you recognize this as a classic race condition, so we allocate the locks now, before we start any threads. Let's also copy the filename of our movie into our VideoState.

pstrcpy(is->filename, sizeof(is->filename), argv[1]);

is->pictq_mutex = SDL_CreateMutex();
is->pictq_cond = SDL_CreateCond();

pstrcpy is a function from ffmpeg that does some extra bounds checking beyond strncpy.
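As a rough sketch of that contract (not ffmpeg's actual source): copy at most size-1 characters and always NUL-terminate, unlike strncpy, which can leave the destination unterminated.

/* Hypothetical equivalent of pstrcpy's guarantee (illustrative only) */
void my_pstrcpy(char *dst, int size, const char *src) {
  if (size <= 0)
    return;
  while (--size > 0 && *src)
    *dst++ = *src++;
  *dst = '\0';   /* always terminated, unlike strncpy */
}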

Our First Thread

Now let's finally launch our threads and get the real work done:

schedule_refresh(is, 40);

is->parse_tid = SDL_CreateThread(decode_thread, is);
if(!is->parse_tid) {
  av_free(is);
  return -1;
}

schedule_refresh is a function we will define later. What it basically does is tell the system to push an FF_REFRESH_EVENT after the specified number of milliseconds, which will in turn call the video refresh function when we see it in the event queue. But for now, let's look at SDL_CreateThread().

SDL_CreateThread() does just that: it spawns a new thread that has complete access to all the memory of the original process, and starts that thread running on the function we give it, passing along user-defined data. In this case, we're calling decode_thread() with our VideoState struct attached. The first half of the function has nothing new; it simply does the work of opening the file and finding the indices of the audio and video streams. The only thing we do differently is save the format context in our big struct. After we've found our stream indices, we call another function that we will define, stream_component_open(). This is a pretty natural way to split things up, and since we do a lot of similar things to set up the video and audio codecs, we reuse some code by making this a function.
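The tutorial only describes that first half in prose, so here is a minimal sketch of what it would look like, assuming the same pre-0.8 ffmpeg API used throughout this tutorial (av_open_input_file, av_find_stream_info, CODEC_TYPE_*); treat the details as illustrative rather than the author's exact code:

int decode_thread(void *arg) {

  VideoState *is = (VideoState *)arg;
  AVFormatContext *pFormatCtx;
  int i, video_index = -1, audio_index = -1;

  // Open the movie file and read its stream information
  if(av_open_input_file(&pFormatCtx, is->filename, NULL, 0, NULL) != 0)
    return -1;
  is->pFormatCtx = pFormatCtx;  // save the format context in our big struct
  if(av_find_stream_info(pFormatCtx) < 0)
    return -1;

  // Find the first video and audio streams
  for(i = 0; i < pFormatCtx->nb_streams; i++) {
    if(pFormatCtx->streams[i]->codec->codec_type == CODEC_TYPE_VIDEO &&
       video_index < 0)
      video_index = i;
    if(pFormatCtx->streams[i]->codec->codec_type == CODEC_TYPE_AUDIO &&
       audio_index < 0)
      audio_index = i;
  }
  if(audio_index >= 0)
    stream_component_open(is, audio_index);
  if(video_index >= 0)
    stream_component_open(is, video_index);

  /* ... the packet-reading loop shown below goes here ... */
  return 0;
}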

The stream_component_open() function is where we will find our codec decoder, set up our audio options, save important information to our big struct, and launch our audio and video threads. This is where we would also insert other options, such as forcing the codec instead of autodetecting it and so forth. Here it is:

int stream_component_open(VideoState *is, int stream_index) {

  AVFormatContext *pFormatCtx = is->pFormatCtx;
  AVCodecContext *codecCtx;
  AVCodec *codec;
  SDL_AudioSpec wanted_spec, spec;

  if(stream_index < 0 || stream_index >= pFormatCtx->nb_streams) {
    return -1;
  }

  // Get a pointer to the codec context for the stream
  codecCtx = pFormatCtx->streams[stream_index]->codec;

  if(codecCtx->codec_type == CODEC_TYPE_AUDIO) {
    // Set audio settings from codec info
    wanted_spec.freq = codecCtx->sample_rate;
    /* .... */
    wanted_spec.callback = audio_callback;
    wanted_spec.userdata = is;

    if(SDL_OpenAudio(&wanted_spec, &spec) < 0) {
      fprintf(stderr, "SDL_OpenAudio: %s\n", SDL_GetError());
      return -1;
    }
  }
  codec = avcodec_find_decoder(codecCtx->codec_id);
  if(!codec || (avcodec_open(codecCtx, codec) < 0)) {
    fprintf(stderr, "Unsupported codec!\n");
    return -1;
  }

  switch(codecCtx->codec_type) {
  case CODEC_TYPE_AUDIO:
    is->audioStream = stream_index;
    is->audio_st = pFormatCtx->streams[stream_index];
    is->audio_buf_size = 0;
    is->audio_buf_index = 0;
    memset(&is->audio_pkt, 0, sizeof(is->audio_pkt));
    packet_queue_init(&is->audioq);
    SDL_PauseAudio(0);
    break;
  case CODEC_TYPE_VIDEO:
    is->videoStream = stream_index;
    is->video_st = pFormatCtx->streams[stream_index];
    packet_queue_init(&is->videoq);
    is->video_tid = SDL_CreateThread(video_thread, is);
    break;
  default:
    break;
  }
  return 0;
}

This is pretty much the same as the code we had before, except now it's generalized for audio and video. Notice that instead of aCodecCtx, we've set up our big struct as the userdata for our audio callback. We've also saved the streams themselves as audio_st and video_st, and added a video queue, set up the same way we set up our audio queue. Most of the point, though, is to launch the video and audio threads. These bits do it:

    SDL_PauseAudio(0);
    break;

/* ...... */

    is->video_tid = SDL_CreateThread(video_thread, is);

We remember SDL_PauseAudio() from last time, and SDL_CreateThread() is used in exactly the same way as before. We'll get back to our video_thread() function shortly.

Before that, let's go back to the second half of our decode_thread() function. It's basically just a for loop that will read in a packet and put it on the right queue:

  for(;;) {
    if(is->quit) {
      break;
    }
    // seek stuff goes here
    if(is->audioq.size > MAX_AUDIOQ_SIZE ||
       is->videoq.size > MAX_VIDEOQ_SIZE) {
      SDL_Delay(10);
      continue;
    }
    if(av_read_frame(is->pFormatCtx, packet) < 0) {
      if(url_ferror(&pFormatCtx->pb) == 0) {
        SDL_Delay(100); /* no error; wait for user input */
        continue;
      } else {
        break;
      }
    }
    // Is this a packet from the video stream?
    if(packet->stream_index == is->videoStream) {
      packet_queue_put(&is->videoq, packet);
    } else if(packet->stream_index == is->audioStream) {
      packet_queue_put(&is->audioq, packet);
    } else {
      av_free_packet(packet);
    }
  }

There's nothing new here, except that we've now given our audio and video queues a maximum size, and we've added a check for read errors. The format context has a struct of type ByteIOContext inside it called pb; ByteIOContext keeps all the low-level file information, and the url_ferror function checks it to see whether there was some kind of error reading from our file.

After the loop, we have code that waits for the rest of the program to end and then announces that we're done. This code is instructive because it shows how we push events, which is something we'll have to do later in order to display the video.

  while(!is->quit) {
    SDL_Delay(100);
  }

fail:
  if(1){
    SDL_Event event;
    event.type = FF_QUIT_EVENT;
    event.user.data1 = is;
    SDL_PushEvent(&event);
  }
  return 0;

We get values for user events from the SDL constant SDL_USEREVENT: the first user event should be assigned the value SDL_USEREVENT, the next SDL_USEREVENT + 1, and so on. In our program, FF_QUIT_EVENT is defined as SDL_USEREVENT + 2. We can also pass user data if we like; here we pass our pointer to the big struct. Finally we call SDL_PushEvent(). In our event-loop switch, we just put this next to the SDL_QUIT_EVENT case we had before. We'll look at our event loop in more detail later; for now, just be assured that when we push the FF_QUIT_EVENT, we'll catch it and raise our quit flag.
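For reference, the user-event values the text describes (FF_ALLOC_EVENT as SDL_USEREVENT, FF_REFRESH_EVENT as SDL_USEREVENT + 1, FF_QUIT_EVENT as SDL_USEREVENT + 2) would be defined along these lines:

#define FF_ALLOC_EVENT   (SDL_USEREVENT)
#define FF_REFRESH_EVENT (SDL_USEREVENT + 1)
#define FF_QUIT_EVENT    (SDL_USEREVENT + 2)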

Getting Frames: video_thread

Once we have our decoder prepared, we start the video thread. This thread reads packets off the video queue, decodes them into frames, and then calls queue_picture to put the processed frame onto the picture queue:

int video_thread(void *arg) {

  VideoState *is = (VideoState *)arg;
  AVPacket pkt1, *packet = &pkt1;
  int len1, frameFinished;
  AVFrame *pFrame;

  pFrame = avcodec_alloc_frame();

  for(;;) {
    if(packet_queue_get(&is->videoq, packet, 1) < 0) {
      // means we quit getting packets
      break;
    }
    // Decode video frame
    len1 = avcodec_decode_video(is->video_st->codec, pFrame, &frameFinished,
                                packet->data, packet->size);
    // Did we get a video frame?
    if(frameFinished) {
      if(queue_picture(is, pFrame) < 0) {
        break;
      }
    }
    av_free_packet(packet);
  }
  av_free(pFrame);
  return 0;
}

Most of these functions should be familiar by now. We've moved the avcodec_decode_video call here and just replaced some of the arguments; for example, the AVStream is now stored in our big struct, so we get our codec information from there. We simply keep pulling packets off the video queue until someone tells us to quit or we hit an error.

Queueing the Frame

Let's look at the function that stores our decoded frame, pFrame, on the picture queue. Since our picture queue is a collection of SDL overlays (presumably so the video display function has as little computation to do as possible), we need to convert the frame into the right format. The data we store on the picture queue is a struct of our own making:

typedef struct VideoPicture {
  SDL_Overlay *bmp;
  int width, height;
  int allocated;
} VideoPicture;

Our big struct has a buffer of these in which we can store them. However, we need to allocate each SDL_Overlay ourselves (note the allocated flag, which indicates whether we have done so yet).

To use this queue, we have two indices: one for writing and one for reading. We also keep count of how many actual pictures are in the buffer. To write to the queue, we first wait for our buffer to clear out so we have space to store our VideoPicture. Then we check whether we have already allocated an overlay at our write index; if not, we have to allocate some space. We also have to reallocate the buffer if the size of the window has changed. However, instead of allocating here, we hand the allocation off to avoid locking problems (I'm still not entirely sure why; I believe it's to avoid calling the SDL overlay functions from threads other than the main one).

int queue_picture(VideoState *is, AVFrame *pFrame) {

  VideoPicture *vp;
  int dst_pix_fmt;
  AVPicture pict;

  /* wait until we have space for a new picture */
  SDL_LockMutex(is->pictq_mutex);
  while(is->pictq_size >= VIDEO_PICTURE_QUEUE_SIZE &&
        !is->quit) {
    SDL_CondWait(is->pictq_cond, is->pictq_mutex);
  }
  SDL_UnlockMutex(is->pictq_mutex);

  if(is->quit)
    return -1;

  // windex is set to 0 initially
  vp = &is->pictq[is->pictq_windex];

  /* allocate or resize the buffer */
  if(!vp->bmp ||
     vp->width != is->video_st->codec->width ||
     vp->height != is->video_st->codec->height) {
    SDL_Event event;

    vp->allocated = 0;
    /* we have to do the allocation in the main thread */
    event.type = FF_ALLOC_EVENT;
    event.user.data1 = is;
    SDL_PushEvent(&event);

    /* wait until we have a picture allocated */
    SDL_LockMutex(is->pictq_mutex);
    while(!vp->allocated && !is->quit) {
      SDL_CondWait(is->pictq_cond, is->pictq_mutex);
    }
    SDL_UnlockMutex(is->pictq_mutex);
    if(is->quit) {
      return -1;
    }
  }

The event mechanism here is the same one we saw earlier when we wanted to quit. We've defined FF_ALLOC_EVENT as SDL_USEREVENT. We push the event onto the event queue and then wait on the condition variable until the allocation function has set it.

Let's look at how we modify our event loop:

for(;;) {
  SDL_WaitEvent(&event);
  switch(event.type) {
  case FF_ALLOC_EVENT:
    alloc_picture(event.user.data1);
    break;

Remember that event.user.data1 is our big struct. It's that simple. Let's look at the alloc_picture() function:

void alloc_picture(void *userdata) {

  VideoState *is = (VideoState *)userdata;
  VideoPicture *vp;

  vp = &is->pictq[is->pictq_windex];
  if(vp->bmp) {
    // we already have one; make another, bigger/smaller
    SDL_FreeYUVOverlay(vp->bmp);
  }
  // Allocate a place to put our YUV image on that screen
  vp->bmp = SDL_CreateYUVOverlay(is->video_st->codec->width,
                                 is->video_st->codec->height,
                                 SDL_YV12_OVERLAY,
                                 screen);
  vp->width = is->video_st->codec->width;
  vp->height = is->video_st->codec->height;

  SDL_LockMutex(is->pictq_mutex);
  vp->allocated = 1;
  SDL_CondSignal(is->pictq_cond);
  SDL_UnlockMutex(is->pictq_mutex);
}

You can see that we've moved the SDL_CreateYUVOverlay call from the main loop into here. This code should be fairly self-explanatory by now. Remember that we save the width and height in the VideoPicture struct because we need to be able to tell whether our video's size has changed for some reason.

OK, we're almost there: we can now allocate a YUV overlay and we're ready to receive pictures. Let's go back to queue_picture and look at the code for copying the frame into the overlay. You should recognize most of it:

int queue_picture(VideoState *is, AVFrame *pFrame) {

  /* ... declarations and the allocation code shown above ... */

  if(vp->bmp) {

    SDL_LockYUVOverlay(vp->bmp);

    dst_pix_fmt = PIX_FMT_YUV420P;
    /* point pict at our overlay; the U and V planes are swapped
       because the overlay is YV12 while ffmpeg gives us YUV420P */
    pict.data[0] = vp->bmp->pixels[0];
    pict.data[1] = vp->bmp->pixels[2];
    pict.data[2] = vp->bmp->pixels[1];

    pict.linesize[0] = vp->bmp->pitches[0];
    pict.linesize[1] = vp->bmp->pitches[2];
    pict.linesize[2] = vp->bmp->pitches[1];

    // Convert the image into the YUV format that SDL uses
    img_convert(&pict, dst_pix_fmt,
                (AVPicture *)pFrame, is->video_st->codec->pix_fmt,
                is->video_st->codec->width, is->video_st->codec->height);

    SDL_UnlockYUVOverlay(vp->bmp);

    /* now we inform our display thread that we have a pic ready */
    if(++is->pictq_windex == VIDEO_PICTURE_QUEUE_SIZE) {
      is->pictq_windex = 0;
    }
    SDL_LockMutex(is->pictq_mutex);
    is->pictq_size++;
    SDL_UnlockMutex(is->pictq_mutex);
  }
  return 0;
}

This part is mostly just the code we used earlier to fill the YUV overlay with our frame. The last bit is simply "adding one" to the queue. The queue works by adding to it until it is full and reading from it as long as there is something on it, so everything depends on the is->pictq_size value, which is why we have to lock it. What we do here is increment the write index (wrapping around when necessary), then lock the queue and increase its size. Now our reader function will know there is more information on the queue, and if the queue is full, our writer function will know about it.

Displaying the Video

That's it for our video thread. Now we've covered just about every thread except one; remember the schedule_refresh() function we called earlier? Let's look at what it actually does:

static void schedule_refresh(VideoState *is, int delay) {
  SDL_AddTimer(delay, sdl_refresh_timer_cb, is);
}

SDL_AddTimer() is an SDL function that makes a callback to a user-defined function after the specified number of milliseconds (optionally carrying some user data). We'll use it to schedule video refreshes: every time we call it, it sets a timer that will trigger an event, which in turn makes our event loop call a function that pulls a frame from the picture queue and displays it on the screen.
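For reference, SDL 1.2's timer API looks like this (the callback runs on a separate timer thread, which is part of why the actual display work is pushed back into the event loop rather than done in the callback itself):

SDL_TimerID SDL_AddTimer(Uint32 interval,
                         SDL_NewTimerCallback callback,
                         void *param);

/* the callback type matches sdl_refresh_timer_cb below */
typedef Uint32 (*SDL_NewTimerCallback)(Uint32 interval, void *param);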

But first, let's trigger that event.

static Uint32 sdl_refresh_timer_cb(Uint32 interval, void *opaque) {
  SDL_Event event;
  event.type = FF_REFRESH_EVENT;
  event.user.data1 = opaque;
  SDL_PushEvent(&event);
  return 0; /* 0 means stop the timer */
}

Here we push the now-familiar event onto the queue. FF_REFRESH_EVENT is defined here as SDL_USEREVENT + 1. One thing to note is that when we return 0, SDL stops the timer, so the callback will not be made again.
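As a hypothetical aside (not part of the tutorial's code): SDL re-arms the timer with whatever interval the callback returns, so a callback that wanted to keep firing on a fixed period could return its interval instead of 0:

static Uint32 repeating_refresh_cb(Uint32 interval, void *opaque) {
  SDL_Event event;
  event.type = FF_REFRESH_EVENT;
  event.user.data1 = opaque;
  SDL_PushEvent(&event);
  return interval; /* re-arm: fire again after the same delay */
}

The tutorial deliberately returns 0 and reschedules by hand instead, because once syncing is added the delay will need to change from one frame to the next.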

Now that we've pushed an FF_REFRESH_EVENT, we need to handle it in the event loop:

for(;;) {
  SDL_WaitEvent(&event);
  switch(event.type) {
  case FF_REFRESH_EVENT:
    video_refresh_timer(event.user.data1);
    break;

And that sends us to this function, which pulls the data from our picture queue:

void video_refresh_timer(void *userdata) {

  VideoState *is = (VideoState *)userdata;
  VideoPicture *vp;

  if(is->video_st) {
    if(is->pictq_size == 0) {
      /* nothing decoded yet; try again very soon */
      schedule_refresh(is, 1);
    } else {
      vp = &is->pictq[is->pictq_rindex];
      /* timing code goes here */
      schedule_refresh(is, 80);
      /* show the picture! */
      video_display(is);
      /* update queue for next picture */
      if(++is->pictq_rindex == VIDEO_PICTURE_QUEUE_SIZE) {
        is->pictq_rindex = 0;
      }
      SDL_LockMutex(is->pictq_mutex);
      is->pictq_size--;
      SDL_CondSignal(is->pictq_cond);
      SDL_UnlockMutex(is->pictq_mutex);
    }
  } else {
    schedule_refresh(is, 100);
  }
}

For now, this is a fairly simple function: it pulls from the queue when we have something, sets the timer for when the next video frame should be shown, calls video_display to actually put the video on the screen, then increments the read index on the queue and decrements the queue's size. You may notice that we don't actually do anything with vp in this function, and here's why: we will. Later, when we start syncing video to audio, we'll use it to access timing information. See the spot where the comment says "timing code goes here"? In that section we'll figure out how soon we should show the next video frame, and then enter that value into schedule_refresh(). For now we're just putting in a dummy value of 80. Technically, you could guess and check this value and recompile the program for every movie you watch, but 1) it would drift after a while and 2) it's quite silly. We'll come back to this later.
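To see why a fixed 80 ms is only a stopgap, consider a 25 fps movie: it needs a new frame every 1000 / 25 = 40 ms, so refreshing every 80 ms plays it at half speed, and even a perfectly chosen constant would drift because decoding and displaying each frame takes a variable amount of time. The next tutorial replaces this constant with a delay computed from the stream's timestamps.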

We're almost done; we have just one last thing to do: display the video! Here's the video_display function:

void video_display(VideoState *is) {

  SDL_Rect rect;
  VideoPicture *vp;
  AVPicture pict;
  float aspect_ratio;
  int w, h, x, y;
  int i;

  vp = &is->pictq[is->pictq_rindex];
  if(vp->bmp) {
    if(is->video_st->codec->sample_aspect_ratio.num == 0) {
      aspect_ratio = 0;
    } else {
      aspect_ratio = av_q2d(is->video_st->codec->sample_aspect_ratio) *
        is->video_st->codec->width / is->video_st->codec->height;
    }
    if(aspect_ratio <= 0.0) {
      aspect_ratio = (float)is->video_st->codec->width /
        (float)is->video_st->codec->height;
    }
    h = screen->h;
    w = ((int)rint(h * aspect_ratio)) & -3;
    if(w > screen->w) {
      w = screen->w;
      h = ((int)rint(w / aspect_ratio)) & -3;
    }
    x = (screen->w - w) / 2;
    y = (screen->h - h) / 2;

    rect.x = x;
    rect.y = y;
    rect.w = w;
    rect.h = h;
    SDL_DisplayYUVOverlay(vp->bmp, &rect);
  }
}

Since our screen can be of any size (we set ours to 640x480, and there are ways to let the user resize it), we need to dynamically figure out how big a rectangle the movie should occupy. So first we figure out the movie's aspect ratio, which is just its width divided by its height. Some codecs have an odd sample aspect ratio, which is simply the width-to-height ratio of a single pixel (or sample). Since the width and height values in our codec context are measured in pixels, the actual aspect ratio equals the stored width/height ratio multiplied by the sample aspect ratio. Some codecs report an aspect ratio of 0, which indicates that each pixel is simply 1x1. We then scale the movie as large as it will fit on our screen; the "& -3" clears a low bit of the computed value, which the original tutorial describes as rounding to the nearest multiple of 4 to keep the dimensions aligned. Finally we center the movie and call SDL_DisplayYUVOverlay().
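As a worked example with made-up but typical numbers: an anamorphic 720x480 stream with a sample aspect ratio of 32/27 gives aspect_ratio = (32/27) * (720/480) = 16/9, roughly 1.78. On a 480-pixel-high screen, w = rint(480 * 1.78) & -3 = 853, which overflows a 640-pixel-wide screen, so the code falls back to w = 640 and h = rint(640 / 1.78) & -3 = 360, centered at x = 0, y = (480 - 360) / 2 = 60.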

So, what's the result? Are we done? Well, we still have to rewrite the audio code to use the new VideoState struct, but those changes are trivial, and you can look at the sample code for reference. The last thing we have to do is change ffmpeg's internal quit callback to our own quitting function:

VideoState *global_video_state;

int decode_interrupt_cb(void) {
  return (global_video_state && global_video_state->quit);
}

We set global_video_state to our big struct in main().
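The registration step isn't shown in this excerpt. In the ffmpeg versions this tutorial targets, the hook would presumably be installed once in main() via the old url_set_interrupt_cb API; treat this line as an assumption about the unshown code rather than a quote from it:

url_set_interrupt_cb(decode_interrupt_cb); /* assumed old-ffmpeg registration hook */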

And that's it! Let's compile:

gcc -o tutorial04 tutorial04.c -lavutil -lavformat -lavcodec -lz -lm \
`sdl-config --cflags --libs`
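Since main() takes the movie's filename as argv[1], a run looks something like this (the filename is just an example):

./tutorial04 myvideo.mpg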

Enjoy your unsynced movie! Next time we'll finally build a video player that actually works.


Originally published at http://blog.csdn.net/jinhaijian/article/details/5831335
