
Other articles (60)

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011, by

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals, rapid deployment of multiple unique sites, and creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MediaSPIP is at version 0.2 or higher. If necessary, contact the administrator of your MediaSPIP to find out.

On other sites (11089)

  • Raspberry Pi Camera Module - Stream to LAN

    20 August 2015, by user3096434

    I have a little problem with the setup of my RasPi camera infrastructure. Basically, I have an RPi 2 which shall act as a motionEye server from now on, and two Pi B+ units with camera modules.

    Previously, when I had only one camera in my network, I used the following command to stream the output from the RPi B+ camera module to YouTube in full HD. So far, this command works flawlessly:

    raspivid -n -vf -hf -t 0 -w 1920 -h 1080 -fps 30 -b 3750000 -g 50 -o - | ffmpeg -ar 8000 -ac 2 -acodec pcm_s16le -f s16le -ac 2 -i /dev/zero -f h264 -i - -vcodec copy -acodec aac -ab 64k -g 50 -strict experimental -f flv $RTMP_URL/$STREAM_KEY

    Now I have a second RPi with a camera module and figured it might be time for a change towards motionEye, as I can then view both/all cameras in my network within the same software. I have motionEye installed on my RPi 2 and the software is running correctly.

    I have a little problem when it comes to accessing the data stream from the RPi B+ camera on my local network.

    Basically, I cannot seem to figure out how to change the ffmpeg portion of the above-mentioned command so that it streams the data to localhost (or to the RPi 2 IP where motionEye runs; which one should I use?) instead of to YouTube or any other video host.

    I wonder if changing the following part is a correct assumption:

    Instead of using variables to define the YouTube URL and key:

    -f flv $RTMP_URL/$STREAM_KEY

    I would change this to:

    -f flv 10.1.1.11:8080

    Will I then be able to add this RPi B+ video stream to my RPi 2 motionEye server by using motionEye’s ’add network camera’ function?

    From my understanding, I should be able to enter the following details into the motionEye ’add network camera’ wizard:

    Camera type: network camera
    RTSP-URL: 10.1.1.11:8080
    User: Pi
    Pass: [my pwd]
    Camera: [my ffmpeg stream shall show here]
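    For what it’s worth (an editorial sketch, not part of the original question): a bare host:port after -f flv will not be accepted, since ffmpeg expects a full rtmp:// URL there, and something on the receiving Pi must actually run an RTMP (or RTSP) server; motionEye’s ’add network camera’ wizard likewise expects a camera URL (RTSP or MJPEG) rather than an incoming RTMP push. Assuming an RTMP server (e.g. nginx with the rtmp module) listening on the RPi 2 at 10.1.1.11, with made-up "live"/"cam1" app and stream names, the changed part might look like:

    -f flv rtmp://10.1.1.11:1935/live/cam1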

    Thanks in advance!

    Uhm, and then... how do I forward the video stream from a given camera connected to motionEye? Like from motionEye to YouTube (or similar), without re-encoding the stream?

    The command shown above streams directly to YouTube. But I want it set up so that the video is streamed to the local network/motionEye server, and from there I can decide which camera’s stream, and when, to send on to YouTube.

    How would an RPi professional realize this?

    The command above, explained: it takes full HD video at 30 fps from the Pi camera module and hardware-encodes it on the GPU at 3.75 Mbit/s. Then I streamcopy the video (no re-encoding) and add some audio so that the stream complies with YouTube’s rules (yes, no live stream without audio). Audio is taken from the virtual SB16 /dev/zero at a low sampling rate, then encoded to 32k AAC and sent to YouTube. Works fine xD

    It’s just that when I have three or more of these RPi cams, the YouTube stream approach isn’t feasible anymore, as my DSL upstream is limited (10 Mbit/s). Thus I need the motionEye server and some magic, so I can watch e.g. all three cameras’ video streams, and then the motionEye server can select and streamcopy the video from whichever Pi cam I choose and send it to YouTube, as the original command did.

    Any help, tips, links to similar projects highly appreciated.

    Again, many thanks in advance, and even more thanks just because you read this far.

    —mx

  • Understanding User Flow : Eight Practical Examples for Better UX

    4 octobre 2024, par Daniel Crough — Analytics Tips, UX

    Imagine trying to cook a complex dish without a recipe. You might have all the ingredients, but without a clear sequence of steps, you’re likely to end up with a mess rather than a masterpiece. Similarly, designing a website without understanding user flow is a recipe for confusion and lost conversions.

    User flows—the paths visitors take through your site—are the recipes for digital success. They provide a clear sequence of steps that guide users towards their goals, whether that’s making a purchase, signing up for a newsletter, or finding information. By understanding and optimising these flows, you can create a website that’s not just functional, but delightfully intuitive.

    In this article, we’ll explore eight practical user flow examples and show you how to apply these insights using web analytics tools. Whether you’re new to user experience (UX) design or a seasoned professional, you’ll find valuable takeaways to enhance your digital strategy.

    Read More

  • Increasing memory occupancy while recording the screen to disk with ffmpeg [on hold]

    19 May 2016, by vbtang

    I wonder if there are any resources I didn’t free, or do I need to do something special so that I can free this memory?

    PS: I ran the demo step by step and found it has the same problem: even after it quits the main function, it still has 40 MB of memory occupied. I also found that the memory increases noticeably in the screen-capture thread, but once it has grown to about 150 MB it stops increasing; when I quit the program, 40 MB of memory is still left over. I feel confused.

    PPS: I downloaded the ffmpeg dev and shared builds from here: https://ffmpeg.zeranoe.com/builds/. It seems to be a debug-version DLL? Do I need a release version?
    Here is my demo code.

       #include "stdafx.h"

    #ifdef  __cplusplus
    extern "C"
    {
    #endif
    #include "libavcodec/avcodec.h"
    #include "libavformat/avformat.h"
    #include "libswscale/swscale.h"
    #include "libavdevice/avdevice.h"
    #include "libavutil/audio_fifo.h"

    #pragma comment(lib, "avcodec.lib")
    #pragma comment(lib, "avformat.lib")
    #pragma comment(lib, "avutil.lib")
    #pragma comment(lib, "avdevice.lib")
    #pragma comment(lib, "avfilter.lib")

    //#pragma comment(lib, "avfilter.lib")
    //#pragma comment(lib, "postproc.lib")
    //#pragma comment(lib, "swresample.lib")
    #pragma comment(lib, "swscale.lib")
    #ifdef __cplusplus
    };
    #endif

    AVFormatContext *pFormatCtx_Video = NULL, *pFormatCtx_Audio = NULL, *pFormatCtx_Out = NULL;
    AVCodecContext  *pCodecCtx_Video;
    AVCodec         *pCodec_Video;
    AVFifoBuffer    *fifo_video = NULL;
    AVAudioFifo     *fifo_audio = NULL;
    int VideoIndex, AudioIndex;

    CRITICAL_SECTION AudioSection, VideoSection;



    SwsContext *img_convert_ctx;
    int frame_size = 0;

    uint8_t *picture_buf = NULL, *frame_buf = NULL;

    bool bCap = true;

    DWORD WINAPI ScreenCapThreadProc( LPVOID lpParam );
    DWORD WINAPI AudioCapThreadProc( LPVOID lpParam );

    int OpenVideoCapture()
    {
       AVInputFormat *ifmt=av_find_input_format("gdigrab");
       //
       AVDictionary *options = NULL;
       av_dict_set(&options, "framerate", "15", NULL);
       //av_dict_set(&options,"offset_x","20",0);
       //The distance from the top edge of the screen or desktop
       //av_dict_set(&options,"offset_y","40",0);
       //Video frame size. The default is to capture the full screen
       //av_dict_set(&options,"video_size","320x240",0);
       if(avformat_open_input(&pFormatCtx_Video, "desktop", ifmt, &options)!=0)
       {
           printf("Couldn't open input stream.\n");
           return -1;
       }
       if(avformat_find_stream_info(pFormatCtx_Video,NULL)<0)
       {
           printf("Couldn't find stream information.\n");
           return -1;
       }
       if (pFormatCtx_Video->streams[0]->codec->codec_type != AVMEDIA_TYPE_VIDEO)
       {
           printf("Couldn't find video stream information.\n");
           return -1;
       }
       pCodecCtx_Video = pFormatCtx_Video->streams[0]->codec;
       pCodec_Video = avcodec_find_decoder(pCodecCtx_Video->codec_id);
       if(pCodec_Video == NULL)
       {
           printf("Codec not found.\n");
           return -1;
       }
       if(avcodec_open2(pCodecCtx_Video, pCodec_Video, NULL) < 0)
       {
           printf("Could not open codec.\n");
           return -1;
       }



       img_convert_ctx = sws_getContext(pCodecCtx_Video->width, pCodecCtx_Video->height, pCodecCtx_Video->pix_fmt,
           pCodecCtx_Video->width, pCodecCtx_Video->height, AV_PIX_FMT_YUV420P, SWS_BICUBIC, NULL, NULL, NULL);

       frame_size = avpicture_get_size(pCodecCtx_Video->pix_fmt, pCodecCtx_Video->width, pCodecCtx_Video->height);
       //
       fifo_video = av_fifo_alloc(30 * avpicture_get_size(AV_PIX_FMT_YUV420P, pCodecCtx_Video->width, pCodecCtx_Video->height));

       return 0;
    }

    static char *dup_wchar_to_utf8(wchar_t *w)
    {
       char *s = NULL;
       int l = WideCharToMultiByte(CP_UTF8, 0, w, -1, 0, 0, 0, 0);
       s = (char *) av_malloc(l);
       if (s)
           WideCharToMultiByte(CP_UTF8, 0, w, -1, s, l, 0, 0);
       return s;
    }

    int OpenAudioCapture()
    {
       //
       AVInputFormat *pAudioInputFmt = av_find_input_format("dshow");

       //
       char * psDevName = dup_wchar_to_utf8(L"audio=virtual-audio-capturer");

       if (avformat_open_input(&pFormatCtx_Audio, psDevName, pAudioInputFmt,NULL) < 0)
       {
           printf("Couldn't open input stream.\n");
           return -1;
       }

       if(avformat_find_stream_info(pFormatCtx_Audio,NULL)<0)  
           return -1;

       if(pFormatCtx_Audio->streams[0]->codec->codec_type != AVMEDIA_TYPE_AUDIO)
       {
           printf("Couldn't find audio stream information.\n");
           return -1;
       }

       AVCodec *tmpCodec = avcodec_find_decoder(pFormatCtx_Audio->streams[0]->codec->codec_id);
       if(0 > avcodec_open2(pFormatCtx_Audio->streams[0]->codec, tmpCodec, NULL))
       {
           printf("can not find or open audio decoder!\n");
       }



       return 0;
    }

    int OpenOutPut()
    {
       AVStream *pVideoStream = NULL, *pAudioStream = NULL;
       const char *outFileName = "test.mp4";
       avformat_alloc_output_context2(&pFormatCtx_Out, NULL, NULL, outFileName);

       if (pFormatCtx_Video->streams[0]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
       {
           VideoIndex = 0;
           pVideoStream = avformat_new_stream(pFormatCtx_Out, NULL);

           if (!pVideoStream)
           {
               printf("can not new stream for output!\n");
               return -1;
           }

           //set codec context param
           pVideoStream->codec->codec = avcodec_find_encoder(AV_CODEC_ID_MPEG4);
           pVideoStream->codec->height = pFormatCtx_Video->streams[0]->codec->height;
           pVideoStream->codec->width = pFormatCtx_Video->streams[0]->codec->width;

           pVideoStream->codec->time_base = pFormatCtx_Video->streams[0]->codec->time_base;
           pVideoStream->codec->sample_aspect_ratio = pFormatCtx_Video->streams[0]->codec->sample_aspect_ratio;
           // take first format from list of supported formats
           pVideoStream->codec->pix_fmt = pFormatCtx_Out->streams[VideoIndex]->codec->codec->pix_fmts[0];

           //open encoder
           if (!pVideoStream->codec->codec)
           {
               printf("can not find the encoder!\n");
               return -1;
           }

           if (pFormatCtx_Out->oformat->flags & AVFMT_GLOBALHEADER)
               pVideoStream->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;

           if ((avcodec_open2(pVideoStream->codec, pVideoStream->codec->codec, NULL)) < 0)
           {
               printf("can not open the encoder\n");
               return -1;
           }
       }

       if(pFormatCtx_Audio->streams[0]->codec->codec_type == AVMEDIA_TYPE_AUDIO)
       {
           AVCodecContext *pOutputCodecCtx;
           AudioIndex = 1;
           pAudioStream = avformat_new_stream(pFormatCtx_Out, NULL);

           pAudioStream->codec->codec = avcodec_find_encoder(pFormatCtx_Out->oformat->audio_codec);

           pOutputCodecCtx = pAudioStream->codec;

           pOutputCodecCtx->sample_rate = pFormatCtx_Audio->streams[0]->codec->sample_rate;
           pOutputCodecCtx->channel_layout = pFormatCtx_Audio->streams[0]->codec->channel_layout;
           pOutputCodecCtx->channels = av_get_channel_layout_nb_channels(pAudioStream->codec->channel_layout);
           if(pOutputCodecCtx->channel_layout == 0)
           {
               pOutputCodecCtx->channel_layout = AV_CH_LAYOUT_STEREO;
               pOutputCodecCtx->channels = av_get_channel_layout_nb_channels(pOutputCodecCtx->channel_layout);

           }
           pOutputCodecCtx->sample_fmt = pAudioStream->codec->codec->sample_fmts[0];
           AVRational time_base={1, pAudioStream->codec->sample_rate};
           pAudioStream->time_base = time_base;
           //audioCodecCtx->time_base = time_base;

           pOutputCodecCtx->codec_tag = 0;  
           if (pFormatCtx_Out->oformat->flags & AVFMT_GLOBALHEADER)  
               pOutputCodecCtx->flags |= CODEC_FLAG_GLOBAL_HEADER;

           if (avcodec_open2(pOutputCodecCtx, pOutputCodecCtx->codec, 0) < 0)
           {
               //
               return -1;
           }
       }

       if (!(pFormatCtx_Out->oformat->flags & AVFMT_NOFILE))
       {
           if(avio_open(&pFormatCtx_Out->pb, outFileName, AVIO_FLAG_WRITE) < 0)
           {
               printf("can not open output file handle!\n");
               return -1;
           }
       }

       if(avformat_write_header(pFormatCtx_Out, NULL) < 0)
       {
           printf("can not write the header of the output file!\n");
           return -1;
       }

       return 0;
    }

    int _tmain(int argc, _TCHAR* argv[])
    {
       av_register_all();
       avdevice_register_all();
       if (OpenVideoCapture() < 0)
       {
           return -1;
       }
       if (OpenAudioCapture() < 0)
       {
           return -1;
       }
       if (OpenOutPut() < 0)
       {
           return -1;
       }

       InitializeCriticalSection(&VideoSection);
       InitializeCriticalSection(&AudioSection);

       AVFrame *picture = av_frame_alloc();
       int size = avpicture_get_size(pFormatCtx_Out->streams[VideoIndex]->codec->pix_fmt,
           pFormatCtx_Out->streams[VideoIndex]->codec->width, pFormatCtx_Out->streams[VideoIndex]->codec->height);
       picture_buf = new uint8_t[size];

       avpicture_fill((AVPicture *)picture, picture_buf,
           pFormatCtx_Out->streams[VideoIndex]->codec->pix_fmt,
           pFormatCtx_Out->streams[VideoIndex]->codec->width,
           pFormatCtx_Out->streams[VideoIndex]->codec->height);



       //star cap screen thread
       CreateThread( NULL, 0, ScreenCapThreadProc, 0, 0, NULL);
       //star cap audio thread
       CreateThread( NULL, 0, AudioCapThreadProc, 0, 0, NULL);
       int64_t cur_pts_v=0,cur_pts_a=0;
       int VideoFrameIndex = 0, AudioFrameIndex = 0;

       while(1)
       {
           if (_kbhit() != 0 && bCap)
           {
               bCap = false;
               Sleep(2000);//
           }
           if (fifo_audio && fifo_video)
           {
               int sizeAudio = av_audio_fifo_size(fifo_audio);
               int sizeVideo = av_fifo_size(fifo_video);
               //
               if (av_audio_fifo_size(fifo_audio) <= pFormatCtx_Out->streams[AudioIndex]->codec->frame_size &&
                   av_fifo_size(fifo_video) <= frame_size && !bCap)
               {
                   break;
               }
           }

           if(av_compare_ts(cur_pts_v, pFormatCtx_Out->streams[VideoIndex]->time_base,
               cur_pts_a,pFormatCtx_Out->streams[AudioIndex]->time_base) <= 0)
           {
               //read data from fifo
               if (av_fifo_size(fifo_video) < frame_size && !bCap)
               {
                   cur_pts_v = 0x7fffffffffffffff;
               }
               if(av_fifo_size(fifo_video) >= size)
               {
                   EnterCriticalSection(&VideoSection);
                   av_fifo_generic_read(fifo_video, picture_buf, size, NULL);
                   LeaveCriticalSection(&VideoSection);

                   avpicture_fill((AVPicture *)picture, picture_buf,
                       pFormatCtx_Out->streams[VideoIndex]->codec->pix_fmt,
                       pFormatCtx_Out->streams[VideoIndex]->codec->width,
                       pFormatCtx_Out->streams[VideoIndex]->codec->height);

                    //pts = n * ((1 / timebase) / fps);
                   picture->pts = VideoFrameIndex * ((pFormatCtx_Video->streams[0]->time_base.den / pFormatCtx_Video->streams[0]->time_base.num) / 15);

                   int got_picture = 0;
                   AVPacket pkt;
                   av_init_packet(&pkt);

                   pkt.data = NULL;
                   pkt.size = 0;
                   int ret = avcodec_encode_video2(pFormatCtx_Out->streams[VideoIndex]->codec, &pkt, picture, &got_picture);
                   if(ret < 0)
                   {
                       //
                       continue;
                   }

                   if (got_picture==1)
                   {
                       pkt.stream_index = VideoIndex;
                       pkt.pts = av_rescale_q_rnd(pkt.pts, pFormatCtx_Video->streams[0]->time_base,
                           pFormatCtx_Out->streams[VideoIndex]->time_base, (AVRounding)(AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX));  
                       pkt.dts = av_rescale_q_rnd(pkt.dts,  pFormatCtx_Video->streams[0]->time_base,
                           pFormatCtx_Out->streams[VideoIndex]->time_base, (AVRounding)(AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX));  

                       pkt.duration = ((pFormatCtx_Out->streams[0]->time_base.den / pFormatCtx_Out->streams[0]->time_base.num) / 15);

                       cur_pts_v = pkt.pts;

                       ret = av_interleaved_write_frame(pFormatCtx_Out, &pkt);
                       //delete[] pkt.data;
                       av_free_packet(&pkt);
                   }
                   VideoFrameIndex++;
               }
           }
           else
           {
               if (NULL == fifo_audio)
               {
                   continue;//
               }
               if (av_audio_fifo_size(fifo_audio) < pFormatCtx_Out->streams[AudioIndex]->codec->frame_size && !bCap)
               {
                   cur_pts_a = 0x7fffffffffffffff;
               }
               if(av_audio_fifo_size(fifo_audio) >=
                   (pFormatCtx_Out->streams[AudioIndex]->codec->frame_size > 0 ? pFormatCtx_Out->streams[AudioIndex]->codec->frame_size : 1024))
               {
                   AVFrame *frame;
                   frame = av_frame_alloc();
                   frame->nb_samples = pFormatCtx_Out->streams[AudioIndex]->codec->frame_size>0 ? pFormatCtx_Out->streams[AudioIndex]->codec->frame_size: 1024;
                   frame->channel_layout = pFormatCtx_Out->streams[AudioIndex]->codec->channel_layout;
                   frame->format = pFormatCtx_Out->streams[AudioIndex]->codec->sample_fmt;
                   frame->sample_rate = pFormatCtx_Out->streams[AudioIndex]->codec->sample_rate;
                   av_frame_get_buffer(frame, 0);

                   EnterCriticalSection(&AudioSection);
                   av_audio_fifo_read(fifo_audio, (void **)frame->data,
                       (pFormatCtx_Out->streams[AudioIndex]->codec->frame_size > 0 ? pFormatCtx_Out->streams[AudioIndex]->codec->frame_size : 1024));
                   LeaveCriticalSection(&AudioSection);

                    if (pFormatCtx_Out->streams[AudioIndex]->codec->sample_fmt != pFormatCtx_Audio->streams[0]->codec->sample_fmt
                        || pFormatCtx_Out->streams[AudioIndex]->codec->channels != pFormatCtx_Audio->streams[0]->codec->channels
                        || pFormatCtx_Out->streams[AudioIndex]->codec->sample_rate != pFormatCtx_Audio->streams[0]->codec->sample_rate)
                   {
                       //
                   }

                   AVPacket pkt_out;
                   av_init_packet(&pkt_out);
                   int got_picture = -1;
                   pkt_out.data = NULL;
                   pkt_out.size = 0;

                   frame->pts = AudioFrameIndex * pFormatCtx_Out->streams[AudioIndex]->codec->frame_size;
                   if (avcodec_encode_audio2(pFormatCtx_Out->streams[AudioIndex]->codec, &pkt_out, frame, &got_picture) < 0)
                   {
                        printf("can not encode an audio frame\n");
                   }
                   av_frame_free(&frame);
                   if (got_picture)
                   {
                       pkt_out.stream_index = AudioIndex;
                       pkt_out.pts = AudioFrameIndex * pFormatCtx_Out->streams[AudioIndex]->codec->frame_size;
                       pkt_out.dts = AudioFrameIndex * pFormatCtx_Out->streams[AudioIndex]->codec->frame_size;
                       pkt_out.duration = pFormatCtx_Out->streams[AudioIndex]->codec->frame_size;

                       cur_pts_a = pkt_out.pts;

                       int ret = av_interleaved_write_frame(pFormatCtx_Out, &pkt_out);
                       av_free_packet(&pkt_out);
                   }
                   AudioFrameIndex++;
               }
           }
       }

       delete[] picture_buf;

       delete[] frame_buf;
       av_fifo_free(fifo_video);
       av_audio_fifo_free(fifo_audio);

       av_write_trailer(pFormatCtx_Out);

       avio_close(pFormatCtx_Out->pb);
       avformat_free_context(pFormatCtx_Out);

       if (pFormatCtx_Video != NULL)
       {
           avformat_close_input(&pFormatCtx_Video);
           pFormatCtx_Video = NULL;
       }
       if (pFormatCtx_Audio != NULL)
       {
           avformat_close_input(&pFormatCtx_Audio);
           pFormatCtx_Audio = NULL;
       }
       if (NULL != img_convert_ctx)
       {
           sws_freeContext(img_convert_ctx);
           img_convert_ctx = NULL;
       }

       return 0;
    }

    DWORD WINAPI ScreenCapThreadProc( LPVOID lpParam )
    {
       AVPacket packet; /* = (AVPacket *)av_malloc(sizeof(AVPacket)) */
       int got_picture;
       AVFrame *pFrame;
       pFrame= av_frame_alloc();

       AVFrame *picture = av_frame_alloc();
       int size = avpicture_get_size(pFormatCtx_Out->streams[VideoIndex]->codec->pix_fmt,
           pFormatCtx_Out->streams[VideoIndex]->codec->width, pFormatCtx_Out->streams[VideoIndex]->codec->height);
       //picture_buf = new uint8_t[size];

       avpicture_fill((AVPicture *)picture, picture_buf,
           pFormatCtx_Out->streams[VideoIndex]->codec->pix_fmt,
           pFormatCtx_Out->streams[VideoIndex]->codec->width,
           pFormatCtx_Out->streams[VideoIndex]->codec->height);

    //  FILE *p = NULL;
    //  p = fopen("proc_test.yuv", "wb+");
       av_init_packet(&packet);
       int height = pFormatCtx_Out->streams[VideoIndex]->codec->height;
       int width = pFormatCtx_Out->streams[VideoIndex]->codec->width;
       int y_size=height*width;
       while(bCap)
       {
           packet.data = NULL;
           packet.size = 0;
           if (av_read_frame(pFormatCtx_Video, &packet) < 0)
           {
               av_free_packet(&packet);
               continue;
           }
           if(packet.stream_index == 0)
           {
               if (avcodec_decode_video2(pCodecCtx_Video, pFrame, &got_picture, &packet) < 0)
               {
                   printf("Decode Error.\n");
                   continue;
               }
               if (got_picture)
               {
                   sws_scale(img_convert_ctx, (const uint8_t* const*)pFrame->data, pFrame->linesize, 0,
                       pFormatCtx_Out->streams[VideoIndex]->codec->height, picture->data, picture->linesize);

                   if (av_fifo_space(fifo_video) >= size)
                   {
                       EnterCriticalSection(&VideoSection);                    
                       av_fifo_generic_write(fifo_video, picture->data[0], y_size, NULL);
                       av_fifo_generic_write(fifo_video, picture->data[1], y_size/4, NULL);
                       av_fifo_generic_write(fifo_video, picture->data[2], y_size/4, NULL);
                       LeaveCriticalSection(&VideoSection);
                   }
               }
           }
           av_free_packet(&packet);
           //Sleep(50);
       }
       av_frame_free(&pFrame);
       av_frame_free(&picture);
       //delete[] picture_buf;
       return 0;
    }

    DWORD WINAPI AudioCapThreadProc( LPVOID lpParam )
    {
       AVPacket pkt;
       AVFrame *frame;
       frame = av_frame_alloc();
       int gotframe;
       while(bCap)
       {
           pkt.data = NULL;
           pkt.size = 0;
           if(av_read_frame(pFormatCtx_Audio,&pkt) < 0)
           {
               av_free_packet(&pkt);
               continue;
           }

           if (avcodec_decode_audio4(pFormatCtx_Audio->streams[0]->codec, frame, &gotframe, &pkt) < 0)
           {
               av_frame_free(&frame);
               printf("can not decode a frame\n");
               break;
           }
           av_free_packet(&pkt);

           if (!gotframe)
           {
               continue;//
           }

           if (NULL == fifo_audio)
           {
               fifo_audio = av_audio_fifo_alloc(pFormatCtx_Audio->streams[0]->codec->sample_fmt,
                   pFormatCtx_Audio->streams[0]->codec->channels, 30 * frame->nb_samples);
           }

           int buf_space = av_audio_fifo_space(fifo_audio);
           if (av_audio_fifo_space(fifo_audio) >= frame->nb_samples)
           {
               EnterCriticalSection(&AudioSection);
               av_audio_fifo_write(fifo_audio, (void **)frame->data, frame->nb_samples);
               LeaveCriticalSection(&AudioSection);
           }
       }
       av_frame_free(&frame);
       return 0;
    }