Advanced search

Media (2)

Keyword: - Tags -/documentation

Other articles (13)

  • Emballe médias: what is it for?

    4 February 2011, by

    This plugin is designed to manage sites that publish documents of all types.
    It creates "media" items, as follows: a "media" item is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only one document can be linked to a "media" article;

  • Possible deployments

    31 January 2010, by

    Two types of deployment can be considered, depending on two aspects: the intended installation method (standalone or as a farm); the expected number of daily encodings and the expected traffic.
    Video encoding is a heavy process that consumes a great deal of system resources (CPU and RAM), so all of this must be taken into account. The system is therefore only feasible on one or more dedicated servers.
    Single-server version
    The single-server version consists of using only one (...)

  • Managing object creation and editing rights

    8 February 2011, by

    By default, many features are restricted to administrators, but each can be configured independently to change the minimum status required to use it, notably: writing content on the site, adjustable in the form template management; adding notes to articles; adding captions and annotations to images;

On other sites (4031)

  • Xuggle - Concatenate two videos - Error - java.lang.RuntimeException: error -1094995529 decoding audio

    1 April 2013, by user2232357

    I am using the Xuggle API to concatenate two MPEG videos (with audio built into the MPEGs).
    I am following https://code.google.com/p/xuggle/source/browse/trunk/java/xuggle-xuggler/src/com/xuggle/mediatool/demos/ConcatenateAudioAndVideo.java?r=929 (both my inputs and the output are MPEGs).

    I am getting the error below.

    14:06:50.139 [main] ERROR org.ffmpeg - [mp2 @ 0x7fd54693d000] incomplete frame
    java.lang.RuntimeException: error -1094995529 decoding audio
       at com.xuggle.mediatool.MediaReader.decodeAudio(MediaReader.java:549)
       at com.xuggle.mediatool.MediaReader.readPacket(MediaReader.java:469)
       at com.tav.factory.video.XuggleMediaCreator.concatenateAllVideos(XuggleMediaCreator.java:271)
       at com.tav.factory.video.XuggleMediaCreator.main(XuggleMediaCreator.java:446)

    Can anyone help me with this? Thanks in advance.
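
    For reference (an editorial note, not part of the original post): FFmpeg error numbers are negated four-character tags (FFERRTAG in libavutil/error.h), so the code in the stack trace can be decoded by hand:

        #include <cstdio>
        #include <cstdint>

        int main() {
            // the value from the stack trace above
            int32_t err = -1094995529;
            uint32_t tag = (uint32_t)(-err);   // 0x41444E49
            // the four bytes, low to high, spell the tag name
            std::printf("%c%c%c%c\n", (int)(tag & 0xFF), (int)((tag >> 8) & 0xFF),
                        (int)((tag >> 16) & 0xFF), (int)((tag >> 24) & 0xFF));
            // prints "INDA", i.e. AVERROR_INVALIDDATA: the mp2 decoder was
            // handed an incomplete/corrupt packet, which matches the
            // "incomplete frame" log line above
            return 0;
        }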

    Here is the complete code.

    public String concatenateAllVideos(ArrayList<TAVTextToAVRequest> list){
           String finalPath="";


           String sourceUrl1 = "/Users/SSID/WS/SampleTTS/page2/AV_TAVImage2.mpeg";
           String sourceUrl2 = "/Users/SSID/WS/SampleTTS/page2/AV_TAVImage3.mpeg";
           String destinationUrl = "/Users/SSID/WS/SampleTTS/page2/z_AV_TAVImage_Final23.mpeg";

           out.printf("transcode %s + %s -> %s\n", sourceUrl1, sourceUrl2,
             destinationUrl);

           //////////////////////////////////////////////////////////////////////
           //                                                                  //
           // NOTE: be sure that the audio and video parameters match those of //
           // your input media                                                 //
           //                                                                  //
           //////////////////////////////////////////////////////////////////////

           // video parameters

           final int videoStreamIndex = 0;
           final int videoStreamId = 0;
           final int width = 400;
           final int height = 400;

           // audio parameters

           final int audioStreamIndex = 1;
           final int audioStreamId = 0;
           final int channelCount = 1;
           final int sampleRate = 16000 ; // Hz 16000 44100;

           // create the first media reader

           IMediaReader reader1 = ToolFactory.makeReader(sourceUrl1);

           // create the second media reader

           IMediaReader reader2 = ToolFactory.makeReader(sourceUrl2);

           // create the media concatenator

           MediaConcatenator concatenator = new MediaConcatenator(audioStreamIndex,
             videoStreamIndex);

           // concatenator listens to both readers

           reader1.addListener(concatenator);
           reader2.addListener(concatenator);

           // create the media writer which listens to the concatenator

           IMediaWriter writer = ToolFactory.makeWriter(destinationUrl);
           concatenator.addListener(writer);

           // add the video stream

           writer.addVideoStream(videoStreamIndex, videoStreamId, width, height);

           // add the audio stream

           writer.addAudioStream(audioStreamIndex, audioStreamId, channelCount,sampleRate);

           // read packets from the first source file until done

           try {
               while (reader1.readPacket() == null)
                 ;
           } catch (Exception e) {
               // TODO Auto-generated catch block
               e.printStackTrace();
           }

           // read packets from the second source file until done

           try {
               while (reader2.readPacket() == null)
                 ;
           } catch (Exception e) {
               // TODO Auto-generated catch block
               e.printStackTrace();
           }

           // close the writer

           writer.close();


           return finalPath;
       }

       static class MediaConcatenator extends MediaToolAdapter
         {
           // the current offset

           private long mOffset = 0;

           // the next video timestamp

           private long mNextVideo = 0;

           // the next audio timestamp

           private long mNextAudio = 0;

           // the index of the audio stream

           private final int mAudoStreamIndex;

           // the index of the video stream

           private final int mVideoStreamIndex;

           /**
            * Create a concatenator.
            *
            * @param audioStreamIndex index of audio stream
            * @param videoStreamIndex index of video stream
            */

           public MediaConcatenator(int audioStreamIndex, int videoStreamIndex)
           {
             mAudoStreamIndex = audioStreamIndex;
             mVideoStreamIndex = videoStreamIndex;
           }

           public void onAudioSamples(IAudioSamplesEvent event)
           {
             IAudioSamples samples = event.getAudioSamples();

             // set the new time stamp to the original plus the offset established
             // for this media file

             long newTimeStamp = samples.getTimeStamp() + mOffset;

             // keep track of predicted time of the next audio samples, if the end
             // of the media file is encountered, then the offset will be adjusted
             // to this time.

             mNextAudio = samples.getNextPts();

             // set the new timestamp on audio samples

             samples.setTimeStamp(newTimeStamp);

             // create a new audio samples event with the one true audio stream
             // index

             super.onAudioSamples(new AudioSamplesEvent(this, samples,
               mAudoStreamIndex));
           }

           public void onVideoPicture(IVideoPictureEvent event)
           {
             IVideoPicture picture = event.getMediaData();
             long originalTimeStamp = picture.getTimeStamp();

             // set the new time stamp to the original plus the offset established
             // for this media file

             long newTimeStamp = originalTimeStamp + mOffset;

             // keep track of predicted time of the next video picture, if the end
             // of the media file is encountered, then the offset will be adjusted
             // to this time.
             //
             // You'll note in the audio samples listener above we used
             // a method called getNextPts().  Video pictures don't have
             // a similar method because frame-rates can be variable, so
             // we don't know.  The minimum thing we do know though (since
             // all media containers require media to have monotonically
             // increasing time stamps), is that the next video timestamp
             // should be at least one tick ahead.  So, we fake it.

             mNextVideo = originalTimeStamp + 1;

             // set the new timestamp on video samples

             picture.setTimeStamp(newTimeStamp);

             // create a new video picture event with the one true video stream
             // index

             super.onVideoPicture(new VideoPictureEvent(this, picture,
               mVideoStreamIndex));
           }

           public void onClose(ICloseEvent event)
           {
             // update the offset by the larger of the next expected audio or video
             // frame time

             mOffset = Math.max(mNextVideo, mNextAudio);

             if (mNextAudio < mNextVideo)
             {
               // In this case we know that there is more video in the
               // last file that we read than audio. Technically you
               // should pad the audio in the output file with enough
               // samples to fill that gap, as many media players (e.g.
               // Quicktime, Microsoft Media Player, MPlayer) actually
               // ignore audio time stamps and just play audio sequentially.
               // If you don't pad, in those players it may look like
               // audio and video is getting out of sync.

               // However kiddies, this is demo code, so that code
               // is left as an exercise for the readers. As a hint,
               // see the IAudioSamples.defaultPtsToSamples(...) methods.
             }
           }

           public void onAddStream(IAddStreamEvent event)
           {
             // overridden to ensure that add stream events are not passed down
             // the tool chain to the writer, which could cause problems
           }

           public void onOpen(IOpenEvent event)
           {
             // overridden to ensure that open events are not passed down the tool
             // chain to the writer, which could cause problems
           }

           public void onOpenCoder(IOpenCoderEvent event)
           {
             // overridden to ensure that open coder events are not passed down the
             // tool chain to the writer, which could cause problems
           }

           public void onCloseCoder(ICloseCoderEvent event)
           {
             // overridden to ensure that close coder events are not passed down the
             // tool chain to the writer, which could cause problems
           }
         }
  • Trying to sync audio/visual using FFMpeg and openAL

    22 August 2013, by user1379811

    Hi, I have been studying the dranger FFmpeg tutorial, which explains how to sync audio and video once you have the frames displaying and the audio playing, which is where I'm at.

    Unfortunately, the tutorial is out of date (Stephen Dranger explained that to me himself) and it also uses SDL, which I'm not using - this is for a BlackBerry 10 application.

    I just cannot make the video frames display at the correct speed (they are just playing very fast) and I have been trying for over a week now - seriously!

    I have 3 threads running - one to read from the stream into audio and video queues, and then 2 threads for audio and video.

    If somebody could scan my relevant code and explain what's happening, you would be a lifesaver.

    The delay (what I pass to usleep(testDelay)) seems to keep increasing, which doesn't seem right to me.
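
    For context (an editorial sketch, not the poster's code; the helper names are hypothetical): in the dranger scheme, each frame's delay should stay near the pts difference between consecutive frames - about 0.0334 s at 29.97 fps, matching the actualpts steps in the log further down - and frame_timer anchors that to the wall clock:

        #include <chrono>

        // Illustrative sketch of the dranger-style delay computation that
        // video_refresh_timer() below is modelled on; names are made up here.
        static double now_seconds() {
            using namespace std::chrono;
            return duration<double>(steady_clock::now().time_since_epoch()).count();
        }

        double compute_delay_ms(double pts, double& last_pts, double& last_delay,
                                double& frame_timer) {
            double delay = pts - last_pts;   // ~0.0334 s between frames at 29.97 fps
            if (delay <= 0 || delay >= 1.0)
                delay = last_delay;          // bad pts: reuse the previous delay
            last_delay = delay;
            last_pts = pts;
            frame_timer += delay;            // ideal wall-clock time to show the frame
            double actual = frame_timer - now_seconds();
            if (actual < 0.010)              // running late: show almost immediately
                actual = 0.010;
            // a steadily growing result means frames are being shown too early,
            // so frame_timer keeps pulling ahead of the wall clock
            return actual * 1000.0;          // milliseconds to wait
        }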

    count = 1;
       MyApp* inst = worker->app;//(VideoUploadFacebook*)arg;
       qDebug() << "\n start loadstream";
       w = new QWaitCondition();
       w2 = new QWaitCondition();
       context = avformat_alloc_context();
       inst->threadStarted = true;
       cout << "start of decoding thread";
       cout.flush();


       av_register_all();
       avcodec_register_all();
       avformat_network_init();
       av_log_set_callback(&log_callback);
       AVInputFormat   *pFormat;
       //const char      device[]     = "/dev/video0";
       const char      formatName[] = "mp4";
       cout << "2start of decoding thread";
       cout.flush();



       if (!(pFormat = av_find_input_format(formatName))) {
           printf("can't find input format %s\n", formatName);
           //return void*;
       }
       //open rtsp
       if(avformat_open_input(&context, inst->capturedUrl.data(), pFormat,NULL) != 0){
           // return ;
           cout << "error opening of decoding thread: " << inst->capturedUrl.data();
           cout.flush();
       }

       cout << "3start of decoding thread";
       cout.flush();
       // av_dump_format(context, 0, inst->capturedUrl.data(), 0);
       /*   if(avformat_find_stream_info(context,NULL) < 0){
           return EXIT_FAILURE;
       }
        */
       //search video stream
       for(int i = 0; i < context->nb_streams; i++){
           if(context->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
               inst->video_stream_index = i;
       }
       cout << "3z start of decoding thread";
       cout.flush();
       AVFormatContext* oc = avformat_alloc_context();
       av_read_play(context);//play RTSP
       AVDictionary *optionsDict = NULL;
       ccontext = context->streams[inst->video_stream_index]->codec;

       inst->audioc = context->streams[1]->codec;

       cout << "4start of decoding thread";
       cout.flush();
       codec = avcodec_find_decoder(ccontext->codec_id);
       ccontext->pix_fmt = PIX_FMT_YUV420P;

       AVCodec* audio_codec = avcodec_find_decoder(inst->audioc->codec_id);
       inst->packet = new AVPacket();
       if (!audio_codec) {
           cout << "audio codec not found\n"; //fflush( stdout );
           exit(1);
       }

       if (avcodec_open2(inst->audioc, audio_codec, NULL) < 0) {
           cout << "could not open codec\n"; //fflush( stdout );
           exit(1);
       }

       if (avcodec_open2(ccontext, codec, &optionsDict) < 0) exit(1);

       cout << "5start of decoding thread";
       cout.flush();
       inst->pic = avcodec_alloc_frame();

       av_init_packet(inst->packet);

       while(av_read_frame(context,inst->packet) >= 0 && &inst->keepGoing)
       {

           if(inst->packet->stream_index == 0){//packet is video

               int check = 0;



               // av_init_packet(inst->packet);
               int result = avcodec_decode_video2(ccontext, inst->pic, &check, inst->packet);

               if(check)
                   break;
           }
       }



       inst->originalVideoWidth = inst->pic->width;
       inst->originalVideoHeight = inst->pic->height;
       float aspect = (float)inst->originalVideoHeight / (float)inst->originalVideoWidth;
       inst->newVideoWidth = inst->originalVideoWidth;
       int newHeight = (int)(inst->newVideoWidth * aspect);
       inst->newVideoHeight = newHeight;//(int)inst->originalVideoHeight / inst->originalVideoWidth * inst->newVideoWidth;// = new height
       int size = avpicture_get_size(PIX_FMT_YUV420P, inst->originalVideoWidth, inst->originalVideoHeight);
       uint8_t* picture_buf = (uint8_t*)(av_malloc(size));
       avpicture_fill((AVPicture *) inst->pic, picture_buf, PIX_FMT_YUV420P, inst->originalVideoWidth, inst->originalVideoHeight);

       picrgb = avcodec_alloc_frame();
       int size2 = avpicture_get_size(PIX_FMT_YUV420P, inst->newVideoWidth, inst->newVideoHeight);
       uint8_t* picture_buf2 = (uint8_t*)(av_malloc(size2));
       avpicture_fill((AVPicture *) picrgb, picture_buf2, PIX_FMT_YUV420P, inst->newVideoWidth, inst->newVideoHeight);



       if(ccontext->pix_fmt != PIX_FMT_YUV420P)
       {
           std::cout << "fmt != 420!!!: " << ccontext->pix_fmt << std::endl;//
           // return (EXIT_SUCCESS);//-1;

       }


       if (inst->createForeignWindow(inst->myForeignWindow->windowGroup(),
               "HelloForeignWindowAppIDqq", 0,
               0, inst->newVideoWidth,
               inst->newVideoHeight)) {

       } else {
           qDebug() << "The ForeginWindow was not properly initialized";
       }




       inst->keepGoing = true;

       inst->img_convert_ctx = sws_getContext(inst->originalVideoWidth, inst->originalVideoHeight, PIX_FMT_YUV420P, inst->newVideoWidth, inst->newVideoHeight,
               PIX_FMT_YUV420P, SWS_BILINEAR, NULL, NULL, NULL);

       is = (VideoState*)av_mallocz(sizeof(VideoState));
       if (!is)
           return NULL;

       is->audioStream = 1;
       is->audio_st = context->streams[1];
       is->audio_buf_size = 0;
       is->audio_buf_index = 0;
       is->videoStream = 0;
       is->video_st = context->streams[0];

       is->frame_timer = (double)av_gettime() / 1000000.0;
       is->frame_last_delay = 40e-3;

       is->av_sync_type = DEFAULT_AV_SYNC_TYPE;
       //av_strlcpy(is->filename, filename, sizeof(is->filename));
       is->iformat = pFormat;
       is->ytop    = 0;
       is->xleft   = 0;

       /* start video display */
       is->pictq_mutex = new QMutex();
       is->pictq_cond  = new QWaitCondition();

       is->subpq_mutex = new QMutex();
       is->subpq_cond  = new QWaitCondition();

       is->video_current_pts_time = av_gettime();


       packet_queue_init(&audioq);

       packet_queue_init(&videoq);
       is->audioq = audioq;
       is->videoq = videoq;
       AVPacket* packet2  = new AVPacket();

       ccontext->get_buffer = our_get_buffer;
       ccontext->release_buffer = our_release_buffer;


       av_init_packet(packet2);
       while(inst->keepGoing)
       {


           if(av_read_frame(context,packet2) < 0 && keepGoing)
           {
               printf("bufferframe Could not read a frame from stream.\n");
               fflush( stdout );


           }else {



               if(packet2->stream_index == 0) {
                   packet_queue_put(&videoq, packet2);
               } else if(packet2->stream_index == 1) {
                   packet_queue_put(&audioq, packet2);
               } else {
                   av_free_packet(packet2);
               }


               if(!videoThreadStarted)
               {
                   videoThreadStarted = true;
                   QThread* thread = new QThread;
                   videoThread = new VideoStreamWorker(this);

                   // Give QThread ownership of Worker Object
                   videoThread->moveToThread(thread);
                   connect(videoThread, SIGNAL(error(QString)), this, SLOT(errorHandler(QString)));
                   QObject::connect(videoThread, SIGNAL(refreshNeeded()), this, SLOT(refreshNeededSlot()));
                   connect(thread, SIGNAL(started()), videoThread, SLOT(doWork()));
                   connect(videoThread, SIGNAL(finished()), thread, SLOT(quit()));
                   connect(videoThread, SIGNAL(finished()), videoThread, SLOT(deleteLater()));
                   connect(thread, SIGNAL(finished()), thread, SLOT(deleteLater()));

                   thread->start();
               }

               if(!audioThreadStarted)
               {
                   audioThreadStarted = true;
                   QThread* thread = new QThread;
                   AudioStreamWorker* videoThread = new AudioStreamWorker(this);

                   // Give QThread ownership of Worker Object
                   videoThread->moveToThread(thread);

                   // Connect videoThread error signal to this errorHandler SLOT.
                   connect(videoThread, SIGNAL(error(QString)), this, SLOT(errorHandler(QString)));

                   // Connects the thread’s started() signal to the process() slot in the videoThread, causing it to start.
                   connect(thread, SIGNAL(started()), videoThread, SLOT(doWork()));
                   connect(videoThread, SIGNAL(finished()), thread, SLOT(quit()));
                   connect(videoThread, SIGNAL(finished()), videoThread, SLOT(deleteLater()));

                   // Make sure the thread object is deleted after execution has finished.
                   connect(thread, SIGNAL(finished()), thread, SLOT(deleteLater()));

                   thread->start();
               }

           }

       } //finished main loop

       int MyApp::video_thread() {
       //VideoState *is = (VideoState *)arg;
       AVPacket pkt1, *packet = &pkt1;
       int len1, frameFinished;

       double pts;
       pic = avcodec_alloc_frame();

       for(;;) {
           if(packet_queue_get(&videoq, packet, 1) < 0) {
               // means we quit getting packets
               break;
           }

           pts = 0;

           global_video_pkt_pts2 = packet->pts;
           // Decode video frame
           len1 =  avcodec_decode_video2(ccontext, pic, &frameFinished, packet);
           if(packet->dts == AV_NOPTS_VALUE
                   && pic->opaque && *(uint64_t*)pic->opaque != AV_NOPTS_VALUE) {
               pts = *(uint64_t *)pic->opaque;
           } else if(packet->dts != AV_NOPTS_VALUE) {
               pts = packet->dts;
           } else {
               pts = 0;
           }
           pts *= av_q2d(is->video_st->time_base);
           // Did we get a video frame?

           if(frameFinished) {
               pts = synchronize_video(is, pic, pts);
               actualPts = pts;
               refreshSlot();
           }
           av_free_packet(packet);
       }
       av_free(pic);
       return 0;
    }


    int MyApp::audio_thread() {
       //VideoState *is = (VideoState *)arg;
       AVPacket pkt1, *packet = &pkt1;
       int len1, frameFinished;
       ALuint source;
       ALenum format = 0;
       //   ALuint frequency;
       ALenum alError;
       ALint val2;
       ALuint buffers[NUM_BUFFERS];
       int dataSize;


       ALCcontext *aContext;
       ALCdevice *device;
       if (!alutInit(NULL, NULL)) {
           // printf(stderr, "init alut error\n");
       }
       device = alcOpenDevice(NULL);
       if (device == NULL) {
           // printf(stderr, "device error\n");
       }

       //Create a context
       aContext = alcCreateContext(device, NULL);
       alcMakeContextCurrent(aContext);
       if(!(aContext)) {
           printf("Could not create the OpenAL context!\n");
           return 0;
       }

       alListener3f(AL_POSITION, 0.0f, 0.0f, 0.0f);

       //ALenum alError;
       if(alGetError() != AL_NO_ERROR) {
           cout << "could not create buffers";
           cout.flush();
           fflush( stdout );
           return 0;
       }
       alGenBuffers(NUM_BUFFERS, buffers);
       alGenSources(1, &source);
       if(alGetError() != AL_NO_ERROR) {
           cout << "after Could not create buffers or the source.\n";
           cout.flush(  );
           return 0;
       }

       int i;
       int indexOfPacket;
       double pts;
       //double pts;
       int n;


       for(i = 0; i < NUM_BUFFERS; i++)
       {
           if(packet_queue_get(&audioq, packet, 1) < 0) {
               // means we quit getting packets
               break;
           }
           cout << "streamindex=audio \n";
           cout.flush(  );
           //printf("before decode  audio\n");
           //fflush( stdout );
           // AVPacket *packet = new AVPacket();//malloc(sizeof(AVPacket*));
           AVFrame *decodedFrame = NULL;
           int gotFrame = 0;
           // AVFrame* decodedFrame;

           if(!decodedFrame) {
               if(!(decodedFrame = avcodec_alloc_frame())) {
                   cout << "Run out of memory, stop the streaming...\n";
                   fflush( stdout );
                   cout.flush();


                   return -2;
               }
           } else {
               avcodec_get_frame_defaults(decodedFrame);
           }

           int  len = avcodec_decode_audio4(audioc, decodedFrame, &gotFrame, packet);
           if(len < 0) {
               cout << "Error while decoding.\n";
               cout.flush(  );

               return -3;
           }
           if(len < 0) {
               /* if error, skip frame */
               is->audio_pkt_size = 0;
               //break;
           }
           is->audio_pkt_data += len;
           is->audio_pkt_size -= len;

           pts = is->audio_clock;
           // *pts_ptr = pts;
           n = 2 * is->audio_st->codec->channels;
           is->audio_clock += (double)packet->size/
                   (double)(n * is->audio_st->codec->sample_rate);
           if(gotFrame) {
               cout << "got audio frame.\n";
               cout.flush(  );
               // We have a buffer ready, send it
               dataSize = av_samples_get_buffer_size(NULL, audioc->channels,
                       decodedFrame->nb_samples, audioc->sample_fmt, 1);

               if(!format) {
                   if(audioc->sample_fmt == AV_SAMPLE_FMT_U8 ||
                           audioc->sample_fmt == AV_SAMPLE_FMT_U8P) {
                       if(audioc->channels == 1) {
                           format = AL_FORMAT_MONO8;
                       } else if(audioc->channels == 2) {
                           format = AL_FORMAT_STEREO8;
                       }
                   } else if(audioc->sample_fmt == AV_SAMPLE_FMT_S16 ||
                           audioc->sample_fmt == AV_SAMPLE_FMT_S16P) {
                       if(audioc->channels == 1) {
                           format = AL_FORMAT_MONO16;
                       } else if(audioc->channels == 2) {
                           format = AL_FORMAT_STEREO16;
                       }
                   }

                   if(!format) {
                       cout << "OpenAL can't open this format of sound.\n";
                       cout.flush(  );

                       return -4;
                   }
               }
               printf("albufferdata audio b4.\n");
               fflush( stdout );
               alBufferData(buffers[i], format, *decodedFrame->data, dataSize, decodedFrame->sample_rate);
               cout << "after albufferdata all buffers \n";
               cout.flush(  );
               av_free_packet(packet);
               //=av_free(packet);
               av_free(decodedFrame);

               if((alError = alGetError()) != AL_NO_ERROR) {
                   printf("Error while buffering.\n");

                   printAlError(alError);
                   return -6;
               }
           }
       }


       cout << "before queue buffers \n";
       cout.flush();
       alSourceQueueBuffers(source, NUM_BUFFERS, buffers);
       cout << "before play.\n";
       cout.flush();
       alSourcePlay(source);
       cout << "after play.\n";
       cout.flush();
       if((alError = alGetError()) != AL_NO_ERROR) {
           cout << "error starting stream.\n";
           cout.flush();
           printAlError(alError);
           return 0;
       }


       // AVPacket *pkt = &is->audio_pkt;

       while(keepGoing)
       {
           while(packet_queue_get(&audioq, packet, 1)  >= 0) {
               // means we quit getting packets

               do {
                   alGetSourcei(source, AL_BUFFERS_PROCESSED, &val2);
                   usleep(SLEEP_BUFFERING);
               } while(val2 <= 0);
               if(alGetError() != AL_NO_ERROR)
               {
                   fprintf(stderr, "Error gettingsource :(\n");
                   return 1;
               }

               while(val2--)
               {



                   ALuint buffer;
                   alSourceUnqueueBuffers(source, 1, &buffer);
                   if(alGetError() != AL_NO_ERROR)
                   {
                       fprintf(stderr, "Error unqueue buffers :(\n");
                       //  return 1;
                   }
                   AVFrame *decodedFrame = NULL;
                   int gotFrame = 0;
                   // AVFrame* decodedFrame;

                   if(!decodedFrame) {
                       if(!(decodedFrame = avcodec_alloc_frame())) {
                           cout << "Run out of memory, stop the streaming...\n";
                           //fflush( stdout );
                           cout.flush();


                           return -2;
                       }
                   } else {
                       avcodec_get_frame_defaults(decodedFrame);
                   }

                   int  len = avcodec_decode_audio4(audioc, decodedFrame, &gotFrame, packet);
                   if(len < 0) {
                       cout << "Error while decoding.\n";
                       cout.flush(  );
                       is->audio_pkt_size = 0;
                       return -3;
                   }

                   is->audio_pkt_data += len;
                   is->audio_pkt_size -= len;
                   if(packet->size <= 0) {
                       /* No data yet, get more frames */
                       //continue;
                   }


                   if(gotFrame) {
                       pts = is->audio_clock;
                       len = synchronize_audio(is, (int16_t *)is->audio_buf,
                               packet->size, pts);
                       is->audio_buf_size = packet->size;
                       pts = is->audio_clock;
                       // *pts_ptr = pts;
                       n = 2 * is->audio_st->codec->channels;
                       is->audio_clock += (double)packet->size /
                               (double)(n * is->audio_st->codec->sample_rate);
                       if(packet->pts != AV_NOPTS_VALUE) {
                           is->audio_clock = av_q2d(is->audio_st->time_base)*packet->pts;
                       }
                       len = av_samples_get_buffer_size(NULL, audioc->channels,
                               decodedFrame->nb_samples, audioc->sample_fmt, 1);
                       alBufferData(buffer, format, *decodedFrame->data, len, decodedFrame->sample_rate);
                       if(alGetError() != AL_NO_ERROR)
                       {
                           fprintf(stderr, "Error buffering :(\n");
                           return 1;
                       }
                       alSourceQueueBuffers(source, 1, &amp;buffer);
                       if(alGetError() != AL_NO_ERROR)
                       {
                           fprintf(stderr, "Error queueing buffers :(\n");
                           return 1;
                       }
                   }





               }

               alGetSourcei(source, AL_SOURCE_STATE, &val2);
               if(val2 != AL_PLAYING)
                   alSourcePlay(source);

           }


           //pic = avcodec_alloc_frame();
       }
       qDebug() << "end audiothread";
       return 1;
    }

    void MyApp::refreshSlot()
    {


       if(true)
       {

           printf("got frame %d, %d\n", pic->width, ccontext->width);
           fflush( stdout );

           sws_scale(img_convert_ctx, (const uint8_t **)pic->data, pic->linesize,
                   0, originalVideoHeight, &picrgb->data[0], &picrgb->linesize[0]);

           printf("rescaled frame %d, %d\n", newVideoWidth, newVideoHeight);
           fflush( stdout );
           //av_free_packet(packet);
           //av_init_packet(packet);

           qDebug() << "waking audio as video finished";
           ////mutex.unlock();
           //mutex2.lock();
           doingVideoFrame = false;
           //doingAudioFrame = false;
           ////mutex2.unlock();


           //mutex2.unlock();
           //w2->wakeAll();
           //w->wakeAll();
           qDebug() << "now woke audio";

           //pic = picrgb;
           uint8_t *srcy = picrgb->data[0];
           uint8_t *srcu = picrgb->data[1];
           uint8_t *srcv = picrgb->data[2];
           printf("got src yuv frame %d\n", &srcy);
           fflush( stdout );
           unsigned char *ptr = NULL;
           screen_get_buffer_property_pv(mScreenPixelBuffer, SCREEN_PROPERTY_POINTER, (void**) &ptr);
           unsigned char *y = ptr;
           unsigned char *u = y + (newVideoHeight * mStride) ;
           unsigned char *v = u + (newVideoHeight * mStride) / 4;
           int i = 0;
           printf("got buffer  picrgbwidth= %d \n", newVideoWidth);
           fflush( stdout );
           for ( i = 0; i < newVideoHeight; i++)
           {
               int doff = i * mStride;
               int soff = i * picrgb->linesize[0];
               memcpy(&y[doff], &srcy[soff], newVideoWidth);
           }

           for ( i = 0; i < newVideoHeight / 2; i++)
           {
               int doff = i * mStride / 2;
               int soff = i * picrgb->linesize[1];
               memcpy(&u[doff], &srcu[soff], newVideoWidth / 2);
           }

           for ( i = 0; i < newVideoHeight / 2; i++)
           {
               int doff = i * mStride / 2;
               int soff = i * picrgb->linesize[2];
               memcpy(&v[doff], &srcv[soff], newVideoWidth / 2);
           }
           printf("before posttoscreen \n");
           fflush( stdout );

           video_refresh_timer();
           qDebug() << "end refreshslot";

       }
       else
       {

       }





    }

    void  MyApp::refreshNeededSlot2()
       {
           printf("blitting to buffer");
           fflush(stdout);

           screen_buffer_t screen_buffer;
           screen_get_window_property_pv(mScreenWindow, SCREEN_PROPERTY_RENDER_BUFFERS, (void**) &screen_buffer);
           int attribs[] = { SCREEN_BLIT_SOURCE_WIDTH, newVideoWidth, SCREEN_BLIT_SOURCE_HEIGHT, newVideoHeight, SCREEN_BLIT_END };
           int res2 = screen_blit(mScreenCtx, screen_buffer, mScreenPixelBuffer, attribs);
           printf("dirty rectangles");
           fflush(stdout);
           int dirty_rects[] = { 0, 0, newVideoWidth, newVideoHeight };
           screen_post_window(mScreenWindow, screen_buffer, 1, dirty_rects, 0);
           printf("done screneposdtwindow");
           fflush(stdout);

       }

    void MyApp::video_refresh_timer() {
       testDelay = 0;
       //  VideoState *is = ( VideoState* )userdata;
       VideoPicture *vp;
       //double pts = 0    ;
       double actual_delay, delay, sync_threshold, ref_clock, diff;

       if(is->video_st) {
           if(false)////is->pictq_size == 0)
           {
               testDelay = 1;
               schedule_refresh(is, 1);
           } else {
               // vp = &is->pictq[is->pictq_rindex];

               delay = actualPts - is->frame_last_pts; /* the pts from last time */
               if(delay <= 0 || delay >= 1.0) {
                   /* if incorrect delay, use previous one */
                   delay = is->frame_last_delay;
               }
               /* save for next time */
               is->frame_last_delay = delay;
               is->frame_last_pts = actualPts;

               is->video_current_pts = actualPts;
               is->video_current_pts_time = av_gettime();
               /* update delay to sync to audio */
               ref_clock = get_audio_clock(is);
               diff = actualPts - ref_clock;

               /* Skip or repeat the frame. Take delay into account
        FFPlay still doesn't "know if this is the best guess." */
               sync_threshold = (delay > AV_SYNC_THRESHOLD) ? delay : AV_SYNC_THRESHOLD;
               if(fabs(diff) < AV_NOSYNC_THRESHOLD) {
                   if(diff <= -sync_threshold) {
                       delay = 0;
                   } else if(diff >= sync_threshold) {
                       delay = 2 * delay;
                   }
               }
               is->frame_timer += delay;
               /* computer the REAL delay */
               actual_delay = is->frame_timer - (av_gettime() / 1000000.0);
               if(actual_delay < 0.010) {
                   /* Really it should skip the picture instead */
                   actual_delay = 0.010;
               }
               testDelay = (int)(actual_delay * 1000 + 0.5);
               schedule_refresh(is, (int)(actual_delay * 1000 + 0.5));
               /* show the picture! */
               //video_display(is);


               // SDL_CondSignal(is->pictq_cond);
               // SDL_UnlockMutex(is->pictq_mutex);
           }
       } else {
           testDelay = 100;
           schedule_refresh(is, 100);

       }
    }

    void MyApp::schedule_refresh(VideoState *is, int delay) {
       qDebug() << "start schedule refresh timer" << delay;
       typeOfEvent = FF_REFRESH_EVENT2;
       w->wakeAll();
       //  SDL_AddTimer(delay,


    }

    I am currently waiting on data in a loop in the following way:

    QMutex mutex;
       mutex.lock();
       while(keepGoing)
       {



           qDebug() << "MAINTHREAD" << testDelay;


           w->wait(&mutex);
           mutex.unlock();
           qDebug() << "MAINTHREAD past wait";

           if(!keepGoing)
           {
               break;
           }
           if(testDelay > 0 && typeOfEvent == FF_REFRESH_EVENT2)
           {
               usleep(testDelay);
               refreshNeededSlot2();
           }
           else if(testDelay > 0 && typeOfEvent == FF_QUIT_EVENT2)
           {
               keepGoing = false;
               exit(0);
               break;
               // usleep(testDelay);
               // refreshNeededSlot2();
           }
           qDebug() << "MAINTHREADend";
           mutex.lock();

       }
       mutex.unlock();
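
    One detail worth flagging in the loop above (an editorial observation, not from the original post): POSIX usleep() takes microseconds, while testDelay is computed in video_refresh_timer() as (int)(actual_delay * 1000 + 0.5), i.e. milliseconds, so usleep(testDelay) would sleep for, say, 123 microseconds rather than 123 ms. A minimal conversion sketch, assuming milliseconds are intended:

        #include <unistd.h>

        // Hypothetical helper: testDelay is in milliseconds, but usleep()
        // wants microseconds. Sleeping 1000x too short would flash frames
        // past and let frame_timer run ever further ahead of the wall
        // clock, which is consistent with the steadily growing delays in
        // the log sample below.
        static void sleep_ms(int ms) {
            usleep(static_cast<useconds_t>(ms) * 1000);
        }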

    Please let me know if I need to provide any more relevant code. I'm sorry my code is untidy - I'm still learning C++ and, as previously mentioned, have been modifying this code for over a week now.

    I have just added a sample of the output I'm seeing from my console printouts - I can't get my head around it (it's almost too complicated for my level of expertise), but when you see the frames being played and the audio playing it's very difficult to give up, especially when it took me a couple of weeks to get to this stage.

    Please give me a hand if you spot the problem.

    MAINTHREAD past wait
    pts after syncvideo= 1073394046
    got frame 640, 640
    start video_refresh_timer
    actualpts = 1.66833
    frame lastpts = 1.63497
    start schedule refresh timer need to delay for 123

    pts after syncvideo= 1073429033
    got frame 640, 640
    MAINTHREAD loop delay before refresh = 123
    start video_refresh_timer
    actualpts = 1.7017
    frame lastpts = 1.66833
    start schedule refresh timer need to delay for 115

    MAINTHREAD past wait
    pts after syncvideo= 1073464021
    got frame 640, 640
    start video_refresh_timer
    actualpts = 1.73507
    frame lastpts = 1.7017
    start schedule refresh timer need to delay for 140

    MAINTHREAD loop delay before refresh = 140
    pts after syncvideo= 1073499008
    got frame 640, 640
    start video_refresh_timer
    actualpts = 1.76843
    frame lastpts = 1.73507
    start schedule refresh timer need to delay for 163

    MAINTHREAD past wait
    pts after syncvideo= 1073533996
    got frame 640, 640
    start video_refresh_timer
    actualpts = 1.8018
    frame lastpts = 1.76843
    start schedule refresh timer need to delay for 188

    MAINTHREAD loop delay before refresh = 188
    pts after syncvideo= 1073568983
    got frame 640, 640
    start video_refresh_timer
    actualpts = 1.83517
    frame lastpts = 1.8018
    start schedule refresh timer need to delay for 246

    MAINTHREAD past wait
    pts after syncvideo= 1073603971
    got frame 640, 640
    start video_refresh_timer
    actualpts = 1.86853
    frame lastpts = 1.83517
    start schedule refresh timer need to delay for 299

    MAINTHREAD loop delay before refresh = 299
    pts after syncvideo= 1073638958
    got frame 640, 640
    start video_refresh_timer
    actualpts = 1.9019
    frame lastpts = 1.86853
    start schedule refresh timer need to delay for 358

    MAINTHREAD past wait
    pts after syncvideo= 1073673946
    got frame 640, 640
    start video_refresh_timer
    actualpts = 1.93527
    frame lastpts = 1.9019
    start schedule refresh timer need to delay for 416

    MAINTHREAD loop delay before refresh = 416
    pts after syncvideo= 1073708933
    got frame 640, 640
    start video_refresh_timer
    actualpts = 1.96863
    frame lastpts = 1.93527
    start schedule refresh timer need to delay for 474

    MAINTHREAD past wait
    pts after syncvideo= 1073742872
    got frame 640, 640
    MAINTHREAD loop delay before refresh = 474
    start video_refresh_timer
    actualpts = 2.002
    frame lastpts = 1.96863
    start schedule refresh timer need to delay for 518

    MAINTHREAD past wait
    pts after syncvideo= 1073760366
    got frame 640, 640
    start video_refresh_timer
    actualpts = 2.03537
    frame lastpts = 2.002
    start schedule refresh timer need to delay for 575

  • No audio encoded with ffmpeg using webm/libvorbis

    15 March 2013, by Craig Lillard

    Having issues getting audio to encode to WebM. I have tried many different methods and it just isn't happening. The commands are printed below before each pass.

    I have tried moving the audio options around, trying different bitrates and different audio options, and have tried it on a couple of different files as well, both of which have audio.

    Encoding these files to MP4 using x264 causes no problems, works just fine and the audio plays, so it appears to be an issue with WebM only. As you can see below, it is a 2-pass encode.

    Thanks for any help you can provide!

    Craig
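
    An editorial aside, not part of the original question: in both commands printed below, -an appears just before -acodec libvorbis, and -an disables audio recording entirely (the "Stream mapping" sections likewise show only the video stream being mapped). Assuming the audio options are meant to take effect, the second pass would be the original command with -an dropped:

        /usr/bin/ffmpeg -i /home/thedirectory/video613268.mov -codec:v libvpx -quality good -vf 'scale=640:360 [scaled];movie=/home/thedirectory/watermarks/w640X360.png [logo];[scaled][logo] overlay' -cpu-used 0 -b:v 500k -aspect 16:9 -qmin 10 -qmax 42 -maxrate:v 500k -bufsize:v 1000k -r:v 24/1 -force_fps -threads 0 -acodec libvorbis -ac 2 -ab 96k -ar 44100 -pass 2 -f webm -y /media/amazons3/webmlg/video613268.mov.webm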

    Webm LG PASS 1...........................




       webm_pass1: /usr/bin/ffmpeg -i /home/thedirectory/video613268.mov  -codec:v libvpx -quality good -vf 'scale=640:360 [scaled];movie=/home/thedirectory/watermarks/w640X360.png [logo];[scaled][logo] overlay' -cpu-used 0 -b:v 500k -aspect 16:9 -qmin 10 -qmax 42 -maxrate:v 500k -bufsize:v 1000k -r:v 25/1 -force_fps -threads 0 -an -acodec libvorbis -ac 2 -ab 96k -ar 44100 -pass 1 -f webm -y /dev/null



       ffmpeg version 0.10.2 Copyright (c) 2000-2012 the FFmpeg developers
         built on Mar 11 2013 14:48:26 with gcc 4.6.2 20111027 (Red Hat 4.6.2-2)
         configuration: --prefix=/usr --libdir=/usr/lib64 --shlibdir=/usr/lib64 --mandir=/usr/share/man --incdir=/usr/include --disable-avisynth --extra-cflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -fPIC' --enable-avfilter --enable-libfaac --enable-libgsm --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libx264 --enable-gpl --enable-nonfree --enable-postproc --enable-pthreads --enable-shared --enable-swscale --enable-vdpau --enable-version3 --enable-x11grab --enable-libvorbis --enable-libvpx
         libavutil      51. 35.100 / 51. 35.100
         libavcodec     53. 61.100 / 53. 61.100
         libavformat    53. 32.100 / 53. 32.100
         libavdevice    53.  4.100 / 53.  4.100
         libavfilter     2. 61.100 /  2. 61.100
         libswscale      2.  1.100 /  2.  1.100
         libswresample   0.  6.100 /  0.  6.100
         libpostproc    52.  0.100 / 52.  0.100
       Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/home/thedirectory/video613268.mov':
         Metadata:
           major_brand     : qt  
           minor_version   : 537199360
           compatible_brands: qt  
           creation_time   : 2013-02-23 20:04:32
         Duration: 00:00:21.02, start: 0.000000, bitrate: 114326 kb/s
           Stream #0:0(eng): Video: mjpeg (jpeg / 0x6765706A), yuvj422p, 1920x1080 [SAR 72:72 DAR 16:9], 112786 kb/s, 29.97 fps, 29.97 tbr, 2997 tbn, 2997 tbc
           Metadata:
             creation_time   : 2013-02-23 20:04:32
             handler_name    : Gestionnaire d'alias Apple
           Stream #0:1(eng): Audio: pcm_s16be (twos / 0x736F7774), 48000 Hz, 2 channels, s16, 1536 kb/s
           Metadata:
             creation_time   : 2013-02-23 20:04:32
             handler_name    : Gestionnaire d'alias Apple
       Incompatible pixel format 'yuvj422p' for codec 'libvpx', auto-selecting format 'yuv420p'
       [buffer @ 0x1f675a0] w:1920 h:1080 pixfmt:yuvj422p tb:1/1000000 sar:1/1 sws_param:
       [movie @ 0x1f799c0] seek_point:0 format_name:(null) file_name:/home/thedirectory/watermarks/w640X360.png stream_index:0
       [overlay @ 0x1f7c2c0] auto-inserting filter 'auto-inserted scale 0' between the filter 'Parsed_movie_1' and the filter 'Parsed_overlay_2'
       [scale @ 0x1f78d40] w:1920 h:1080 fmt:yuvj422p -> w:640 h:360 fmt:yuv420p flags:0x4
       [scale @ 0x1f7cde0] w:640 h:360 fmt:rgba -> w:640 h:360 fmt:yuva420p flags:0x4
       [overlay @ 0x1f7c2c0] main w:640 h:360 fmt:yuv420p overlay x:0 y:0 w:640 h:360 fmt:yuva420p
       [overlay @ 0x1f7c2c0] main_tb:1/1000000 overlay_tb:1/25 -> tb:1/1000000 exact:1
       [libvpx @ 0x1f77ce0] v1.0.0
       Output #0, webm, to '/dev/null':
         Metadata:
           major_brand     : qt  
           minor_version   : 537199360
           compatible_brands: qt  
           creation_time   : 2013-02-23 20:04:32
           encoder         : Lavf53.32.100
           Stream #0:0(eng): Video: vp8, yuv420p, 640x360 [SAR 1:1 DAR 16:9], q=10-42, pass 1, 500 kb/s, 1k tbn, 25 tbc
           Metadata:
             creation_time   : 2013-02-23 20:04:32
             handler_name    : Gestionnaire d'alias Apple
       Stream mapping:
         Stream #0:0 -> #0:0 (mjpeg -> libvpx)
       Press [q] to stop, [?] for help
       frame=  527 fps= 21 q=0.0 Lsize=       0kB time=00:00:00.00 bitrate=   0.0kbits/s dup=0 drop=103    
       video:0kB audio:0kB global headers:0kB muxing overhead -nan%
       Output file is empty, nothing was encoded (check -ss / -t / -frames parameters if used)
       Webm LG PASS 2.......................




       webm_pass2: /usr/bin/ffmpeg -i /home/thedirectory/video613268.mov -codec:v libvpx -quality good -vf 'scale=640:360 [scaled];movie=/home/thedirectory/watermarks/w640X360.png [logo];[scaled][logo] overlay' -cpu-used 0 -b:v 500k  -aspect 16:9  -qmin 10 -qmax 42 -maxrate:v 500k -bufsize:v 1000k -r:v 24/1 -force_fps -threads 0 -an -acodec libvorbis -ac 2 -ab 96k -ar 44100 -pass 2 -f webm -y /media/amazons3/webmlg/video613268.mov.webm



       ffmpeg version 0.10.2 Copyright (c) 2000-2012 the FFmpeg developers
         built on Mar 11 2013 14:48:26 with gcc 4.6.2 20111027 (Red Hat 4.6.2-2)
         configuration: --prefix=/usr --libdir=/usr/lib64 --shlibdir=/usr/lib64 --mandir=/usr/share/man --incdir=/usr/include --disable-avisynth --extra-cflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -fPIC' --enable-avfilter --enable-libfaac --enable-libgsm --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libx264 --enable-gpl --enable-nonfree --enable-postproc --enable-pthreads --enable-shared --enable-swscale --enable-vdpau --enable-version3 --enable-x11grab --enable-libvorbis --enable-libvpx
         libavutil      51. 35.100 / 51. 35.100
         libavcodec     53. 61.100 / 53. 61.100
         libavformat    53. 32.100 / 53. 32.100
         libavdevice    53.  4.100 / 53.  4.100
         libavfilter     2. 61.100 /  2. 61.100
         libswscale      2.  1.100 /  2.  1.100
         libswresample   0.  6.100 /  0.  6.100
         libpostproc    52.  0.100 / 52.  0.100
       Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/home/thedirectory/video613268.mov':
         Metadata:
           major_brand     : qt  
           minor_version   : 537199360
           compatible_brands: qt  
           creation_time   : 2013-02-23 20:04:32
         Duration: 00:00:21.02, start: 0.000000, bitrate: 114326 kb/s
           Stream #0:0(eng): Video: mjpeg (jpeg / 0x6765706A), yuvj422p, 1920x1080 [SAR 72:72 DAR 16:9], 112786 kb/s, 29.97 fps, 29.97 tbr, 2997 tbn, 2997 tbc
           Metadata:
             creation_time   : 2013-02-23 20:04:32
             handler_name    : Gestionnaire d'alias Apple
           Stream #0:1(eng): Audio: pcm_s16be (twos / 0x736F7774), 48000 Hz, 2 channels, s16, 1536 kb/s
           Metadata:
             creation_time   : 2013-02-23 20:04:32
             handler_name    : Gestionnaire d'alias Apple
       Incompatible pixel format 'yuvj422p' for codec 'libvpx', auto-selecting format 'yuv420p'
       [buffer @ 0x1f2a5a0] w:1920 h:1080 pixfmt:yuvj422p tb:1/1000000 sar:1/1 sws_param:
       [movie @ 0x1f3bec0] seek_point:0 format_name:(null) file_name:/home/thedirectory/watermarks/w640X360.png stream_index:0
       [overlay @ 0x1f3f2c0] auto-inserting filter 'auto-inserted scale 0' between the filter 'Parsed_movie_1' and the filter 'Parsed_overlay_2'
       [scale @ 0x1f3c8a0] w:1920 h:1080 fmt:yuvj422p -> w:640 h:360 fmt:yuv420p flags:0x4
       [scale @ 0x1f3fde0] w:640 h:360 fmt:rgba -> w:640 h:360 fmt:yuva420p flags:0x4
       [overlay @ 0x1f3f2c0] main w:640 h:360 fmt:yuv420p overlay x:0 y:0 w:640 h:360 fmt:yuva420p
       [overlay @ 0x1f3f2c0] main_tb:1/1000000 overlay_tb:1/25 -> tb:1/1000000 exact:1
       [libvpx @ 0x1f3ace0] v1.0.0
       Output #0, webm, to '/media/amazons3/webmlg/video613268.mov.webm':
         Metadata:
           major_brand     : qt  
           minor_version   : 537199360
           compatible_brands: qt  
           creation_time   : 2013-02-23 20:04:32
           encoder         : Lavf53.32.100
           Stream #0:0(eng): Video: vp8, yuv420p, 640x360 [SAR 1:1 DAR 16:9], q=10-42, pass 2, 500 kb/s, 1k tbn, 24 tbc
           Metadata:
             creation_time   : 2013-02-23 20:04:32
             handler_name    : Gestionnaire d'alias Apple
       Stream mapping:
         Stream #0:0 -> #0:0 (mjpeg -> libvpx)
       Press [q] to stop, [?] for help
       frame=  506 fps=  7 q=0.0 Lsize=    1610kB time=00:00:21.08 bitrate= 625.8kbits/s dup=0 drop=124    
       video:1389kB audio:0kB global headers:0kB muxing overhead 15.952140%