Other articles (87)

  • Encoding and processing into web-friendly formats

    13 April 2011

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in Ogv and WebM (supported by HTML5) and MP4 (supported by both HTML5 and Flash).
    Audio files are encoded in Ogg (supported by HTML5) and MP3 (supported by both HTML5 and Flash).
    Where possible, text is analyzed in order to retrieve the data needed for search-engine indexing, and is then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)

  • User profiles

    12 April 2011

    Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised; it is visible only when the visitor is logged in to the site.
    Users can also edit their profile from their author page; a "Modifier votre profil" link in the navigation is (...)

  • Configuring language support

    15 November 2010

    Accessing the configuration and adding supported languages
    To configure support for new languages, go to the "Administrer" section of the site.
    From there, the navigation menu gives access to a "Gestion des langues" section where support for new languages can be enabled.
    Each newly added language can still be deactivated as long as no object has been created in that language; once one has, it is greyed out in the configuration and (...)

On other sites (11955)

  • Issues with android-ffmpeg on Android Lollipop

    21 May 2015, by Unnati

    I am working on an application which mixes audio and video.

    I am following guardianproject's android-ffmpeg for this purpose. It works fine up to Android KitKat, but the process fails on Android Lollipop.

    Here is my code to run the process:

    ProcessBuilder pb = new ProcessBuilder(cmd);
    pb.directory(fileExec);

    // pb.redirectErrorStream(true);
    Process process = pb.start();

    // any error message?
    StreamGobbler errorGobbler =
            new StreamGobbler(process.getErrorStream(), "ERROR", sc);

    // any output?
    StreamGobbler outputGobbler =
            new StreamGobbler(process.getInputStream(), "OUTPUT", sc);

    // kick them off
    errorGobbler.start();
    outputGobbler.start();

    int exitVal = process.waitFor();
    sc.processComplete(exitVal);
    return exitVal;

    How can I solve this for Lollipop? Are there any additional files that I should include to make it work on Lollipop?
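
    For reference, the code above relies on a StreamGobbler helper and an sc callback object that are not shown. A minimal sketch of such a gobbler (a plain standalone class; the sc argument is omitted, so this is an assumption rather than the guardianproject implementation) could look like this:

      import java.io.BufferedReader;
      import java.io.IOException;
      import java.io.InputStream;
      import java.io.InputStreamReader;

      // Drains a process stream on its own thread so the child process
      // cannot block when its stdout/stderr pipe buffer fills up.
      class StreamGobbler extends Thread {
          private final InputStream stream;
          private final String tag;

          StreamGobbler(InputStream stream, String tag) {
              this.stream = stream;
              this.tag = tag;
          }

          @Override
          public void run() {
              try (BufferedReader reader =
                      new BufferedReader(new InputStreamReader(stream))) {
                  String line;
                  while ((line = reader.readLine()) != null) {
                      System.out.println(tag + "> " + line); // or Log.d(tag, line) on Android
                  }
              } catch (IOException e) {
                  e.printStackTrace();
              }
          }
      }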

  • Upcoming conferences / workshops

    10 September 2010, by silvia

    Lots is happening in open source multimedia land in the next few months. Check out these cool upcoming conferences / workshops / miniconfs… September 29th and 30th, New York: Open Subtitles Design Summit; October 1st and 2nd, New York: Open Video Conference; October 3rd and 4th, New York: Foundations (...)

  • Mix video and audio to mp4 file with ffmpeg but audio doesn't keep in step with video on playback

    28 July 2015, by dragonfly

    I managed to write a program that records video (H.264/AAC) on Android with ffmpeg. The details are as follows:

    1. Implement android.hardware.Camera.PreviewCallback to capture every frame from the camera (a YUV image) and send it to ffmpeg in the JNI layer. The preview setup this callback assumes is sketched after the code below.

      @Override
      public void onPreviewFrame(byte[] data, Camera camera) {
         // Log.d(TAG, "onPreviewFrame");
         if (mIsRecording) {
             // Log.d(TAG, "handlePreviewFrame");
             Parameters param = camera.getParameters();
             Size s = param.getPreviewSize();
             handlePreviewFrame(data, s.width, s.height, mBufSize);
         }
         camera.addCallbackBuffer(mPreviewBuffer);
      }


      private void handlePreviewFrame(byte[] data, int width, int height, int size) {

         if (mFormats == ImageFormat.NV21) {
                 //process the yuv data

         }

         synchronized (mMuxer) {
             //jni api
             mMuxer.putAudioVisualData(mYuvData, size, 0);
         }
      }
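
    For context, onPreviewFrame() above assumes that mPreviewBuffer was allocated and registered with the camera beforehand. Below is a minimal setup sketch; the buffer sizing and the use of setPreviewCallbackWithBuffer() are assumptions for illustration, not code from the original project.

      // Hypothetical preview setup assumed by onPreviewFrame() above.
      // Requires: import android.graphics.ImageFormat; import android.hardware.Camera;
      private void startPreview(Camera camera) {
          Camera.Parameters params = camera.getParameters();
          Camera.Size size = params.getPreviewSize();
          // NV21 is 12 bits per pixel, so one frame is width * height * 1.5 bytes.
          mBufSize = size.width * size.height
                  * ImageFormat.getBitsPerPixel(ImageFormat.NV21) / 8;
          mPreviewBuffer = new byte[mBufSize];
          camera.addCallbackBuffer(mPreviewBuffer);
          camera.setPreviewCallbackWithBuffer(this); // 'this' implements Camera.PreviewCallback
          camera.startPreview();
      }
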
    2. Use android.media.AudioRecord to read PCM data from the microphone and write it to ffmpeg in the JNI layer in a loop. The AudioRecord setup this loop assumes is sketched after the code below.

      while (this.isRecording) {
         int ret = audioRecord.read(tempBuffer, 0, 1024);

         if (ret == AudioRecord.ERROR_INVALID_OPERATION) {
             throw new IllegalStateException(
                     "read() returned AudioRecord.ERROR_INVALID_OPERATION");
         } else if (ret == AudioRecord.ERROR_BAD_VALUE) {
             throw new IllegalStateException("read() returned AudioRecord.ERROR_BAD_VALUE");
         } else if (ret == AudioRecord.ERROR) {
             throw new IllegalStateException("read() returned AudioRecord.ERROR");
         }

         // process the data
         handleAudioData(tempBuffer, ret);
      }

      private void handleAudioData(short[] data, int size)
      {
         // convert to byte[]
         //Log.d("VideoCaptureActivity", "handleAudioData");
         ByteBuffer buffer = ByteBuffer.allocate(data.length * 2);
         buffer.order(ByteOrder.LITTLE_ENDIAN);
         buffer.asShortBuffer().put(data);
         buffer.limit(size * 2);
         byte[] bytes = buffer.array();
         synchronized (muxing) {
             Log.d(TAG, "putAudio Data :" + size * 2);
             muxing.putAudioVisualData(bytes, size * 2, 1);
         }
      }
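
    The read loop above assumes an already-started audioRecord instance and a tempBuffer. A minimal setup sketch follows; the 44.1 kHz mono, 16-bit parameters are assumptions and must match whatever the AAC encoder on the JNI side expects.

      // Hypothetical AudioRecord setup assumed by the read loop above.
      // Requires: import android.media.AudioFormat; import android.media.AudioRecord;
      //           import android.media.MediaRecorder;
      private static final int SAMPLE_RATE = 44100; // assumption; must match the encoder
      private AudioRecord audioRecord;
      private final short[] tempBuffer = new short[1024];

      private void startAudioCapture() {
          int minBuf = AudioRecord.getMinBufferSize(SAMPLE_RATE,
                  AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
          audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC, SAMPLE_RATE,
                  AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT,
                  Math.max(minBuf, 8192));
          audioRecord.startRecording();
      }
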
    3. Mix the audio and video data in the JNI layer. I referred to this example: https://ffmpeg.org/doxygen/trunk/muxing_8c-source.html

    The problem is that the example demonstrates audio and video encoding from dummy source data generated on the fly, whereas I need to encode audio from the microphone and video from the camera.

    I think the reason for my failure is that the pts handling in the example is not applicable to my situation. My AV function code is as follows:

    static int write_video_frame(AVFormatContext *oc, OutputStream *ost, char *data,
           int size) {
       int ret;
       AVCodecContext *c;
       int got_packet = 0;

       c = ost->st->codec;

       AVPacket pkt = { 0 };
       av_init_packet(&pkt);

       if (!video_st.hwcodec) {
           if (ost->zoom) {
               zoom(oc, ost, data);
           } else {
               avpicture_fill((AVPicture*) ost->frame, (const uint8_t *) data,
                       c->pix_fmt, c->width, c->height);
           }
           av_frame_make_writable(ost->frame);
           //ost->frame->pts = ost->next_pts++;
           ost->frame->pts = frame_count;
           /* encode the image */
           //ALOGI("avcodec_encode_video2 start");
           ret = avcodec_encode_video2(c, &pkt, ost->frame, &got_packet);
           //ALOGI("avcodec_encode_video2 end");
           if (ret < 0) {
               ALOGE("Error encoding video frame: %s", av_err2str(ret));
               return -1;
           }
       } else {
           if (size != 0) {
               pkt.data = (uint8_t *) data;
               pkt.size = size;
               pkt.pts = pkt.dts = ost->next_pts++;
               got_packet = 1;
           }
       }

       if (got_packet) {
           //ALOGI("video write_frame start");
           //pkt.pts = (int) timestamp;
           ret = write_frame(oc, &c->time_base, ost->st, &pkt);
           //ALOGI("video write_frame end");
           if (ret < 0) {
               ALOGE("Error while writing video frame: %s", av_err2str(ret));
               return -1;
           }
       }
       frame_count++;
       return 0;
    }





    static int write_audio_frame(AVFormatContext *oc, OutputStream *ost, char *data) {
       AVCodecContext *c;
       AVPacket pkt = { 0 }; // data and size must be 0;
       AVFrame *frame;
       int ret;
       int got_packet;
       int dst_nb_samples;

       av_init_packet(&pkt);
       c = ost->st->codec;

       if (audio_st.speex_echo_cancellation == 1
               && g_audio_echo_play_queue->start_flag == 1) {
           //ALOGI("encode_audio_handler in echo_cancel");
           QUEUE_ITEM* item = Get_Queue_Item(g_audio_echo_play_queue);
           if (item) {
               speex_dsp_echo_play_back((spx_int16_t *) item->data);
               //ALOGI("encode_audio_handler echo_play begin speex_echo_play_back");
               short *echo_processed = (short *) av_malloc(160 * sizeof(short));
               speex_dsp_echo_capture((spx_int16_t *) data, echo_processed);
               memcpy(data, (uint8_t *) echo_processed, 160);
               av_free(echo_processed);
               Free_Queue_Item(item, 1);
           }
       }

       frame = ost->tmp_frame;
       //update pts
       //frame->pts = ost->next_pts;
       //ost->next_pts += frame->nb_samples;
       if (frame) {
           /* convert samples from native format to destination codec format, using the resampler */
           /* compute destination number of samples */
           dst_nb_samples = av_rescale_rnd(
                   swr_get_delay(ost->swr_ctx, c->sample_rate) + frame->nb_samples,
                   c->sample_rate, c->sample_rate, AV_ROUND_UP);

           memcpy(frame->data[0], data, frame->nb_samples * 2);
           //frame->data[0] = data;

           /* when we pass a frame to the encoder, it may keep a reference to it
            * internally;
            * make sure we do not overwrite it here
            */
           ret = av_frame_make_writable(ost->frame);
           if (ret < 0) {
               ALOGE("write_audio_frame av_frame_make_writable ERROR %s",
                       av_err2str(ret));
               return -1;
           }

           /* convert to destination format */
           ret = swr_convert(ost->swr_ctx, ost->frame->data, dst_nb_samples,
                   (const uint8_t **) frame->data, frame->nb_samples);

           if (ret < 0) {
               ALOGI("Error while converting %s", av_err2str(ret));
               return -1;
           }
           frame = ost->frame;
           frame->pts = av_rescale_q(ost->samples_count,
                   (AVRational ) { 1, c->sample_rate }, c->time_base);
           ost->samples_count += dst_nb_samples;
       }
       ret = avcodec_encode_audio2(c, &pkt, frame, &got_packet);

       if (ret < 0) {
           ALOGE("Error encoding audio frame: %s", av_err2str(ret));
           return -1;
       }

       if (got_packet) {
           //pkt.pts = (int) timestamp;

           ret = write_frame(oc, &c->time_base, ost->st, &pkt);
           if (ret < 0) {
               ALOGE("Error while writing audio frame: %s", av_err2str(ret));
               return -1;
           }
       }
       return (frame || got_packet) ? 0 : 1;
    }

    How do I deal with the pts of the video and audio streams in my situation? Can anyone give me some advice?

    Can I ignore the pts provided by ffmpeg, calculate the pts myself in the Java layer, and pass it down to ffmpeg?
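
    As a sketch of that last idea: both streams could be stamped in the Java layer against one shared clock and the resulting values passed down through JNI instead of relying on the frame_count / samples_count counters above. Everything below (the PtsClock name, the 90 kHz video time base, the microsecond clock) is an assumption for illustration, not code from this project.

      // Hypothetical helper: derive pts values in the Java layer from a shared
      // wall clock, so audio and video land on the same timeline before muxing.
      class PtsClock {
          private long startUs = -1;

          private long elapsedUs() {
              long nowUs = System.nanoTime() / 1000;
              if (startUs < 0) startUs = nowUs;
              return nowUs - startUs;
          }

          // pts of a video frame expressed in a 90 kHz time base (assumed time base)
          long videoPts() {
              return elapsedUs() * 90000L / 1000000L;
          }

          // pts of an audio buffer expressed in samples at the given sample rate
          long audioPts(int sampleRate) {
              return elapsedUs() * sampleRate / 1000000L;
          }
      }

    The values would then have to travel through the JNI call (for example as an extra argument to putAudioVisualData) and be set on the corresponding AVFrame before encoding, rescaling into the stream's time base with av_rescale_q as the existing audio path already does.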