On other sites (5941)

  • Need to get audio/video structure for automation with ffmpeg or ffprobe

     10 May 2012, by Sapan Doshi

     I need to get the video and audio stream information as separate fields.

     I have tried ffprobe and mediainfo, but they output the full information, which then has to be parsed.

     I think there could be an option where I do not need to parse the full output and can get just the required data, for example:

    $ffprobe -XXX
    audio channels 8

    $ffprobe -YYY
    video_resolution 512x288

     Can anybody help with this?
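
     The -show_entries and -of (output format) options in newer ffprobe builds look like they could do exactly this; an untested sketch, with input.mp4 standing in for the real file:

     $ffprobe -v error -select_streams a:0 -show_entries stream=channels -of default=noprint_wrappers=1:nokey=1 input.mp4
     8

     $ffprobe -v error -select_streams v:0 -show_entries stream=width,height -of csv=s=x:p=0 input.mp4
     512x288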

  • How to seek to a frame while playing video?

     16 March 2012, by sherman

     I am trying to do this:

     void Java_ffvideo_company_com_NativeCalls_seekFrame(
             JNIEnv *env, jobject this, jint time) {

         if (avformat_seek_file(pFormatCtx, videoStream, 0, time, time, AVSEEK_FLAG_ANY) < 0)
             __android_log_print(ANDROID_LOG_DEBUG, "ERROR SEEK->>>",
                     "av_seek_frame failed.");

         avcodec_flush_buffers(pCodecCtx);
     }

     But the application crashes. Without avcodec_flush_buffers, seeking does not work correctly. How can I resolve this issue?
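
     From reading the documentation, I suspect the timestamps passed to avformat_seek_file have to be in the stream's time_base rather than raw milliseconds, and that AVSEEK_FLAG_ANY can land on a non-keyframe. This is an untested sketch of what I plan to try, assuming time is in milliseconds and pFormatCtx, pCodecCtx and videoStream are the same globals as above:

     void Java_ffvideo_company_com_NativeCalls_seekFrame(
             JNIEnv *env, jobject this, jint time) {

         /* convert the millisecond position into the stream's time_base */
         int64_t ts = av_rescale_q(time, (AVRational){1, 1000},
                 pFormatCtx->streams[videoStream]->time_base);

         /* AVSEEK_FLAG_BACKWARD seeks to the previous keyframe so decoding can
            restart cleanly; INT64_MIN/INT64_MAX (from <stdint.h>) leave the
            seek window unconstrained */
         if (avformat_seek_file(pFormatCtx, videoStream, INT64_MIN, ts, INT64_MAX,
                 AVSEEK_FLAG_BACKWARD) < 0)
             __android_log_print(ANDROID_LOG_DEBUG, "ERROR SEEK->>>",
                     "avformat_seek_file failed.");

         /* only flush if the codec context has actually been opened */
         if (pCodecCtx)
             avcodec_flush_buffers(pCodecCtx);
     }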

  • Encoding video using ffmpeg from javacv on Android causes a native code crash

     13 December 2012, by gtcompscientist

     NOTE: I have updated this since originally asking the question to reflect some of what I have learned about loading live camera images into the ffmpeg libraries.

    I am using ffmpeg from javacv compiled for Android to encode/decode video for my application. (Note that originally, I was trying to use ffmpeg-java, but it has some incompatible libraries)

     Original problem: I am currently getting each frame as a Bitmap (just a plain android.graphics.Bitmap) and I can't figure out how to stuff that into the encoder.

     Solution in javacv's ffmpeg: use avpicture_fill(); the format from Android is supposedly YUV420P, though I can't verify this until my encoder issues (below) are fixed.

    avcodec.avpicture_fill((AVPicture)mFrame, picPointer, avutil.PIX_FMT_YUV420P, VIDEO_WIDTH, VIDEO_HEIGHT)

     Problem now: the line that is supposed to actually encode the data crashes the thread. I get a big native code stack trace that I'm unable to understand. Does anybody have a suggestion?

     Here is the code that I am using to instantiate all the ffmpeg libraries:

       avcodec.avcodec_register_all();
       avcodec.avcodec_init();
       avformat.av_register_all();

       mCodec = avcodec.avcodec_find_encoder(avcodec.CODEC_ID_H263);
       if (mCodec == null)
       {
           Logging.Log("Unable to find encoder.");
           return;
       }
       Logging.Log("Found encoder.");

       mCodecCtx = avcodec.avcodec_alloc_context();
       mCodecCtx.bit_rate(300000);
       mCodecCtx.codec(mCodec);
       mCodecCtx.width(VIDEO_WIDTH);
       mCodecCtx.height(VIDEO_HEIGHT);
       mCodecCtx.pix_fmt(avutil.PIX_FMT_YUV420P);
       mCodecCtx.codec_id(avcodec.CODEC_ID_H263);
       mCodecCtx.codec_type(avutil.AVMEDIA_TYPE_VIDEO);
       AVRational ratio = new AVRational();
       ratio.num(1);
       ratio.den(30);
       mCodecCtx.time_base(ratio);
       mCodecCtx.coder_type(1);
       mCodecCtx.flags(mCodecCtx.flags() | avcodec.CODEC_FLAG_LOOP_FILTER);
       mCodecCtx.me_cmp(avcodec.FF_LOSS_CHROMA);
       mCodecCtx.me_method(avcodec.ME_HEX);
       mCodecCtx.me_subpel_quality(6);
       mCodecCtx.me_range(16);
       mCodecCtx.gop_size(30);
       mCodecCtx.keyint_min(10);
       mCodecCtx.scenechange_threshold(40);
       mCodecCtx.i_quant_factor((float) 0.71);
       mCodecCtx.b_frame_strategy(1);
       mCodecCtx.qcompress((float) 0.6);
       mCodecCtx.qmin(10);
       mCodecCtx.qmax(51);
       mCodecCtx.max_qdiff(4);
       mCodecCtx.max_b_frames(1);
       mCodecCtx.refs(2);
       mCodecCtx.directpred(3);
       mCodecCtx.trellis(1);
       mCodecCtx.flags2(mCodecCtx.flags2() | avcodec.CODEC_FLAG2_BPYRAMID | avcodec.CODEC_FLAG2_WPRED | avcodec.CODEC_FLAG2_8X8DCT | avcodec.CODEC_FLAG2_FASTPSKIP);

       if (avcodec.avcodec_open(mCodecCtx, mCodec) == 0)
       {
           Logging.Log("Unable to open encoder.");
           return;
       }
       Logging.Log("Encoder opened.");

       mFrameSize = avcodec.avpicture_get_size(avutil.PIX_FMT_YUV420P, VIDEO_WIDTH, VIDEO_HEIGHT);
       Logging.Log("Frame size - '" + mFrameSize + "'.");
       //mPic = new AVPicture(mPicSize);
       mFrame = avcodec.avcodec_alloc_frame();
       if (mFrame == null)
       {
           Logging.Log("Unable to alloc frame.");
       }
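
     I am also not sure the avcodec_open check above is right: it is documented to return 0 on success and a negative value on error, so as written a successful open would log "Unable to open encoder." and return, while a failed open would fall through and later encode with an unopened context, which could on its own explain a native crash. Presumably it needs to be:

       // avcodec_open returns 0 on success and a negative value on error
       if (avcodec.avcodec_open(mCodecCtx, mCodec) < 0)
       {
           Logging.Log("Unable to open encoder.");
           return;
       }
       Logging.Log("Encoder opened.");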

     This is what I want to be able to execute next:

       BytePointer picPointer = new BytePointer(data);
       int bBuffSize = mFrameSize;

       BytePointer bBuffer = new BytePointer(bBuffSize);

       int picSize = 0;
       if ((picSize = avcodec.avpicture_fill((AVPicture)mFrame, picPointer, avutil.PIX_FMT_YUV420P, VIDEO_WIDTH, VIDEO_HEIGHT)) <= 0)
       {
           Logging.Log("Couldn't convert preview to AVPicture (" + picSize + ")");
           return;
       }
       Logging.Log("Converted preview to AVPicture (" + picSize + ")");

       VCAP_Package vPackage = new VCAP_Package();

       if (mCodecCtx.isNull())
       {
           Logging.Log("Codec Context is null!");
       }

       //encode the image
       int size = avcodec.avcodec_encode_video(mCodecCtx, bBuffer, bBuffSize, mFrame);

       int totalSize = 0;
       while (size >= 0)
       {
           totalSize += size;
           Logging.Log("Encoded '" + size + "' bytes.");
           //Get any delayed frames
           size = avcodec.avcodec_encode_video(mCodecCtx, bBuffer, bBuffSize, null);
       }
       Logging.Log("Finished encoding. (" + totalSize + ")");

     But, as of now, I don't know how to get the Bitmap data into the right place, or whether I have that set up correctly; my current guess at a conversion is sketched after the notes below.

     A few notes about the code:
     - VIDEO_WIDTH = 352
     - VIDEO_HEIGHT = 288
     - VIDEO_FPS = 30
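
     This is the kind of conversion I have in mind for filling the buffer that picPointer wraps; bitmapToYuv420p is just a name I made up, the Bitmap is assumed to be ARGB_8888, and I have not verified that the result is exactly what the encoder expects:

       // Untested sketch: pack an ARGB_8888 android.graphics.Bitmap into a
       // planar YUV420P byte[] (all Y, then all U, then all V), which is the
       // layout avpicture_fill() expects for PIX_FMT_YUV420P.
       private byte[] bitmapToYuv420p(android.graphics.Bitmap bmp)
       {
           int w = bmp.getWidth();
           int h = bmp.getHeight();
           int[] argb = new int[w * h];
           bmp.getPixels(argb, 0, w, 0, 0, w, h);

           byte[] yuv = new byte[w * h * 3 / 2];
           int yIndex = 0;
           int uIndex = w * h;
           int vIndex = w * h + (w * h) / 4;

           for (int j = 0; j < h; j++)
           {
               for (int i = 0; i < w; i++)
               {
                   int p = argb[j * w + i];
                   int r = (p >> 16) & 0xFF;
                   int g = (p >> 8) & 0xFF;
                   int b = p & 0xFF;

                   // BT.601 RGB -> YUV conversion
                   int y = (int) ( 0.299 * r + 0.587 * g + 0.114 * b);
                   int u = (int) (-0.169 * r - 0.331 * g + 0.500 * b + 128);
                   int v = (int) ( 0.500 * r - 0.419 * g - 0.081 * b + 128);

                   yuv[yIndex++] = (byte) Math.max(0, Math.min(255, y));

                   // 4:2:0 subsampling: one U and one V sample per 2x2 block
                   if (j % 2 == 0 && i % 2 == 0)
                   {
                       yuv[uIndex++] = (byte) Math.max(0, Math.min(255, u));
                       yuv[vIndex++] = (byte) Math.max(0, Math.min(255, v));
                   }
               }
           }
           return yuv;
       }

     The data handed to new BytePointer(data) above would then be the array this method returns.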