Other articles (111)

  • Videos

    21 April 2011, by

    As with "audio" documents, MediaSPIP displays videos whenever possible using the HTML5 video tag.
    One drawback of this tag is that it is not correctly recognized by some browsers (Internet Explorer, to name one) and that each browser natively supports only certain video formats.
    Its main advantage, on the other hand, is native video playback in browsers, which removes the need for Flash and (...)

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

  • Possibility of farm deployment

    12 April 2011, by

    MediaSPIP can be installed as a farm, with a single "core" hosted on a dedicated server and used by a multitude of different sites.
    This makes it possible, for example: to share setup costs between several projects / individuals; to quickly deploy many unique sites; to avoid having to dump all creations into a digital catch-all, as is the case with the large general-public platforms scattered across the (...)

On other sites (11541)

  • Theatrical quality ffmpeg/x264 encoding of a high-motion 1080p video

    2 December 2011, by Ian

    I've been struggling with encoding videos using FFmpeg and x264. The output stutters when played back in QuickTime, while in VLC it shows a lot of compression artifacts at the same places where QuickTime stutters. So it seems like QuickTime is stuttering because it's trying to suppress the corruption/artifacts.

    The videos have a lot of random motion in them, including frames where 75% of the pixels change at a random interval (the video is software-generated, so it's truly pseudo-random). The compression seems to choke in these places, where it's likely detecting a "scene cut" incorrectly. It also seems to choke at regular intervals, where I guess it's inserting a keyframe.

    I've based my encoding preset on the x264-hq preset that comes with FFmpeg. I've tried turning off scene-cut detection and playing with the keyint/g and keyint_min options. Setting g to 1 makes it work, but blows up the file size. I've tried the lossless presets, but they won't play back at all in QuickTime. Oddly, I haven't had any problems when working with a lower-resolution test video (1440x810).

    Here's the preset I have right now, which works, but yields a file roughly 60% larger than what the (non-working) hq preset produces. Is there any way to improve on this? The file size doesn't matter much; I just want something that will play back anywhere and be very high quality (see the sketch after the command below).

    coder=1
    flags=+loop
    cmp=+chroma
    partitions=+parti8x8+parti4x4+partp8x8+partp4x4+partb8x8
    me_method=umh
    subq=8
    me_range=16
    g=1
    keyint_min=1
    sc_threshold=0
    i_qfactor=0.71
    b_strategy=1
    crf=20
    qcomp=0.6
    qmin=20
    qmax=51
    qdiff=4
    bf=16
    refs=4
    trellis=1
    flags2=+dct8x8+wpred+bpyramid+mixed_refs
    wpredp=2
    

    Here's the command:

    ffmpeg \
      -r 60 -i "frame-%06d.tiff" \
      -vcodec libx264 -vpre my_preset \
      -threads 0 \
      -r 60 -an -f mp4 out.mp4
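
    As a hedged aside: with an FFmpeg build recent enough to expose libx264's own presets, roughly the same intent can be expressed without a custom ffpreset file. The command below is only a sketch under that assumption; the -preset/-crf/-pix_fmt values are illustrative, not a tested answer to the question.

    # Sketch: let libx264's "slow" preset choose the analysis options,
    # control quality with CRF, and force 4:2:0 chroma so QuickTime can play it.
    ffmpeg \
      -r 60 -i "frame-%06d.tiff" \
      -c:v libx264 -preset slow -crf 18 \
      -pix_fmt yuv420p \
      -g 60 -sc_threshold 0 \
      -an out.mp4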
    
  • FFMPEG text file

    23 December 2012, by user872387

    I am giving the input file on the command line like this:

    ffmpeg -i catch.flv

    The information about the file, such as resolution, frame rate, and bit rate, is displayed in the terminal. I want this information to be stored in a .txt file. I have tried ffmpeg -i catch.flv > catch.txt. The .txt file was created successfully, but the information is not stored in it. Could someone please help?
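
    For what it's worth, ffmpeg prints that stream summary to stderr rather than stdout, which is why the plain > redirect above produces an empty file. A minimal sketch of the usual workaround (redirecting stderr instead):

    ffmpeg -i catch.flv 2> catch.txt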

  • How to write NALs produced by x264_encoder_encode() using ffmpeg av_interleaved_write_frame()

    21 January 2013, by Haleeq Usman

    I have been trying to produce an "flv" video file in the following sequence:

    av_register_all();

    // Open video file
    if (avformat_open_input(&pFormatCtx, "6.mp4", NULL, NULL) != 0)
       return -1; // Couldn't open file

    // Retrieve stream information
    if (avformat_find_stream_info(pFormatCtx, NULL) < 0)
       return -1; // Couldn't find stream information

    // Dump information about file onto standard error
    av_dump_format(pFormatCtx, 0, "input_file.mp4", 0);

    // Find the first video stream
    videoStream = -1;
    for (i = 0; i < pFormatCtx->nb_streams; i++)
       if (pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
           videoStream = i;
           break;
       }
    if (videoStream == -1)
       return -1; // Didn't find a video stream

    // Get a pointer to the codec context for the video stream
    pCodecCtx = pFormatCtx->streams[videoStream]->codec;

    // Find the decoder for the video stream
    pCodec = avcodec_find_decoder(pCodecCtx->codec_id);
    if (pCodec == NULL) {
       fprintf(stderr, "Unsupported codec!\n");
       return -1; // Codec not found
    }
    // Open codec
    if (avcodec_open2(pCodecCtx, pCodec, NULL) < 0)
       return -1; // Could not open codec

    // Allocate video frame
    pFrame = avcodec_alloc_frame();

    // Allocate an AVFrame structure
    pFrameYUV420 = avcodec_alloc_frame();
    if (pFrameYUV420 == NULL)
       return -1;

    // Determine required buffer size and allocate buffer
    numBytes = avpicture_get_size(pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height);
    buffer = (uint8_t *) av_malloc(numBytes * sizeof(uint8_t));

    // Assign appropriate parts of buffer to image planes in pFrameYUV420
    // Note that pFrameYUV420 is an AVFrame, but AVFrame is a superset of AVPicture
    avpicture_fill((AVPicture *) pFrameYUV420, buffer, pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height);

    // Setup scaler
    img_convert_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height, pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height, pCodecCtx->pix_fmt, SWS_BILINEAR, 0, 0, 0);
    if (img_convert_ctx == NULL) {
       fprintf(stderr, "Cannot initialize the conversion context!\n");
       exit(1);
    }

    // Setup encoder/muxing now
    filename = "output_file.flv";
    fmt = av_guess_format("flv", filename, NULL);
    if (fmt == NULL) {
       printf("Could not guess format.\n");
       return -1;
    }
    /* allocate the output media context */
    oc = avformat_alloc_context();
    if (oc == NULL) {
       printf("could not allocate context.\n");
       return -1;
    }
    oc->oformat = fmt;
    snprintf(oc->filename, sizeof(oc->filename), "%s", filename);

    video_st = NULL;
    if (fmt->video_codec != AV_CODEC_ID_NONE) {
       video_st = add_stream(oc, &video_codec, fmt->video_codec);
    }

    // Let's see some information about our format
    av_dump_format(oc, 0, filename, 1);

    /* open the output file, if needed */
    if (!(fmt->flags & AVFMT_NOFILE)) {
       ret = avio_open(&oc->pb, filename, AVIO_FLAG_WRITE);
       if (ret < 0) {
           fprintf(stderr, "Could not open '%s': %s\n", filename, av_err2str(ret));
           return 1;
       }
    }
    /* Write the stream header, if any. */
    ret = avformat_write_header(oc, NULL);
    if (ret < 0) {
       fprintf(stderr, "Error occurred when opening output file: %s\n", av_err2str(ret));
       return 1;
    }

    // Setup x264 params
    x264_param_t param;
    x264_param_default_preset(&param, "veryfast", "zerolatency");
    param.i_threads = 1;
    param.i_width = video_st->codec->width;
    param.i_height = video_st->codec->height;
    param.i_fps_num = STREAM_FRAME_RATE; // 30 fps, same as video
    param.i_fps_den = 1;
    // Intra refres:
    param.i_keyint_max = STREAM_FRAME_RATE;
    param.b_intra_refresh = 1;
    // Rate control:
    param.rc.i_rc_method = X264_RC_CRF;
    param.rc.f_rf_constant = 25;
    param.rc.f_rf_constant_max = 35;
    // For streaming:
    param.b_repeat_headers = 1;
    param.b_annexb = 1;
    x264_param_apply_profile(&param, "baseline");

    x264_t* encoder = x264_encoder_open(&param);
    x264_picture_t pic_in, pic_out;
    x264_picture_alloc(&pic_in, X264_CSP_I420, video_st->codec->width, video_st->codec->height);

    x264_nal_t* nals;
    int i_nals;

    // The loop:
    // 1. Read frames
    // 2. Decode the frame
    // 3. Attempt to re-encode using x264
    // 4. Write the x264 encoded frame using av_interleaved_write_frame
    while (av_read_frame(pFormatCtx, &packet) >= 0) {
       // Is this a packet from the video stream?
       if (packet.stream_index == videoStream) {
           // Decode video frame
           avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);

           // Did we get a video frame?
           if (frameFinished) {
               sws_scale(img_convert_ctx, pFrame->data, pFrame->linesize, 0, pCodecCtx->height, pic_in.img.plane, pic_in.img.i_stride);
               int frame_size = x264_encoder_encode(encoder, &nals, &i_nals, &pic_in, &pic_out);

               if (frame_size >= 0) {
                   if (i_nals < 0)
                       printf("invalid frame size: %d\n", i_nals);
                   // write out NALs
                   for (i = 0; i < i_nals; i++) {
                       // initalize a packet
                       AVPacket p;
                       av_init_packet(&p);
                       p.data = nals[i].p_payload;
                       p.size = nals[i].i_payload;
                       p.stream_index = video_st->index;
                       p.flags = AV_PKT_FLAG_KEY;
                       p.pts = AV_NOPTS_VALUE;
                       p.dts = AV_NOPTS_VALUE;
                       ret = av_interleaved_write_frame(oc, &p);
                   }
               }
               printf("encoded frame #%d\n", frame_count);
               frame_count++;
           }
       }

       // Free the packet that was allocated by av_read_frame
       av_free_packet(&packet);
    }

    // Now we free up resources used/close codecs, and finally close our program.
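
    For completeness, a hedged sketch of what that cleanup step might look like with the variables used above: flush the frames x264 is still buffering, finalize the container, then release everything. Error handling is omitted, and this is only the general shape, not a tested fix.

    // Flush any frames the encoder is still holding.
    while (x264_encoder_delayed_frames(encoder) > 0) {
        int frame_size = x264_encoder_encode(encoder, &nals, &i_nals, NULL, &pic_out);
        if (frame_size <= 0)
            break;
        // ...write the returned NALs exactly as in the loop above...
    }

    av_write_trailer(oc);                  // finalize the FLV container
    if (!(fmt->flags & AVFMT_NOFILE))
        avio_close(oc->pb);                // close the output file

    x264_picture_clean(&pic_in);
    x264_encoder_close(encoder);
    avcodec_close(pCodecCtx);
    av_free(pFrameYUV420);
    av_free(pFrame);
    avformat_close_input(&pFormatCtx);
    avformat_free_context(oc);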

    Here is the implementation for the add_stream() function:

    /* Add an output stream. */
    static AVStream *add_stream(AVFormatContext *oc, AVCodec **codec, enum AVCodecID codec_id) {
       AVCodecContext *c;
       AVStream *st;
       int r;
       /* find the encoder */
       *codec = avcodec_find_encoder(codec_id);
       if (!(*codec)) {
           fprintf(stderr, "Could not find encoder for '%s'\n",
                   avcodec_get_name(codec_id));
           exit(1);
       }
       st = avformat_new_stream(oc, *codec);
       if (!st) {
           fprintf(stderr, "Could not allocate stream\n");
           exit(1);
       }
       st->id = oc->nb_streams - 1;
       c = st->codec;
       switch ((*codec)->type) {
       case AVMEDIA_TYPE_AUDIO:
           st->id = 1;
           c->sample_fmt = AV_SAMPLE_FMT_FLTP;
           c->bit_rate = 64000;
           c->sample_rate = 44100;
           c->channels = 2;
           break;
       case AVMEDIA_TYPE_VIDEO:
           avcodec_get_context_defaults3(c, *codec);
           c->codec_id = codec_id;
           c->bit_rate = 500*1000;
           //c->rc_min_rate = 500*1000;
           //c->rc_max_rate = 500*1000;
           //c->rc_buffer_size = 500*1000;
           /* Resolution must be a multiple of two. */
           c->width = 1280;
           c->height = 720;
           /* timebase: This is the fundamental unit of time (in seconds) in terms
            * of which frame timestamps are represented. For fixed-fps content,
            * timebase should be 1/framerate and timestamp increments should be
            * identical to 1. */
           c->time_base.den = STREAM_FRAME_RATE;
           c->time_base.num = 1;
           c->gop_size = 12; /* emit one intra frame every twelve frames at most */
           c->pix_fmt = STREAM_PIX_FMT;
           if (c->codec_id == AV_CODEC_ID_MPEG2VIDEO) {
               /* just for testing, we also add B frames */
               c->max_b_frames = 2;
           }
           if (c->codec_id == AV_CODEC_ID_MPEG1VIDEO) {
               /* Needed to avoid using macroblocks in which some coeffs overflow.
                * This does not happen with normal video, it just happens here as
                * the motion of the chroma plane does not match the luma plane. */
               c->mb_decision = 2;
           }
           break;
       default:
           break;
       }
       /* Some formats want stream headers to be separate. */
       if (oc->oformat->flags & AVFMT_GLOBALHEADER)
           c->flags |= CODEC_FLAG_GLOBAL_HEADER;
       return st;
    }

    After the encoding is complete, I check the output file output_file.flv. I notice its size is very large: 101 MB, and it does not play. If I use ffmpeg to decode/encode the input file, then I get an output file about 83 MB in size (which is about the same size as the original .mp4 file used as input). Also, the 83 MB output produced by just using the FFmpeg C API, as opposed to using x264 for the encoding step, plays just fine. Does anyone know where I am going wrong? I have tried researching this for a few days now but with no luck :(. I feel that I am close to making it work, but I just cannot figure out what I am doing wrong. Thank you!
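
    One thing that stands out in the loop above is that every packet is written with pts/dts left at AV_NOPTS_VALUE, every packet is flagged as a keyframe, and each NAL becomes its own packet. Below is a hedged sketch of how the packet writing might look instead; it reuses the variables from the question, introduces a hypothetical int64_t frame counter pts_counter, and is only an illustration of the general idea, not a verified fix. It may also matter that an FLV-style muxer generally expects length-prefixed NAL units with global headers rather than the Annex B start codes produced with param.b_annexb = 1.

    // Before encoding: give x264 a presentation timestamp in its own
    // time base (one tick per frame here, since i_fps_den == 1).
    pic_in.i_pts = pts_counter++;   // pts_counter is a hypothetical int64_t

    int frame_size = x264_encoder_encode(encoder, &nals, &i_nals, &pic_in, &pic_out);
    if (frame_size > 0) {
        // x264 returns one access unit whose NAL payloads are contiguous
        // in memory, so it can be written as a single packet.
        AVPacket p;
        av_init_packet(&p);
        p.data = nals[0].p_payload;
        p.size = frame_size;
        p.stream_index = video_st->index;
        if (pic_out.b_keyframe)
            p.flags |= AV_PKT_FLAG_KEY;          // only mark real keyframes

        // Rescale x264's frame-based timestamps into the stream time base
        // chosen by the muxer in avformat_write_header().
        AVRational x264_tb = { 1, STREAM_FRAME_RATE };
        p.pts = av_rescale_q(pic_out.i_pts, x264_tb, video_st->time_base);
        p.dts = av_rescale_q(pic_out.i_dts, x264_tb, video_st->time_base);

        ret = av_interleaved_write_frame(oc, &p);
    }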