Other articles (102)

  • Multilang: improving the interface for multilingual blocks

    18 February 2011, by

    Multilang is an extra plugin that is not enabled by default when MediaSPIP is initialized.
    Once activated, MediaSPIP init automatically puts a preconfiguration in place so that the new feature works out of the box. No configuration step is therefore required.

  • Enhancing it visually

    10 April 2011

    MediaSPIP is built on a system of themes and templates ("squelettes"). Templates define where information is placed on the page, establishing a specific use of the platform, while themes provide the overall graphic design.
    Anyone can contribute a new graphic theme or template and make it available to the community.

  • Possibility of farm deployment

    12 April 2011, by

    MediaSPIP can be installed as a farm, with a single "core" hosted on a dedicated server and used by a multitude of different sites.
    This makes it possible, for example: to share setup costs between several projects or individuals; to quickly deploy a multitude of unique sites; to avoid dumping every creation into a digital catch-all, as happens on the big general-public platforms scattered across the (...)

On other sites (8016)

  • Audio plays too fast in encoded file using ffmpeg

    9 April 2014, by elk

    I'm encoding video and audio from a Blackmagic device using the DeckLink API.
    I pass the video and audio frames to ffmpeg to create an HLS stream.
    The problem is that the audio is not synced with the video in the generated files; the audio plays far too fast.

    These two functions are called when new frames arrive.

    -(void)videoFrameArrived:(IDeckLinkVideoInputFrame *)videoFrame {
       void *frameBytes;
       BMDTimeValue frameTime;
       BMDTimeValue frameDuration;

       BOOL hasValidInputSource = (videoFrame->GetFlags() & bmdFrameHasNoInputSource) != 0 ? NO : YES;
       BMDPixelFormat pixelFormat = videoFrame->GetPixelFormat();

       assert(pixelFormat == bmdFormat8BitYUV);
       if(!hasValidInputSource) {
           NSLog(@"invalid input source");
           return;
       }

       if (videoFrame) {
           videoFrame->AddRef();

           AVCodecContext *c;
           c = video_st->codec;

           videoFrame->GetBytes(&frameBytes);
           videoFrame->GetStreamTime(&frameTime, &frameDuration, video_st->time_base.den);

           //Encode the frame
           AVPacket pkt;
           int got_output;
           int size;
           av_init_packet(&pkt);
           pkt.data = NULL;
           pkt.size = 0;

           size = avpicture_fill((AVPicture*)BMPict, (uint8_t *)frameBytes, PIX_FMT_UYVY422, (int)videoFrame->GetWidth(), (int)videoFrame->GetHeight());

           if(size < 0) {
               NSLog(@"Could not fill image");
           }

           //Convert to compatible format
           static struct SwsContext *sws_ctx;
           if (!sws_ctx) {
               if(!sws_isSupportedInput(PIX_FMT_UYVY422) || !sws_isSupportedOutput(pix_fmt)) {
                   NSLog(@"input or output formats not supported by ffmpeg ");
               }

               sws_ctx = sws_getContext((int)videoFrame->GetWidth(), (int)videoFrame->GetHeight(), PIX_FMT_UYVY422, outputWidth, outputHeight, pix_fmt, SWS_BICUBIC, NULL, NULL, NULL);
               if (!sws_ctx) {
                   NSLog(@"Could not initialize the conversion context");
                   exit(1);
               }
           }

           sws_scale(sws_ctx, (const uint8_t * const *)BMPict->data, BMPict->linesize, 0, (int)videoFrame->GetHeight(), pict->data, pict->linesize);

           int64_t pts = frameTime / video_st->time_base.num;

           if (initial_video_pts == AV_NOPTS_VALUE) {
               initial_video_pts = pts;
           }
           pts -= initial_video_pts;


           pict->pts = pts;

           if (avcodec_encode_video2(c, &pkt, pict, &got_output) < 0) {
               NSLog(@"Error encoding video frame\n");
               exit(1);
           }
           if (got_output) {
               if (c->coded_frame->key_frame) {
                   pkt.flags |= AV_PKT_FLAG_KEY;
               }

               pkt.stream_index = video_st->index;

               if(av_interleaved_write_frame(oc, &pkt) != 0) {
                   NSLog(@"failed to write video frame");
               }
           } else {
               NSLog(@"Got no video packet!");
           }

           videoFrame->Release();
       }
    }

    -(void)audioPacketArrived:(IDeckLinkAudioInputPacket *)audioFrame {
       void *audioFrameBytes;

       if (audioFrame) {
           AVCodecContext *c;
           AVPacket pkt;
           int got_packet;
           BMDTimeValue audio_pts;

           av_init_packet(&pkt);

           c = audio_st->codec;

           int audioFramesize = (int)audioFrame->GetSampleFrameCount() * g_audioChannels * (g_audioSampleDepth / 8);
           audioFrame->GetBytes(&audioFrameBytes);
           audioFrame->GetPacketTime(&audio_pts, audio_st->time_base.den);

           if(avcodec_fill_audio_frame(aFrame, c->channels, c->sample_fmt,  (uint8_t*)audioFrameBytes, audioFramesize, 0) < 0) {
               NSLog(@"Could not fill audio frame!");
           }

           pkt.data = NULL;
           pkt.size = 0;

           uint64_t pts = audio_pts / audio_st->time_base.num;

           if (initial_audio_pts == AV_NOPTS_VALUE) {
               initial_audio_pts = pts;
           }

           pts -= initial_audio_pts;

           aFrame->pts = pts;

           if(avcodec_encode_audio2(c, &pkt, aFrame, &got_packet) != 0) {
               NSLog(@"Failed to encode audio!");
           }
           if(got_packet) {
               pkt.stream_index = audio_st->index;

               if(av_interleaved_write_frame(oc, &pkt) != 0) {
                   NSLog(@"failed to write audio packet!");
               }
           }

       }

    }

    The audio and video streams are set up like this:

    static AVStream *add_audio_stream(AVFormatContext *oc, enum AVCodecID codec_id) {
       AVCodecContext *c;
       AVCodec *codec;
       AVStream *st;


       codec = avcodec_find_encoder(codec_id);
       if (!codec) {
           NSLog(@"audio codec not found");
           exit(1);
       }

       st = avformat_new_stream(oc, codec);
       if (!st) {
           NSLog(@"Could not alloc stream");
           exit(1);
       }

       c             = st->codec;
       c->codec_id   = codec_id;
       c->codec_type = AVMEDIA_TYPE_AUDIO;

       if(codec_id == AV_CODEC_ID_AAC) {
           c->strict_std_compliance = -2;
       }

       c->sample_fmt = sample_fmt;
       c->sample_rate = 48000;
       c->channels    = g_audioChannels;
       if(c->channels == 2) {
           c->channel_layout = AV_CH_LAYOUT_STEREO;
       } else if(c->channels == 1) {
           c->channel_layout = AV_CH_LAYOUT_MONO;
       }

       // some formats want stream headers to be separate
       if (oc->oformat->flags & AVFMT_GLOBALHEADER) {
           NSLog(@"Audio requires global headers");
           c->flags |= CODEC_FLAG_GLOBAL_HEADER;
       }

       if (avcodec_open2(c, codec, NULL) < 0) {
           NSLog(@"could not open audio codec");
           exit(1);
       }

       aFrame = av_frame_alloc();

       aFrame->nb_samples     = c->frame_size;
       aFrame->format         = c->sample_fmt;
       aFrame->channel_layout = c->channel_layout;

       return st;
    }

    static AVStream *add_video_stream(AVFormatContext *oc, enum AVCodecID codec_id) {
       AVCodecContext *c;
       AVCodec *codec;
       AVStream *st;


       /* find the video encoder */
       codec = avcodec_find_encoder(codec_id);
       if (!codec) {
           NSLog(@"video codec not found");
           exit(1);
       }

       st = avformat_new_stream(oc, codec);
       if (!st) {
           NSLog(@"Could not alloc stream");
           exit(1);
       }

       c = st->codec;
       avcodec_get_context_defaults3(c, codec);
       c->codec_id = codec_id;

       c->codec_type = AVMEDIA_TYPE_VIDEO;

       AVDictionary *opts = NULL;


       c->width  = outputWidth;
       c->height = outputHeight;

       displayMode->GetFrameRate(&frameRateDuration, &frameRateScale);
       c->time_base.num = (int)frameRateDuration;  
       c->time_base.den = (int)frameRateScale;  

       c->pix_fmt  = pix_fmt;

       if(codec_id == AV_CODEC_ID_H264) {
           av_dict_set(&opts, "preset", "superfast", 0);
           c->profile = FF_PROFILE_H264_MAIN;
           c->max_b_frames = 0;
           c->gop_size = 0;    //0 = Intra only!
           c->qmin = 25; // min quantizer
           c->qmax = 51;
           c->max_qdiff = 4;        
       }
       // some formats want stream headers to be separate
       if (oc->oformat->flags & AVFMT_GLOBALHEADER) {
           NSLog(@"Video requires global headers");
           c->flags |= CODEC_FLAG_GLOBAL_HEADER;
       }


       /* open the codec */
       if (avcodec_open2(c, codec, &opts) < 0) {
           NSLog(@"could not open video codec");
           exit(1);
       }


       BMPict = avcodec_alloc_frame();

       pict = avcodec_alloc_frame(); //Deprecated but works


       if(avpicture_alloc((AVPicture*)BMPict, PIX_FMT_UYVY422, (int)displayMode->GetWidth(), (int)displayMode->GetHeight()) != 0) {
           NSLog(@"Could not allocate picture bmpict!");
       }
       if(avpicture_alloc((AVPicture*)pict, pix_fmt, outputWidth, outputHeight) != 0) {
           NSLog(@"Could not allocate picture pict!");
       }


       return st;
    }

    The pts for audio and video both begin at 0 and increase by 3600, because both streams' time scale is 90000.
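
    A sanity check on those numbers (a hedged aside, not part of the original question): a 3600-tick increment in a 90 kHz time base corresponds to 25 fps (90000 / 3600 = 25), but an AAC frame holds 1024 samples, which at 48 kHz spans only 90000 * 1024 / 48000 = 1920 ticks, and a 3600-tick packet spacing corresponds to 48000 * 3600 / 90000 = 1920 samples per packet, while aFrame->nb_samples is set to c->frame_size (1024 for AAC). So one thing worth checking is whether each DeckLink packet's sample count matches what the encoder frame declares. A common pattern, sketched below, is to derive the audio pts from a running sample count and rescale it explicitly with av_rescale_q (samples_written is a hypothetical counter, not from the question):

     /* Hedged sketch, not the original code: track how many samples have
        been handed to the encoder and rescale that count from the
        1/sample_rate time base to the stream time base. */
     static int64_t samples_written = 0;

     aFrame->pts = av_rescale_q(samples_written,
                                (AVRational){1, c->sample_rate},
                                audio_st->time_base);
     samples_written += aFrame->nb_samples;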

    What have I missed?

  • libx264 encoder video plays too fast

    23 April 2014, by Nick

    I'm trying to write a program that uses libx264 to encode video frames. I've wrapped this code in a small class (see below). I have frames in YUV420 format. libx264 encodes the frames and I save them to a file. I can play the file back in VLC and all of the frames are there, but it plays back at several hundred times the actual frame rate. Currently I am capturing frames at 2.5 FPS, but they play back as if they were recorded at 250 FPS or more. I've tried to change the frame rate with no luck.

    I’ve also tried to set

    _param.b_vfr_input = 1

    and then set the time bases appropriately, but that causes my program to crash. Any ideas? My encoding code is shown below, followed by the output of ffprobe -show_frames.
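
    For reference, a minimal sketch of the VFR configuration x264 expects (the 1/1000 time base and the 2.5 fps capture rate are illustrative assumptions, not taken from the question): with b_vfr_input = 1 the encoder interprets pic_in.i_pts in units of i_timebase_num / i_timebase_den, and the time base must be set before x264_encoder_open:

     // Hedged sketch: VFR setup for x264 (width, height and the other
     // required fields are omitted for brevity).
     x264_param_t p;
     x264_param_default_preset(&p, "veryfast", "zerolatency");
     p.b_vfr_input    = 1;     // honour the pts supplied on input pictures
     p.i_timebase_num = 1;     // time base = 1/1000 s (milliseconds)
     p.i_timebase_den = 1000;
     x264_t *enc = x264_encoder_open(&p);

     // Per frame, at 2.5 fps each frame lasts 400 ms:
     //   pic_in.i_pts = frame_index * 400;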

    Wrapper class:

    x264wrapper::x264wrapper(int width, int height, int fps, int timeBaseNum, int timeBaseDen, int vfr)
    {
       x264_param_default_preset(&_param, "veryfast", "zerolatency");
       _param.i_threads = 1;
       _param.i_width = width;
       _param.i_height = height;
       _param.i_fps_num = fps;
       _param.i_fps_den = 1;
       // Intra refresh:
       _param.i_keyint_max = fps;
       _param.b_intra_refresh = 1;
       //Rate control:
       _param.rc.i_rc_method = X264_RC_CRF;
       //_param.rc.i_rc_method = X264_RC_CQP;
       _param.rc.f_rf_constant = 25;
       _param.rc.f_rf_constant_max = 35;
       //For streaming:
       _param.b_repeat_headers = 1;
       _param.b_annexb = 1;    
       // misc
       _param.b_vfr_input = vfr;
       _param.i_timebase_num = timeBaseNum;
       _param.i_timebase_den = timeBaseDen;

       _param.i_log_level = X264_LOG_DEBUG;

       _encoder = x264_encoder_open(&_param);

       cout << "Timebase " << _param.i_timebase_num << "/" << _param.i_timebase_den << endl;
       cout << "fps " << _param.i_fps_num << "/" << _param.i_fps_den << endl;
       _ticks_per_frame = (int64_t)_param.i_timebase_den * _param.i_fps_den / _param.i_timebase_num / _param.i_fps_num;
       cout << "ticks_per_frame " << _ticks_per_frame << endl;
       int result = x264_picture_alloc(&_pic_in, X264_CSP_I420, width, height);
       if (result != 0)
       {
           cout << "Failed to allocate picture" << endl;
           throw(1);
       }

       _ofs = new ofstream("output.h264", ofstream::out | ofstream::binary);
       _pts = 0;
    }


    x264wrapper::~x264wrapper(void)
    {
       _ofs->close();
    }



    void x264wrapper::encode(uint8_t * buf)
    {
       x264_nal_t* nals;
       int i_nals;
       convertFromBalserToX264(buf);
       _pts += _ticks_per_frame;
       _pic_in.i_pts = _pts;
       x264_picture_t pic_out;
       int frame_size = x264_encoder_encode(_encoder, &nals, &i_nals, &_pic_in, &pic_out);
       if (frame_size >= 0)
       {
           _ofs->write((char*)nals[0].p_payload, frame_size);
       }
       else
       {
           cout << "error: x264_encoder_encode failed" << endl;
       }
    }

    Output of ffprobe -show_frames:

    [FRAME]
    media_type=video
    key_frame=1
    pkt_pts=N/A
    pkt_pts_time=N/A
    pkt_dts=N/A
    pkt_dts_time=N/A
    pkt_duration=48000
    pkt_duration_time=0.040000
    pkt_pos=0
    width=1920
    height=1080
    pix_fmt=yuv420p
    sample_aspect_ratio=N/A
    pict_type=I
    coded_picture_number=0
    display_picture_number=0
    interlaced_frame=0
    top_field_first=0
    repeat_pict=0
    reference=0
    [/FRAME]
    [FRAME]
    media_type=video
    key_frame=0
    pkt_pts=N/A
    pkt_pts_time=N/A
    pkt_dts=N/A
    pkt_dts_time=N/A
    pkt_duration=N/A
    pkt_duration_time=N/A
    pkt_pos=54947
    width=1920
    height=1080
    pix_fmt=yuv420p
    sample_aspect_ratio=N/A
    pict_type=P
    coded_picture_number=1
    display_picture_number=0
    interlaced_frame=0
    top_field_first=0
    repeat_pict=0
    reference=0
    [/FRAME]
    [FRAME]
    media_type=video
    key_frame=0
    pkt_pts=N/A
    pkt_pts_time=N/A
    pkt_dts=N/A
    pkt_dts_time=N/A
    pkt_duration=N/A
    pkt_duration_time=N/A
    pkt_pos=57899
    width=1920
    height=1080
    pix_fmt=yuv420p
    sample_aspect_ratio=N/A
    pict_type=P
    coded_picture_number=2
    display_picture_number=0
    interlaced_frame=0
    top_field_first=0
    repeat_pict=0
    reference=0
    [/FRAME]
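
    One detail the ffprobe output itself shows: pkt_pts and pkt_dts are N/A for every frame, which is expected for a raw Annex-B .h264 elementary stream; the file carries no container timestamps, so players fall back to a default rate. A hedged way to separate the timing question from the encoding question is to remux the raw stream into a container with an explicit input rate (the rate here is an assumption):

     ffmpeg -framerate 2.5 -i output.h264 -c copy output.mp4
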
  • How to re-encode a 120fps (or higher) MP4 from my Samsung Galaxy to 30fps (for example) so it plays on Windows like it does on the Samsung

    21 September 2022, by remark70

    I have some high-fps slow-motion videos (mp4) that I've recorded on my phone, but when I copy them to Windows and play them back, they play at normal speed (or really fast) unless I slow the playback down, which isn't a good result.

    What I'd like to do is re-encode (if that's the right word) the video at a standard fps (such as 30) to get a longer video while keeping all the frames, i.e. a 10-second 120fps clip would end up as a 40-second video at 30fps.
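
    A hedged sketch of one common approach with the ffmpeg command line (file names are placeholders): stretch the video timestamps by the ratio of the two rates, here 120 / 30 = 4, and write a 30fps file; this keeps every frame, so a 10-second clip becomes 40 seconds. Slow-motion audio usually isn't wanted, so it is dropped with -an (it could instead be retimed with the atempo filter):

     ffmpeg -i input.mp4 -vf "setpts=4*PTS" -r 30 -an output.mp4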