Advanced search

Media (10)

Keyword: - Tags - /wav

Other articles (86)

  • Permissions overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors can edit their own information on the authors page

  • Encoding and processing into web-friendly formats

    13 April 2011, by

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in MP4, Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
    Audio files are encoded in MP3 and Ogg (supported by HTML5) and MP3 (supported by Flash).
    Where possible, text is analyzed to extract the data needed for search-engine indexing, and is then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)
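The conversion matrix described above can be approximated with stock FFmpeg. The commands below are illustrative sketches, not MediaSPIP's actual invocations, and the encoder names (libx264, libvpx, libtheora, libmp3lame, libvorbis) assume a typical FFmpeg build that includes them:

```shell
# H.264/AAC MP4 (HTML5 and Flash)
ffmpeg -i input.mov -c:v libx264 -c:a aac output.mp4
# VP8/Vorbis WebM (HTML5)
ffmpeg -i input.mov -c:v libvpx -c:a libvorbis output.webm
# Theora/Vorbis OGV (HTML5)
ffmpeg -i input.mov -c:v libtheora -c:a libvorbis output.ogv
# MP3 and Ogg Vorbis audio
ffmpeg -i input.wav -c:a libmp3lame output.mp3
ffmpeg -i input.wav -c:a libvorbis output.ogg
```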

  • Media-specific libraries and software

    10 December 2010, by

    For correct and optimal operation, several things need to be taken into account.
    After installing apache2, mysql and php5, it is important to install the other required software, whose installation is described in the related links: a set of multimedia libraries (x264, libtheora, libvpx) used for encoding and decoding video and audio, so as to support as many file types as possible (see this tutorial); FFmpeg with the maximum number of decoders and (...)

On other sites (8074)

  • The encoding of ffmpeg does not work on iOS

    25 May 2017, by Deric

    I would like to send an encoded stream using FFmpeg.
    The transfer code below, which re-encodes the input, does not work.
    Before re-encoding, the packets play fine in VLC; the re-encoded packets do not.
    I do not know what is wrong.
    Please help me.

    AVOutputFormat *ofmt = NULL;
    //Input AVFormatContext and Output AVFormatContext
    AVFormatContext *ifmt_ctx = NULL, *ofmt_ctx = NULL;
    AVPacket pkt;
    //const char *in_filename, *out_filename;
    int ret, i;
    int videoindex=-1;
    int frame_index=0;
    int64_t start_time=0;

    av_register_all();
    //Network
    avformat_network_init();
    //Input
    if ((ret = avformat_open_input(&ifmt_ctx, "rtmp://", 0, 0)) < 0) {
       printf( "Could not open input file.");
    }
    if ((ret = avformat_find_stream_info(ifmt_ctx, 0)) < 0) {
       printf( "Failed to retrieve input stream information");
    }


    AVCodecContext *context = NULL;

    for(i=0; i < ifmt_ctx->nb_streams; i++) {
       if(ifmt_ctx->streams[i]->codecpar->codec_type==AVMEDIA_TYPE_VIDEO){

           videoindex=i;

           AVCodecParameters *params = ifmt_ctx->streams[i]->codecpar;
           AVCodec *codec = avcodec_find_decoder(params->codec_id);
           if (codec == NULL)  { return; };

           context = avcodec_alloc_context3(codec);

           if (context == NULL) { return; };

           ret = avcodec_parameters_to_context(context, params);
           if(ret < 0){
               avcodec_free_context(&context);
           }

           context->framerate = av_guess_frame_rate(ifmt_ctx, ifmt_ctx->streams[i], NULL);

           ret = avcodec_open2(context, codec, NULL);
           if(ret < 0) {
               NSLog(@"avcodec open2 error");
               avcodec_free_context(&context);
           }

           break;
       }
    }
    av_dump_format(ifmt_ctx, 0, "rtmp://", 0);

    //Output

    avformat_alloc_output_context2(&ofmt_ctx, NULL, "flv", "rtmp://"); //RTMP
    //avformat_alloc_output_context2(&ofmt_ctx, NULL, "mpegts", out_filename);//UDP

    if (!ofmt_ctx) {
       printf( "Could not create output context\n");
       ret = AVERROR_UNKNOWN;
    }
    ofmt = ofmt_ctx->oformat;
    for (i = 0; i < ifmt_ctx->nb_streams; i++) {
       //Create output AVStream according to input AVStream
       AVStream *in_stream = ifmt_ctx->streams[i];
       AVStream *out_stream = avformat_new_stream(ofmt_ctx, in_stream->codec->codec);
       if (!out_stream) {
           printf( "Failed allocating output stream\n");
           ret = AVERROR_UNKNOWN;
       }

       out_stream->time_base = in_stream->time_base;

       //Copy the settings of AVCodecContext
       ret = avcodec_copy_context(out_stream->codec, in_stream->codec);
       if (ret < 0) {
           printf( "Failed to copy context from input to output stream codec context\n");
       }

       out_stream->codecpar->codec_tag = 0;
       if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER) {
           out_stream->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
       }
    }
    //Dump Format------------------
    av_dump_format(ofmt_ctx, 0, "rtmp://", 1);
    //Open output URL
    if (!(ofmt->flags & AVFMT_NOFILE)) {
       ret = avio_open(&ofmt_ctx->pb, "rtmp://", AVIO_FLAG_WRITE);
       if (ret < 0) {
           printf( "Could not open output URL ");
      }
    }
    //Write file header
    ret = avformat_write_header(ofmt_ctx, NULL);
    if (ret < 0) {
       printf( "Error occurred when opening output URL\n");
    }

    // Encoding
    AVCodec *codec;
    AVCodecContext *c;

    AVStream *video_st = avformat_new_stream(ofmt_ctx, 0);
    if(video_st == NULL){
       NSLog(@"video stream error");
    }
    video_st->time_base.num = 1;
    video_st->time_base.den = 25;


    codec = avcodec_find_encoder(AV_CODEC_ID_H264);
    if(!codec){
       NSLog(@"avcodec find encoder error");
    }

    c = avcodec_alloc_context3(codec);
    if(!c){
       NSLog(@"avcodec alloc context error");
    }


    c->profile = FF_PROFILE_H264_BASELINE;
    c->width = ifmt_ctx->streams[videoindex]->codecpar->width;
    c->height = ifmt_ctx->streams[videoindex]->codecpar->height;
    c->time_base.num = 1;
    c->time_base.den = 25;
    c->bit_rate = 800000;
    //c->time_base = { 1,22 };
    c->pix_fmt = AV_PIX_FMT_YUV420P;
    c->thread_count = 2;
    c->thread_type = 2;

    AVDictionary *param = 0;

    av_dict_set(&param, "preset", "slow", 0);
    av_dict_set(&param, "tune", "zerolatency", 0);

    if (avcodec_open2(c, codec, &param) < 0) {
       fprintf(stderr, "Could not open codec\n");
    }



    AVFrame *pFrame = av_frame_alloc();

    start_time=av_gettime();
    while (1) {

       AVPacket encoded_pkt;

       av_init_packet(&encoded_pkt);
       encoded_pkt.data = NULL;
       encoded_pkt.size = 0;

       AVStream *in_stream, *out_stream;
       //Get an AVPacket
       ret = av_read_frame(ifmt_ctx, &pkt);
       if (ret < 0) {
           break;
       }

       //FIX:No PTS (Example: Raw H.264)
       //Simple Write PTS
       if(pkt.pts==AV_NOPTS_VALUE){
           //Write PTS
           AVRational time_base1=ifmt_ctx->streams[videoindex]->time_base;
           //Duration between 2 frames (us)
           int64_t calc_duration=(double)AV_TIME_BASE/av_q2d(ifmt_ctx->streams[videoindex]->r_frame_rate);
           //Parameters
           pkt.pts=(double)(frame_index*calc_duration)/(double)(av_q2d(time_base1)*AV_TIME_BASE);
           pkt.dts=pkt.pts;
           pkt.duration=(double)calc_duration/(double)(av_q2d(time_base1)*AV_TIME_BASE);
       }
       //Important:Delay
       if(pkt.stream_index==videoindex){
           AVRational time_base=ifmt_ctx->streams[videoindex]->time_base;
           AVRational time_base_q={1,AV_TIME_BASE};
           int64_t pts_time = av_rescale_q(pkt.dts, time_base, time_base_q);
           int64_t now_time = av_gettime() - start_time;
           if (pts_time > now_time) {
               av_usleep(pts_time - now_time);
           }

       }

       in_stream  = ifmt_ctx->streams[pkt.stream_index];
       out_stream = ofmt_ctx->streams[pkt.stream_index];
       /* copy packet */
       //Convert PTS/DTS
       //pkt.pts = av_rescale_q_rnd(pkt.pts, in_stream->time_base, out_stream->time_base, (AVRounding)(AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX));
       //pkt.dts = av_rescale_q_rnd(pkt.dts, in_stream->time_base, out_stream->time_base, (AVRounding)(AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX));
       pkt.duration = av_rescale_q(pkt.duration, in_stream->time_base, out_stream->time_base);
       pkt.pos = -1;

       //Print to Screen
       if(pkt.stream_index==videoindex){
           //printf("Send %8d video frames to output URL\n",frame_index);
           frame_index++;
       }



       // Decode and Encode
       if(pkt.stream_index == videoindex) {

           ret = avcodec_send_packet(context, &pkt);

           if(ret<0){
               NSLog(@"avcode send packet error");
           }

           ret = avcodec_receive_frame(context, pFrame);
           if(ret<0){
               NSLog(@"avcodec receive frame error");
           }

           ret = avcodec_send_frame(c, pFrame);

           if(ret < 0){
               NSLog(@"avcodec send frame - %s", av_err2str(ret));
           }

           ret = avcodec_receive_packet(c, &encoded_pkt);

           if(ret < 0){
               // AVERROR(EAGAIN) here only means the encoder needs more
               // input before it can emit a packet; the empty packet must
               // not be written in that case.
               NSLog(@"avcodec receive packet error");
           }

       }

       //ret = av_write_frame(ofmt_ctx, &pkt);

       // Only the video stream is re-encoded; write only packets the
       // encoder actually produced.
       if (pkt.stream_index == videoindex) {
           encoded_pkt.stream_index = videoindex;
           av_packet_rescale_ts(&encoded_pkt, c->time_base, ofmt_ctx->streams[videoindex]->time_base);

           ret = av_interleaved_write_frame(ofmt_ctx, &encoded_pkt);
           if (ret < 0) {
               printf( "Error muxing packet\n");
               break;
           }
       }

       av_packet_unref(&encoded_pkt);
       av_packet_unref(&pkt);  // av_free_packet() is deprecated

    }
    //Write file trailer
    av_write_trailer(ofmt_ctx);

  • FFMPEG CLI Language Metadata Tagging Output MP4 (Video + Audio) File

    12 June 2017, by DMtd

    Can anyone help me understand why my attempts at tagging an audio track with language metadata using the FFmpeg CLI are failing? I’ve found various pieces of information on the correct syntax for the tag, which I’m still not clear on (so part 1 of my question is: what is the correct syntax?), but even if I get it right (which I may or may not have), I wonder whether FFmpeg is failing to tag my audio track because my source is wrapped/muxed with a video essence. Does this preclude the ability to language-tag?

    Worth noting, attempts have been made from both MP4 and MKV sources (video and audio) to MP4 (video and audio) and M4A, MP4 and AAC audio only outputs with no success.

    Also worth noting, I’ve also tried using the -metadata title tag with no success.

    I am looking for the following metadata to show up in a MediaInfo advanced-mode check:

    Language : en
    Language : English

    Here is my command line:

    ffmpeg -i "input.mkv" -c:v libx264 -level:v 3.0 -b:v 5000k -bufsize 4300k -flags +ildct+ilme -top 1 -x264opts tff=1:colorprim=bt470bg:transfer=bt470m:colormatrix=bt470bg -vf crop=720:576:0:32 -pix_fmt yuv420p -c:a aac -b:a 128k -metadata language="eng" -aspect 4:3 -y "output.mp4"
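A likely issue with the command above is that `-metadata language="eng"` sets container-level metadata; tagging the audio track itself needs a stream specifier. A sketch of the same command with the first audio stream targeted (untested against the poster's sources; `:s:a:0` addresses the first audio stream):

```shell
ffmpeg -i "input.mkv" -c:v libx264 -level:v 3.0 -b:v 5000k -bufsize 4300k \
  -flags +ildct+ilme -top 1 \
  -x264opts tff=1:colorprim=bt470bg:transfer=bt470m:colormatrix=bt470bg \
  -vf crop=720:576:0:32 -pix_fmt yuv420p \
  -c:a aac -b:a 128k \
  -metadata:s:a:0 language=eng \
  -aspect 4:3 -y "output.mp4"
```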

  • videotoolbox: add hwcontext support

    15 May 2017, by wm4
    videotoolbox: add hwcontext support
    

    This adds tons of code for no other benefit than making VideoToolbox
    support conform with the new hwaccel API (using hw_device_ctx and
    hw_frames_ctx).

    Since VideoToolbox decoding does not actually require the user to
    allocate frames, the new code does mostly nothing.

    One benefit is that ffmpeg_videotoolbox.c can be dropped once generic
    hwaccel support for ffmpeg.c is merged from Libav.

    Does not consider VDA or VideoToolbox encoding.

    Fun fact: the frame transfer functions are copied from vaapi, as the
    mapping makes copying generic boilerplate. Mapping itself is not
    exported by the VT code, because I don't know how to test.

    • [DH] doc/APIchanges
    • [DH] libavcodec/vda_vt_internal.h
    • [DH] libavcodec/version.h
    • [DH] libavcodec/videotoolbox.c
    • [DH] libavutil/Makefile
    • [DH] libavutil/hwcontext.c
    • [DH] libavutil/hwcontext.h
    • [DH] libavutil/hwcontext_internal.h
    • [DH] libavutil/hwcontext_videotoolbox.c
    • [DH] libavutil/hwcontext_videotoolbox.h
    • [DH] libavutil/version.h