
Other articles (71)

  • Accepted formats

    28 January 2010

    The following commands give information about the formats and codecs supported by the local ffmpeg installation:
    ffmpeg -codecs
    ffmpeg -formats
    Accepted input video formats
    This list is not exhaustive; it highlights the main formats used: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
    Possible output video formats
    To begin with, we (...)

  • Contribute to documentation

    13 avril 2011

    Documentation is vital to the development of improved technical capabilities.
    MediaSPIP welcomes documentation by users as well as developers, including: critique of existing features and functions; articles contributed by developers, administrators, content producers and editors; screenshots to illustrate the above; translations of existing documentation into other languages
    To contribute, register to the project users’ mailing (...)

  • Adding notes and captions to images

    7 February 2011

    To be able to add notes and captions to images, the first step is to install the "Légendes" plugin.
    Once the plugin is activated, you can configure it in the configuration area to change the rights for creating, editing and deleting notes. By default, only site administrators can add notes to images.
    Changes when adding a media item
    When adding a media item of type "image", a new button appears above the preview (...)

On other sites (6887)

  • Can I use "&" in Tcl to make an FFmpeg command run in the background?

    20 November 2017, by M. D. P

    My code is:

    proc a {} {
        exec ffmpeg -f dshow -t 00:00:10 -i "video=Integrated Webcam" c:/test/sample-a.avi &
    }
    a

    The & is not working, as described at this link: https://www.tcl.tk/man/tcl8.5/tutorial/Tcl26.html

    Can anyone provide proper code that will work with Tcl and FFmpeg to capture my video in the background, without using a thread?
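
    For reference, here is a minimal sketch (not from the original post) of how a background exec is usually written in Tcl: the & must be the last word of the exec command, and ffmpeg's stdin and stderr are typically redirected so the backgrounded process cannot get stuck waiting on the console. The -nostdin flag and the log-file path below are assumptions added for illustration.

    # Hypothetical sketch: start ffmpeg in the background from Tcl.
    # Because "&" is the last word given to exec, exec returns immediately
    # and yields the process id of the backgrounded ffmpeg.
    proc capture {} {
        set pid [exec ffmpeg -nostdin -f dshow -t 00:00:10 \
            -i "video=Integrated Webcam" c:/test/sample-a.avi \
            2> c:/test/ffmpeg.log &]
        return $pid
    }
    capture

    If this still does not behave as expected, the stderr log should at least show why ffmpeg exits.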

  • "moov atom not found" when using av_interleaved_write_frame but not avio_write

    9 October 2017, by icStatic

    I am attempting to put together a class that can take arbitrary frames and construct a video from them using the FFmpeg 3.3.3 API. I’ve been struggling to find a good example for this, as the examples still seem to be using deprecated functions, so I’ve attempted to piece this together using the documentation in the headers and by referring to a few GitHub repos that seem to be using the new version.

    If I use av_interleaved_write_frame to write the encoded packets to the output, then ffprobe outputs the following:

    [mov,mp4,m4a,3gp,3g2,mj2 @ 0000000002760120] moov atom not found0
    X:\Diagnostics.mp4: Invalid data found when processing input

    ffplay is unable to play the file generated using this method.

    If I instead swap it out for a call to avio_write, ffprobe outputs:

    Input #0, h264, from 'X:\Diagnostics.mp4':
     Duration: N/A, bitrate: N/A
       Stream #0:0: Video: h264 (Main), yuv420p(progressive), 672x380 [SAR 1:1 DAR 168:95], 25 fps, 25 tbr, 1200k tbn, 50 tbc

    ffplay can mostly play this file until it gets towards the end, when it outputs:

    Input #0, h264, from 'X:\Diagnostics.mp4':    0KB sq=    0B f=0/0
     Duration: N/A, bitrate: N/A
       Stream #0:0: Video: h264 (Main), yuv420p(progressive), 672x380 [SAR 1:1 DAR 168:95], 25 fps, 25 tbr, 1200k tbn, 50 tbc
    [h264 @ 000000000254ef80] error while decoding MB 31 22, bytestream -65
    [h264 @ 000000000254ef80] concealing 102 DC, 102 AC, 102 MV errors in I frame
       nan M-V:    nan fd=   1 aq=    0KB vq=    0KB sq=    0B f=0/0

    VLC cannot play files from either method. The second method’s file displays a single black frame, then hides the video output. The first does not display anything. Neither of them gives a video duration.

    Does anyone have any ideas about what’s happening here? I assume my solution is close to working, as I’m getting a good chunk of valid frames coming through.

    Code:

    void main()
    {
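       // Example driver: write 100 generated frames to Output.mp4, then close the stream.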
       OutputStream Stream( "Output.mp4", 672, 380, 25, true );
       Stream.Initialize();

       int i = 100;
       while( i-- )
       {
           //... Generate a frame

           Stream.WriteFrame( Frame );
       }
       Stream.CloseFile();
    }

    OutputStream::OutputStream( const std::string& Path, unsigned int Width, unsigned int Height, int Framerate, bool IsBGR )
    : Stream()
    , FrameIndex( 0 )
    {
       auto& ID = *m_InternalData;

       ID.Path = Path;

       ID.Width = Width;
       ID.Height= Height;
       ID.Framerate.num = Framerate;
       ID.Framerate.den = 1;

       ID.PixelFormat = IsBGR ? AV_PIX_FMT_BGR24 : AV_PIX_FMT_RGB24;
       ID.CodecID = AV_CODEC_ID_H264;
       ID.CodecTag = 0;

       ID.AspectRatio.num = 1;
       ID.AspectRatio.den = 1;
    }

    CameraStreamError OutputStream::Initialize()
    {
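       // Set up the muxer and H.264 encoder, open the output file, write the container
       // header, and prepare the sws_scale conversion context.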
       av_log_set_callback( &InputStream::LogCallback );
       av_register_all();
       avformat_network_init();

       auto& ID = *m_InternalData;

       av_init_packet( &ID.Packet );

       int Result = avformat_alloc_output_context2( &ID.FormatContext, nullptr, nullptr, ID.Path.c_str() );
       if( Result < 0 || !ID.FormatContext )
       {
           STREAM_ERROR( UnknownError );
       }

       AVCodec* Encoder = avcodec_find_encoder( ID.CodecID );

       if( !Encoder )
       {
           STREAM_ERROR( NoH264Support );
       }

       AVStream* OutStream = avformat_new_stream( ID.FormatContext, Encoder );
       if( !OutStream )
       {
           STREAM_ERROR( UnknownError );
       }

       ID.CodecContext = avcodec_alloc_context3( Encoder );
       if( !ID.CodecContext )
       {
           STREAM_ERROR( NoH264Support );
       }

       ID.CodecContext->time_base = av_inv_q(ID.Framerate);

       {
           AVCodecParameters* CodecParams = OutStream->codecpar;

           CodecParams->width = ID.Width;
           CodecParams->height = ID.Height;
           CodecParams->format = AV_PIX_FMT_YUV420P;
           CodecParams->codec_id = ID.CodecID;
           CodecParams->codec_type = AVMEDIA_TYPE_VIDEO;
           CodecParams->profile = FF_PROFILE_H264_MAIN;
           CodecParams->level = 40;

           Result = avcodec_parameters_to_context( ID.CodecContext, CodecParams );
           if( Result < 0 )
           {
               STREAM_ERROR( EncoderCreationError );
           }
       }

       if( ID.IsVideo )
       {
           ID.CodecContext->width = ID.Width;
           ID.CodecContext->height = ID.Height;
           ID.CodecContext->sample_aspect_ratio = ID.AspectRatio;
           ID.CodecContext->time_base = av_inv_q(ID.Framerate);

           if( Encoder->pix_fmts )
           {
               ID.CodecContext->pix_fmt = Encoder->pix_fmts[0];
           }
           else
           {
               ID.CodecContext->pix_fmt = ID.PixelFormat;
           }
       }
       //Snip

       Result = avcodec_open2( ID.CodecContext, Encoder, nullptr );
       if( Result < 0 )
       {
           STREAM_ERROR( EncoderCreationError );
       }

       Result = avcodec_parameters_from_context( OutStream->codecpar, ID.CodecContext );
       if( Result < 0 )
       {
           STREAM_ERROR( EncoderCreationError );
       }

       if( ID.FormatContext->oformat->flags & AVFMT_GLOBALHEADER )
       {
           ID.CodecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
       }

       OutStream->time_base = ID.CodecContext->time_base;
       OutStream->avg_frame_rate= av_inv_q(OutStream->time_base);

       if( !( ID.FormatContext->oformat->flags & AVFMT_NOFILE ) )
       {
           Result = avio_open( &ID.FormatContext->pb, ID.Path.c_str(), AVIO_FLAG_WRITE );
           if( Result < 0 )
           {
               STREAM_ERROR( FileNotWriteable );
           }
       }

       Result = avformat_write_header( ID.FormatContext, nullptr );
       if( Result < 0 )
       {
           STREAM_ERROR( WriteFailed );
       }

       ID.Output = std::make_unique( ID.CodecContext->width, ID.CodecContext->height, ID.CodecContext->pix_fmt );

       ID.ConversionContext = sws_getCachedContext(
           ID.ConversionContext,
           ID.Width,
           ID.Height,
           ID.PixelFormat,
           ID.CodecContext->width,
           ID.CodecContext->height,
           ID.CodecContext->pix_fmt,
           SWS_BICUBIC,
           NULL,
           NULL,
           NULL );

       return CameraStreamError::Success;
    }

    CameraStreamError OutputStream::WriteFrame( FFMPEG::Frame* Frame )
    {
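       // Convert the incoming frame to the encoder's pixel format, set its pts,
       // and send it to the encoder, draining pending packets as needed.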
       auto& ID = *m_InternalData;

       ID.Output->Prepare();

       int OutputSliceSize = sws_scale( m_InternalData->ConversionContext, Frame->GetFrame()->data, Frame->GetFrame()->linesize, 0, Frame->GetHeight(), ID.Output->GetFrame()->data, ID.Output->GetFrame()->linesize );

       ID.Output->GetFrame()->pts = ID.CodecContext->frame_number;

       int Result = avcodec_send_frame( GetData().CodecContext, ID.Output->GetFrame() );
       if( Result == AVERROR(EAGAIN) )
       {
           CameraStreamError ResultErr = SendAll();
           if( ResultErr != CameraStreamError::Success )
           {
               return ResultErr;
           }
           Result = avcodec_send_frame( GetData().CodecContext, ID.Output->GetFrame() );
       }

       if( Result == 0 )
       {
           CameraStreamError ResultErr = SendAll();
           if( ResultErr != CameraStreamError::Success )
           {
               return ResultErr;
           }
       }

       FrameIndex++;

       return CameraStreamError::Success;
    }

    CameraStreamError OutputStream::SendAll( void )
    {
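       // Receive any packets the encoder has ready, rescale their timestamps to the
       // stream time base, and mux them into the output.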
       auto& ID = *m_InternalData;

       int Result;
       do
       {
           AVPacket TempPacket = {};
           av_init_packet( &TempPacket );

           Result = avcodec_receive_packet( GetData().CodecContext, &TempPacket );
           if( Result == 0 )
           {
               av_packet_rescale_ts( &TempPacket, ID.CodecContext->time_base, ID.FormatContext->streams[0]->time_base );

               TempPacket.stream_index = ID.FormatContext->streams[0]->index;

               //avio_write( ID.FormatContext->pb, TempPacket.data, TempPacket.size );
               Result = av_interleaved_write_frame( ID.FormatContext, &TempPacket );
               if( Result < 0 )
               {
                   STREAM_ERROR( WriteFailed );
               }

               av_packet_unref( &TempPacket );
           }
           else if( Result != AVERROR(EAGAIN) )
           {
               continue;
           }
           else if( Result != AVERROR_EOF )
           {
               break;
           }
           else if( Result < 0 )
           {
               STREAM_ERROR( WriteFailed );
           }
       } while ( Result == 0);

       return CameraStreamError::Success;
    }

    CameraStreamError OutputStream::CloseFile()
    {
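       // Flush the encoder with a null frame, then write the container trailer and close the file.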
       auto& ID = *m_InternalData;

       while( true )
       {
           //Flush
           int Result = avcodec_send_frame( ID.CodecContext, nullptr );
           if( Result == 0 )
           {
               CameraStreamError StrError = SendAll();
               if( StrError != CameraStreamError::Success )
               {
                   return StrError;
               }
           }
           else if( Result == AVERROR_EOF )
           {
               break;
           }
           else
           {
               STREAM_ERROR( WriteFailed );
           }
       }

       int Result = av_write_trailer( ID.FormatContext );
       if( Result < 0 )
       {
           STREAM_ERROR( WriteFailed );
       }

       if( !(ID.FormatContext->oformat->flags& AVFMT_NOFILE) )
       {
           Result = avio_close( ID.FormatContext->pb );
           if( Result < 0 )
           {
               STREAM_ERROR( WriteFailed );
           }
       }

       return CameraStreamError::Success;
    }

    Note that I’ve simplified a few things and inlined a few bits that were elsewhere. I’ve also removed all the shutdown code, as anything that happens after the file is closed is irrelevant.

    Full repo here: https://github.com/IanNorris/Witness (if you clone it, the issue is with the ’Diagnostics’ output; the Output file is fine). There are two hardcoded paths to X:.
