Media (91)

Other articles (54)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • Sites built with MediaSPIP

    2 May 2011

    This page presents some of the sites running MediaSPIP.
    You can of course add your own via the form at the bottom of the page.

On other sites (6396)

  • "moov atom not found" when using av_interleaved_write_frame but not avio_write

    9 October 2017, by icStatic

    I am attempting to put together a class that can take arbitrary frames and construct a video from them using the FFmpeg 3.3.3 API. I’ve been struggling to find a good example of this, as the existing examples still seem to use deprecated functions, so I’ve attempted to piece it together from the documentation in the headers and from a few GitHub repos that appear to use the new version.

    If I use av_interleaved_write_frame to write the encoded packets to the output, ffprobe outputs the following:

    [mov,mp4,m4a,3gp,3g2,mj2 @ 0000000002760120] moov atom not found0
    X:\Diagnostics.mp4: Invalid data found when processing input

    ffplay is unable to play the file generated using this method.

    If I instead swap it out for a call to avio_write, ffprobe outputs:

    Input #0, h264, from 'X:\Diagnostics.mp4':
     Duration: N/A, bitrate: N/A
       Stream #0:0: Video: h264 (Main), yuv420p(progressive), 672x380 [SAR 1:1 DAR 168:95], 25 fps, 25 tbr, 1200k tbn, 50 tbc

    ffplay can mostly play this file until it gets towards the end, when it outputs:

    Input #0, h264, from 'X:\Diagnostics.mp4':    0KB sq=    0B f=0/0
     Duration: N/A, bitrate: N/A
       Stream #0:0: Video: h264 (Main), yuv420p(progressive), 672x380 [SAR 1:1 DAR 168:95], 25 fps, 25 tbr, 1200k tbn, 50 tbc
    [h264 @ 000000000254ef80] error while decoding MB 31 22, bytestream -65
    [h264 @ 000000000254ef80] concealing 102 DC, 102 AC, 102 MV errors in I frame
       nan M-V:    nan fd=   1 aq=    0KB vq=    0KB sq=    0B f=0/0

    VLC cannot play files from either method. The second method’s file displays a single black frame and then hides the video output. The first does not display anything. Neither of them gives a video duration.

    Does anyone have any idea what’s happening here? I assume my solution is close to working, as I’m getting a good chunk of valid frames coming through.
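
    For reference, the container-level call order the FFmpeg 3.x API expects when writing an MP4 is sketched below; the mov/mp4 muxer only writes the moov atom in av_write_trailer, so "moov atom not found" normally means the trailer (or the header) never made it into the file. This is a simplified sketch with encoding and error handling omitted, and the identifier names are placeholders rather than code from the repository below.

     extern "C" {
     #include <libavformat/avformat.h>
     }

     // Simplified sketch only: the container-level sequence for producing an MP4.
     // Encoding and error handling are omitted; all names here are placeholders.
     static void WriteMp4Sketch( AVCodecContext* Codec, AVPacket* Packets, int PacketCount, const char* Path )
     {
         AVFormatContext* Format = nullptr;
         avformat_alloc_output_context2( &Format, nullptr, nullptr, Path );

         AVStream* Stream = avformat_new_stream( Format, nullptr );
         avcodec_parameters_from_context( Stream->codecpar, Codec );
         Stream->time_base = Codec->time_base;

         avio_open( &Format->pb, Path, AVIO_FLAG_WRITE );
         avformat_write_header( Format, nullptr );              // ftyp and the start of mdat

         for( int i = 0; i < PacketCount; ++i )
         {
             av_packet_rescale_ts( &Packets[i], Codec->time_base, Stream->time_base );
             Packets[i].stream_index = Stream->index;
             av_interleaved_write_frame( Format, &Packets[i] ); // routed through the muxer
         }

         av_write_trailer( Format );                            // the moov atom is written here
         avio_closep( &Format->pb );
         avformat_free_context( Format );
     }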

    Code:

     int main()
    {
       OutputStream Stream( "Output.mp4", 672, 380, 25, true );
       Stream.Initialize();

       int i = 100;
       while( i-- )
       {
           //... Generate a frame

           Stream.WriteFrame( Frame );
       }
       Stream.CloseFile();
    }

    OutputStream::OutputStream( const std::string& Path, unsigned int Width, unsigned int Height, int Framerate, bool IsBGR )
    : Stream()
    , FrameIndex( 0 )
    {
       auto& ID = *m_InternalData;

       ID.Path = Path;

       ID.Width = Width;
       ID.Height= Height;
       ID.Framerate.num = Framerate;
       ID.Framerate.den = 1;

       ID.PixelFormat = IsBGR ? AV_PIX_FMT_BGR24 : AV_PIX_FMT_RGB24;
       ID.CodecID = AV_CODEC_ID_H264;
       ID.CodecTag = 0;

       ID.AspectRatio.num = 1;
       ID.AspectRatio.den = 1;
    }

    CameraStreamError OutputStream::Initialize()
    {
       av_log_set_callback( &InputStream::LogCallback );
       av_register_all();
       avformat_network_init();

       auto& ID = *m_InternalData;

       av_init_packet( &ID.Packet );

       int Result = avformat_alloc_output_context2( &ID.FormatContext, nullptr, nullptr, ID.Path.c_str() );
       if( Result < 0 || !ID.FormatContext )
       {
           STREAM_ERROR( UnknownError );
       }

       AVCodec* Encoder = avcodec_find_encoder( ID.CodecID );

       if( !Encoder )
       {
           STREAM_ERROR( NoH264Support );
       }

       AVStream* OutStream = avformat_new_stream( ID.FormatContext, Encoder );
       if( !OutStream )
       {
           STREAM_ERROR( UnknownError );
       }

       ID.CodecContext = avcodec_alloc_context3( Encoder );
       if( !ID.CodecContext )
       {
           STREAM_ERROR( NoH264Support );
       }

       ID.CodecContext->time_base = av_inv_q(ID.Framerate);

       {
           AVCodecParameters* CodecParams = OutStream->codecpar;

           CodecParams->width = ID.Width;
           CodecParams->height = ID.Height;
           CodecParams->format = AV_PIX_FMT_YUV420P;
           CodecParams->codec_id = ID.CodecID;
           CodecParams->codec_type = AVMEDIA_TYPE_VIDEO;
           CodecParams->profile = FF_PROFILE_H264_MAIN;
           CodecParams->level = 40;

           Result = avcodec_parameters_to_context( ID.CodecContext, CodecParams );
           if( Result < 0 )
           {
               STREAM_ERROR( EncoderCreationError );
           }
       }

       if( ID.IsVideo )
       {
           ID.CodecContext->width = ID.Width;
           ID.CodecContext->height = ID.Height;
           ID.CodecContext->sample_aspect_ratio = ID.AspectRatio;
           ID.CodecContext->time_base = av_inv_q(ID.Framerate);

           if( Encoder->pix_fmts )
           {
               ID.CodecContext->pix_fmt = Encoder->pix_fmts[0];
           }
           else
           {
               ID.CodecContext->pix_fmt = ID.PixelFormat;
           }
       }
       //Snip

       Result = avcodec_open2( ID.CodecContext, Encoder, nullptr );
       if( Result < 0 )
       {
           STREAM_ERROR( EncoderCreationError );
       }

       Result = avcodec_parameters_from_context( OutStream->codecpar, ID.CodecContext );
       if( Result < 0 )
       {
           STREAM_ERROR( EncoderCreationError );
       }

       if( ID.FormatContext->oformat->flags & AVFMT_GLOBALHEADER )
       {
           ID.CodecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
       }

       OutStream->time_base = ID.CodecContext->time_base;
       OutStream->avg_frame_rate= av_inv_q(OutStream->time_base);

       if( !( ID.FormatContext->oformat->flags & AVFMT_NOFILE ) )
       {
           Result = avio_open( &ID.FormatContext->pb, ID.Path.c_str(), AVIO_FLAG_WRITE );
           if( Result < 0 )
           {
               STREAM_ERROR( FileNotWriteable );
           }
       }

       Result = avformat_write_header( ID.FormatContext, nullptr );
       if( Result < 0 )
       {
           STREAM_ERROR( WriteFailed );
       }

       ID.Output = std::make_unique( ID.CodecContext->width, ID.CodecContext->height, ID.CodecContext->pix_fmt );

       ID.ConversionContext = sws_getCachedContext(
           ID.ConversionContext,
           ID.Width,
           ID.Height,
           ID.PixelFormat,
           ID.CodecContext->width,
           ID.CodecContext->height,
           ID.CodecContext->pix_fmt,
           SWS_BICUBIC,
           NULL,
           NULL,
           NULL );

       return CameraStreamError::Success;
    }

    CameraStreamError OutputStream::WriteFrame( FFMPEG::Frame* Frame )
    {
       auto& ID = *m_InternalData;

       ID.Output->Prepare();

       int OutputSliceSize = sws_scale( m_InternalData->ConversionContext, Frame->GetFrame()->data, Frame->GetFrame()->linesize, 0, Frame->GetHeight(), ID.Output->GetFrame()->data, ID.Output->GetFrame()->linesize );

       ID.Output->GetFrame()->pts = ID.CodecContext->frame_number;

       int Result = avcodec_send_frame( GetData().CodecContext, ID.Output->GetFrame() );
       if( Result == AVERROR(EAGAIN) )
       {
           CameraStreamError ResultErr = SendAll();
           if( ResultErr != CameraStreamError::Success )
           {
               return ResultErr;
           }
           Result = avcodec_send_frame( GetData().CodecContext, ID.Output->GetFrame() );
       }

       if( Result == 0 )
       {
           CameraStreamError ResultErr = SendAll();
           if( ResultErr != CameraStreamError::Success )
           {
               return ResultErr;
           }
       }

       FrameIndex++;

       return CameraStreamError::Success;
    }

    CameraStreamError OutputStream::SendAll( void )
    {
       auto& ID = *m_InternalData;

       int Result;
       do
       {
           AVPacket TempPacket = {};
           av_init_packet( &TempPacket );

           Result = avcodec_receive_packet( GetData().CodecContext, &TempPacket );
           if( Result == 0 )
           {
               av_packet_rescale_ts( &TempPacket, ID.CodecContext->time_base, ID.FormatContext->streams[0]->time_base );

               TempPacket.stream_index = ID.FormatContext->streams[0]->index;

               //avio_write( ID.FormatContext->pb, TempPacket.data, TempPacket.size );
               Result = av_interleaved_write_frame( ID.FormatContext, &TempPacket );
               if( Result < 0 )
               {
                   STREAM_ERROR( WriteFailed );
               }

               av_packet_unref( &TempPacket );
           }
           else if( Result != AVERROR(EAGAIN) )
           {
               continue;
           }
           else if( Result != AVERROR_EOF )
           {
               break;
           }
           else if( Result < 0 )
           {
               STREAM_ERROR( WriteFailed );
           }
       } while ( Result == 0);

       return CameraStreamError::Success;
    }

    CameraStreamError OutputStream::CloseFile()
    {
       auto& ID = *m_InternalData;

       while( true )
       {
           //Flush
           int Result = avcodec_send_frame( ID.CodecContext, nullptr );
           if( Result == 0 )
           {
               CameraStreamError StrError = SendAll();
               if( StrError != CameraStreamError::Success )
               {
                   return StrError;
               }
           }
           else if( Result == AVERROR_EOF )
           {
               break;
           }
           else
           {
               STREAM_ERROR( WriteFailed );
           }
       }

       int Result = av_write_trailer( ID.FormatContext );
       if( Result < 0 )
       {
           STREAM_ERROR( WriteFailed );
       }

       if( !(ID.FormatContext->oformat->flags& AVFMT_NOFILE) )
       {
           Result = avio_close( ID.FormatContext->pb );
           if( Result < 0 )
           {
               STREAM_ERROR( WriteFailed );
           }
       }

       return CameraStreamError::Success;
    }

    Note: I’ve simplified a few things and inlined a few bits that were elsewhere. I’ve also removed all the shutdown code, as anything that happens after the file is closed is irrelevant.

    Full repo here: https://github.com/IanNorris/Witness. If you clone this, the issue is with the ’Diagnostics’ output; the Output file is fine. There are two hardcoded paths to X:.
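
    For comparison, the send/drain pattern documented for avcodec_send_frame / avcodec_receive_packet (the pattern SendAll and CloseFile above are built around) condenses to roughly the sketch below. EncodeAndWrite and its parameters are hypothetical names used for illustration, not functions from the repository.

     extern "C" {
     #include <libavformat/avformat.h>
     }

     // Condensed sketch of the documented send/receive encoding loop.
     // Pass Frame == nullptr once at the end of the stream to flush the encoder.
     static int EncodeAndWrite( AVCodecContext* Codec, AVFormatContext* Format, AVStream* Stream, AVFrame* Frame )
     {
         int Result = avcodec_send_frame( Codec, Frame );
         if( Result < 0 )
             return Result;

         while( true )
         {
             AVPacket Packet;
             av_init_packet( &Packet );
             Packet.data = nullptr;
             Packet.size = 0;

             Result = avcodec_receive_packet( Codec, &Packet );
             if( Result == AVERROR(EAGAIN) || Result == AVERROR_EOF )
                 return 0;                          // needs more input, or fully drained
             if( Result < 0 )
                 return Result;                     // a genuine error

             av_packet_rescale_ts( &Packet, Codec->time_base, Stream->time_base );
             Packet.stream_index = Stream->index;
             Result = av_interleaved_write_frame( Format, &Packet );
             av_packet_unref( &Packet );
             if( Result < 0 )
                 return Result;
         }
     }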

  • lavf/mp3dec: avoid printing useless message in default log level

    8 March 2016, by Moritz Barsnick
    lavf/mp3dec: avoid printing useless message in default log level
    

    "Skipping 0 bytes of junk" is useless to the user, and essentially
    indicates a NOP. At 0 bytes, this message is now pushed back to
    the verbose log level.

    Signed-off-by: Moritz Barsnick <barsnick@gmx.net>

    • [DH] libavformat/mp3dec.c
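
    A sketch of the kind of conditional the commit above describes is shown below; it is illustrative only, not the actual diff against libavformat/mp3dec.c, and the names report_junk, skipped and offset are placeholders.

     extern "C" {
     #include <libavutil/log.h>
     }
     #include <cinttypes>

     // Illustrative sketch only: when nothing was skipped, demote the message from
     // the default (info) level to verbose so it no longer appears by default.
     static void report_junk( void* log_ctx, int64_t skipped, int64_t offset )
     {
         const int level = skipped > 0 ? AV_LOG_INFO : AV_LOG_VERBOSE;
         av_log( log_ctx, level, "Skipping %" PRId64 " bytes of junk at %" PRId64 ".\n",
                 skipped, offset );
     }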