
Media (91)
-
GetID3 - Additional buttons
9 April 2013
Updated: April 2013
Language: French
Type: Image
-
Core Media Video
4 April 2013
Updated: June 2013
Language: French
Type: Video
-
The Pirate Bay from Belgium
1 April 2013
Updated: April 2013
Language: French
Type: Image
-
Ogg detection bug
22 March 2013
Updated: April 2013
Language: French
Type: Video
-
Example of action buttons for a collaborative collection
27 February 2013
Updated: March 2013
Language: French
Type: Image
-
Example of action buttons for a personal collection
27 February 2013
Updated: February 2013
Language: English
Type: Image
Other articles (112)
-
Personalize by adding your logo, banner, or background image
5 September 2013. Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
-
Writing a news item
21 June 2013. Present changes to your MediaSPIP, or news about your projects, on your MediaSPIP using the news section.
In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
You can customize the news-item creation form.
News-item creation form: for a document of type news, the default fields are: publication date (customize the publication date) (...)
-
User profiles
12 April 2011. Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized, visible only when the visitor is logged in to the site.
Users can edit their profile from their author page; a "Modifier votre profil" (edit your profile) link in the navigation is (...)
On other sites (14278)
-
"moov atom not found" when using av_interleaved_write_frame but not avio_write
9 October 2017, by icStatic
I am attempting to put together a class that takes arbitrary frames and constructs a video from them using the ffmpeg 3.3.3 API. I have struggled to find a good example of this, as the available samples still seem to use deprecated functions, so I have attempted to piece it together from the documentation in the headers and by referring to a few GitHub repos that appear to use the new API.
If I use av_interleaved_write_frame to write the encoded packets to the output, ffprobe reports the following:
[mov,mp4,m4a,3gp,3g2,mj2 @ 0000000002760120] moov atom not found0
X:\Diagnostics.mp4: Invalid data found when processing input
ffplay is unable to play the file generated using this method.
If I instead swap it out for a call to avio_write, ffprobe outputs:
Input #0, h264, from 'X:\Diagnostics.mp4':
Duration: N/A, bitrate: N/A
Stream #0:0: Video: h264 (Main), yuv420p(progressive), 672x380 [SAR 1:1 DAR 168:95], 25 fps, 25 tbr, 1200k tbn, 50 tbc
ffplay can mostly play this file until it gets towards the end, when it outputs:
Input #0, h264, from 'X:\Diagnostics.mp4': 0KB sq= 0B f=0/0
Duration: N/A, bitrate: N/A
Stream #0:0: Video: h264 (Main), yuv420p(progressive), 672x380 [SAR 1:1 DAR 168:95], 25 fps, 25 tbr, 1200k tbn, 50 tbc
[h264 @ 000000000254ef80] error while decoding MB 31 22, bytestream -65
[h264 @ 000000000254ef80] concealing 102 DC, 102 AC, 102 MV errors in I frame
nan M-V: nan fd= 1 aq= 0KB vq= 0KB sq= 0B f=0/0
VLC cannot play the files from either method. The file from the second method displays a single black frame and then hides the video output; the first does not display anything. Neither reports a video duration.
Does anyone have any idea what is happening here? I assume my solution is close to working, as I am getting a good chunk of valid frames through.
Code:
void main()
{
OutputStream Stream( "Output.mp4", 672, 380, 25, true );
Stream.Initialize();
int i = 100;
while( i-- )
{
//... Generate a frame
Stream.WriteFrame( Frame );
}
Stream.CloseFile();
}
OutputStream::OutputStream( const std::string& Path, unsigned int Width, unsigned int Height, int Framerate, bool IsBGR )
: Stream()
, FrameIndex( 0 )
{
auto& ID = *m_InternalData;
ID.Path = Path;
ID.Width = Width;
ID.Height= Height;
ID.Framerate.num = Framerate;
ID.Framerate.den = 1;
ID.PixelFormat = IsBGR ? AV_PIX_FMT_BGR24 : AV_PIX_FMT_RGB24;
ID.CodecID = AV_CODEC_ID_H264;
ID.CodecTag = 0;
ID.AspectRatio.num = 1;
ID.AspectRatio.den = 1;
}
CameraStreamError OutputStream::Initialize()
{
av_log_set_callback( &InputStream::LogCallback );
av_register_all();
avformat_network_init();
auto& ID = *m_InternalData;
av_init_packet( &ID.Packet );
int Result = avformat_alloc_output_context2( &ID.FormatContext, nullptr, nullptr, ID.Path.c_str() );
if( Result < 0 || !ID.FormatContext )
{
STREAM_ERROR( UnknownError );
}
AVCodec* Encoder = avcodec_find_encoder( ID.CodecID );
if( !Encoder )
{
STREAM_ERROR( NoH264Support );
}
AVStream* OutStream = avformat_new_stream( ID.FormatContext, Encoder );
if( !OutStream )
{
STREAM_ERROR( UnknownError );
}
ID.CodecContext = avcodec_alloc_context3( Encoder );
if( !ID.CodecContext )
{
STREAM_ERROR( NoH264Support );
}
ID.CodecContext->time_base = av_inv_q(ID.Framerate);
{
AVCodecParameters* CodecParams = OutStream->codecpar;
CodecParams->width = ID.Width;
CodecParams->height = ID.Height;
CodecParams->format = AV_PIX_FMT_YUV420P;
CodecParams->codec_id = ID.CodecID;
CodecParams->codec_type = AVMEDIA_TYPE_VIDEO;
CodecParams->profile = FF_PROFILE_H264_MAIN;
CodecParams->level = 40;
Result = avcodec_parameters_to_context( ID.CodecContext, CodecParams );
if( Result < 0 )
{
STREAM_ERROR( EncoderCreationError );
}
}
if( ID.IsVideo )
{
ID.CodecContext->width = ID.Width;
ID.CodecContext->height = ID.Height;
ID.CodecContext->sample_aspect_ratio = ID.AspectRatio;
ID.CodecContext->time_base = av_inv_q(ID.Framerate);
if( Encoder->pix_fmts )
{
ID.CodecContext->pix_fmt = Encoder->pix_fmts[0];
}
else
{
ID.CodecContext->pix_fmt = ID.PixelFormat;
}
}
//Snip
Result = avcodec_open2( ID.CodecContext, Encoder, nullptr );
if( Result < 0 )
{
STREAM_ERROR( EncoderCreationError );
}
Result = avcodec_parameters_from_context( OutStream->codecpar, ID.CodecContext );
if( Result < 0 )
{
STREAM_ERROR( EncoderCreationError );
}
if( ID.FormatContext->oformat->flags & AVFMT_GLOBALHEADER )
{
ID.CodecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
}
OutStream->time_base = ID.CodecContext->time_base;
OutStream->avg_frame_rate= av_inv_q(OutStream->time_base);
if( !( ID.FormatContext->oformat->flags & AVFMT_NOFILE ) )
{
Result = avio_open( &ID.FormatContext->pb, ID.Path.c_str(), AVIO_FLAG_WRITE );
if( Result < 0 )
{
STREAM_ERROR( FileNotWriteable );
}
}
Result = avformat_write_header( ID.FormatContext, nullptr );
if( Result < 0 )
{
STREAM_ERROR( WriteFailed );
}
ID.Output = std::make_unique( ID.CodecContext->width, ID.CodecContext->height, ID.CodecContext->pix_fmt );
ID.ConversionContext = sws_getCachedContext(
ID.ConversionContext,
ID.Width,
ID.Height,
ID.PixelFormat,
ID.CodecContext->width,
ID.CodecContext->height,
ID.CodecContext->pix_fmt,
SWS_BICUBIC,
NULL,
NULL,
NULL );
return CameraStreamError::Success;
}
CameraStreamError OutputStream::WriteFrame( FFMPEG::Frame* Frame )
{
auto& ID = *m_InternalData;
ID.Output->Prepare();
int OutputSliceSize = sws_scale( m_InternalData->ConversionContext, Frame->GetFrame()->data, Frame->GetFrame()->linesize, 0, Frame->GetHeight(), ID.Output->GetFrame()->data, ID.Output->GetFrame()->linesize );
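// pts is taken from the codec context's running frame count; with time_base = 1/Framerate this advances by one tick per encoded frame.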
ID.Output->GetFrame()->pts = ID.CodecContext->frame_number;
int Result = avcodec_send_frame( GetData().CodecContext, ID.Output->GetFrame() );
if( Result == AVERROR(EAGAIN) )
{
CameraStreamError ResultErr = SendAll();
if( ResultErr != CameraStreamError::Success )
{
return ResultErr;
}
Result = avcodec_send_frame( GetData().CodecContext, ID.Output->GetFrame() );
}
if( Result == 0 )
{
CameraStreamError ResultErr = SendAll();
if( ResultErr != CameraStreamError::Success )
{
return ResultErr;
}
}
FrameIndex++;
return CameraStreamError::Success;
}
CameraStreamError OutputStream::SendAll( void )
{
auto& ID = *m_InternalData;
int Result;
do
{
AVPacket TempPacket = {};
av_init_packet( &TempPacket );
Result = avcodec_receive_packet( GetData().CodecContext, &TempPacket );
if( Result == 0 )
{
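// Rescale packet timestamps from the codec time base to the output stream's time base before muxing.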
av_packet_rescale_ts( &TempPacket, ID.CodecContext->time_base, ID.FormatContext->streams[0]->time_base );
TempPacket.stream_index = ID.FormatContext->streams[0]->index;
//avio_write( ID.FormatContext->pb, TempPacket.data, TempPacket.size );
Result = av_interleaved_write_frame( ID.FormatContext, &TempPacket );
if( Result < 0 )
{
STREAM_ERROR( WriteFailed );
}
av_packet_unref( &TempPacket );
}
else if( Result != AVERROR(EAGAIN) )
{
continue;
}
else if( Result != AVERROR_EOF )
{
break;
}
else if( Result < 0 )
{
STREAM_ERROR( WriteFailed );
}
} while ( Result == 0);
return CameraStreamError::Success;
}
CameraStreamError OutputStream::CloseFile()
{
auto& ID = *m_InternalData;
while( true )
{
//Flush
int Result = avcodec_send_frame( ID.CodecContext, nullptr );
if( Result == 0 )
{
CameraStreamError StrError = SendAll();
if( StrError != CameraStreamError::Success )
{
return StrError;
}
}
else if( Result == AVERROR_EOF )
{
break;
}
else
{
STREAM_ERROR( WriteFailed );
}
}
int Result = av_write_trailer( ID.FormatContext );
if( Result < 0 )
{
STREAM_ERROR( WriteFailed );
}
if( !(ID.FormatContext->oformat->flags& AVFMT_NOFILE) )
{
Result = avio_close( ID.FormatContext->pb );
if( Result < 0 )
{
STREAM_ERROR( WriteFailed );
}
}
return CameraStreamError::Success;
}
Note: I have simplified a few things and inlined a few bits that were elsewhere. I have also removed all the shutdown code, as anything that happens after the file is closed is irrelevant.
Full repo here: https://github.com/IanNorris/Witness. If you clone it, the issue is with the 'Diagnostics' output; the Output file is fine. There are two hardcoded paths to X:.
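For comparison, here is a minimal sketch of the receive/drain pattern as I understand it from the send/receive documentation in avcodec.h. DrainEncoder, EncCtx, FmtCtx and OutStream are placeholder names for illustration; they do not come from the repo above.
// Sketch only. Requires libavcodec/avcodec.h and libavformat/avformat.h (wrapped in extern "C" for C++).
// Pulls every packet the encoder currently has ready and muxes it.
// Returns 0 when the encoder wants more input, AVERROR_EOF once fully flushed, or a negative error code.
int DrainEncoder( AVCodecContext* EncCtx, AVFormatContext* FmtCtx, AVStream* OutStream )
{
    for( ;; )
    {
        AVPacket Packet;
        av_init_packet( &Packet );
        Packet.data = nullptr;
        Packet.size = 0;
        int Result = avcodec_receive_packet( EncCtx, &Packet );
        if( Result == AVERROR(EAGAIN) )
        {
            return 0; // Encoder needs more input before it can emit another packet.
        }
        if( Result == AVERROR_EOF )
        {
            return AVERROR_EOF; // Fully flushed (only happens after a nullptr frame has been sent).
        }
        if( Result < 0 )
        {
            return Result; // Genuine error.
        }
        // Rescale timestamps from the codec time base to the stream time base, then mux.
        av_packet_rescale_ts( &Packet, EncCtx->time_base, OutStream->time_base );
        Packet.stream_index = OutStream->index;
        Result = av_interleaved_write_frame( FmtCtx, &Packet );
        av_packet_unref( &Packet );
        if( Result < 0 )
        {
            return Result;
        }
    }
}
One thing I am not certain about is the stream time base: the muxer is allowed to change OutStream->time_base during avformat_write_header, so the sketch assumes the value read back from the stream after the header is written is the one in use.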
-
lavf/mp3dec: avoid printing useless message in default log level
8 March 2016, by Moritz Barsnick