
Other articles (24)
-
Supporting all media types
13 April 2011. Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (HTML, CSS), LaTeX, Google Earth and (...)
-
APPENDIX: Plugins used specifically for the farm
5 March 2010. The central/master site of the farm needs several additional plugins, beyond those used by the channels, in order to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a shared-hosting (mutualisation) instance as soon as users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)
-
Encoding and processing into web-friendly formats
13 April 2011. MediaSPIP automatically converts uploaded files to internet-compatible formats.
Video files are encoded as OGV and WebM (supported by HTML5) and MP4 (supported by Flash).
Audio files are encoded as Ogg (supported by HTML5) and MP3 (supported by Flash).
Where possible, text is analyzed to extract the data needed for search-engine indexing, and the document is then exported as a series of image files.
All uploaded files are stored online in their original format, so you can (...)
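By way of illustration, conversions of this kind are typically done with ffmpeg along the following lines (the input and output names are placeholders; the exact options MediaSPIP uses are not specified here):

    ffmpeg -i input.mov -c:v libvpx -b:v 1M -c:a libvorbis output.webm     # WebM for HTML5 playback
    ffmpeg -i input.mov -c:v libx264 -pix_fmt yuv420p -c:a aac output.mp4  # MP4 for Flash-based playback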
On other sites (3324)
-
FLV created using the ffmpeg library plays too fast
25 April 2015, by Muhammad Ali. I am muxing an H.264 Annex-B stream and an ADTS AAC stream coming from an IP camera into an FLV. I have gone through all the necessary steps (that I knew of), e.g. stripping the ADTS header from the AAC and converting the H.264 Annex-B stream to AVC (length-prefixed) format.
I am able to create the FLV file, and it plays, but it plays too fast. The parameters of my output video codec are:
Time base = 1/60000 <-- I don't know why
Bitrate = 591949 (591 kbps)
GOP size = 12
FPS = 30 fps (that's the rate the encoder sends me data at)

The parameters of my output audio codec are:

Time base = 1/44100
Bitrate = 45382 (45 kbps)
Sample rate = 48000

I am using AV_NOPTS_VALUE for both the audio and the video packets.
The resultant video has double the bit rate (2x (audio bitrate + video bitrate)) and half the duration.
If I play the resultant file in ffplay, the video plays back fast and ends quickly, but the audio plays at its original speed, so even after the video has ended the audio keeps playing for its full duration. If I instead set pts and dts to an increasing index (separate indices for audio and video), the video plays super fast, the bit rate shoots up to an insane value and the video duration becomes very short, but the audio plays fine and on time.
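A minimal sketch of the usual alternative to writing AV_NOPTS_VALUE, assuming packet.header.dataPTS is a monotonically increasing camera timestamp on a 90 kHz clock (the clock rate and the constant 30 fps below are assumptions, not something stated in the post):

    /* Sketch: real, rescaled timestamps instead of AV_NOPTS_VALUE.        */
    AVRational src_tb = (AVRational){1, 90000};   /* assumed camera clock  */
    AVRational dst_tb = ofmt_ctx->streams[outStreamIndex]->time_base;

    pkt.pts = av_rescale_q_rnd(packet.header.dataPTS, src_tb, dst_tb,
                               AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX);
    pkt.dts = pkt.pts;                            /* no B-frames assumed   */
    pkt.duration = av_rescale_q(1, (AVRational){1, 30}, dst_tb);  /* 30 fps assumed */

With timestamps present, the FLV muxer (whose time base is 1/1000, i.e. milliseconds) can pace the stream, and the tbr that ffprobe reports should settle near the real frame rate.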
EDIT :
Duration: 00:00:09.96, start: 0.000000, bitrate: 1230 kb/s
Stream #0:0: Video: h264 (Main), yuvj420p(pc, bt709), 1280x720 [SAR 1:1 DAR 16:9], 591 kb/s, 30.33 fps, 59.94 tbr, 1k tbn, 59.94 tbc
Stream #0:1: Audio: aac, 48000 Hz, mono, fltp, 45 kb/s

Why is tbr 59.94? How was that calculated? Maybe that is the problem?
Code for muxing :
if(packet.header.dataType == TRANSFER_PACKET_TYPE_H264)
{
if((packet.data[0] == 0x00) && (packet.data[1] == 0x00) && (packet.data[2]==0x00) && (packet.data[3]==0x01))
{
unsigned char tempCurrFrameLength[4];
unsigned int nal_unit_length;
unsigned char nal_unit_type;
unsigned int cursor = 0;
int size = packet.header.dataLen;
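/* Walk the Annex-B buffer one NAL unit at a time: each pass of the loop below
   peels off one start-code-delimited NAL unit and, unless it is skipped
   (SEI/AUD/SPS/PPS), emits it as a single length-prefixed packet. */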
do {
av_init_packet(&pkt);
int currFrameLength = 0;
if((packet.header.frameType == TRANSFER_FRAME_IDR_VIDEO) || (packet.header.frameType == TRANSFER_FRAME_I_VIDEO))
{
//pkt.flags |= AV_PKT_FLAG_KEY;
}
pkt.stream_index = packet.header.streamId;//0;//ost->st->index; //stream index 0 for vid : 1 for aud
outStreamIndex = outputVideoStreamIndex;
/*vDuration += (packet.header.dataPTS - lastvPts);
lastvPts = packet.header.dataPTS;
pkt.pts = pkt.dts= packet.header.dataPTS;*/
pkt.pts = pkt.dts = AV_NOPTS_VALUE;
if(framebuff != NULL)
{
//printf("Framebuff has mem alloc : freeing 1\n\n");
free(framebuff);
framebuff = NULL;
//printf("free successfully \n\n");
}
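/* GetOneNalUnit() is not shown in the post; it is assumed to return the size of the
   next start-code-delimited NAL unit (including its 4-byte start code) and to report
   the NAL unit type through its first argument. */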
nal_unit_length = GetOneNalUnit(&nal_unit_type, packet.data + cursor/*pData+cursor*/, size-cursor);
if(nal_unit_length > 0 && nal_unit_type > 0)
{
}
else
{
printf("Fatal error : nal unit lenth wrong \n\n");
exit(0);
}
write_header_done = 1;
//#define _USE_SPS_PPS //comment this line to write everything onto the stream: SPS+PPS+frame+frame...
#ifdef _USE_SPS_PPS
if (nal_unit_type == 0x07 /*NAL_SPS*/)
{ // write sps
printf("Got SPS \n");
if (_sps == NULL)
{
_sps_size = nal_unit_length -4;
_sps = new U8[_sps_size];
memcpy(_sps, packet.data+cursor+4, _sps_size); //exclude start code 0x00000001
}
}
else if (nal_unit_type == 0x08/*NAL_PPS*/)
{ // write pps
printf("Got PPS \n");
if (_pps == NULL)
{
_pps_size = nal_unit_length -4;
_pps = new U8[_pps_size];
memcpy(_pps, packet.data+cursor+4, _pps_size); //exclude start code 0x00000001
//out_stream->codec->extradata
//ofmt_ctx->streams[outputVideoStreamIndex]->codec->extradata
free(ofmt_ctx->streams[outputVideoStreamIndex]->codec->extradata);
ofmt_ctx->streams[outputVideoStreamIndex]->codec->extradata = (uint8_t*)av_mallocz(_sps_size + _pps_size);
memcpy(ofmt_ctx->streams[outputVideoStreamIndex]->codec->extradata,_sps,_sps_size);
memcpy(ofmt_ctx->streams[outputVideoStreamIndex]->codec->extradata + _sps_size,_pps,_pps_size);
ret = avformat_write_header(ofmt_ctx, NULL);
if (ret < 0) {
//fprintf(stderr, "Error occurred when opening output file\n");
printf("Error occured when opening output \n");
exit(0);
}
write_header_done = 1;
printf("Done writing header \n");
}
}
//else
#endif /*end _USE_SPS_PPS */
{ //IDR Frame
videoPts++;
if( (nal_unit_type == 0x06) || (nal_unit_type == 0x09) || (nal_unit_type == 0x07) || (nal_unit_type == 0x08))
{
av_free_packet(&pkt);
cursor += nal_unit_length;
continue;
}
if( (nal_unit_type == 0x05) || (nal_unit_type == 0x05))
{
//videoPts++;
}
if ((nal_unit_type != 0x07) && (nal_unit_type != 0x08))
{
vDuration += (packet.header.dataPTS - lastvPts);
lastvPts = packet.header.dataPTS;
//pkt.pts = pkt.dts= packet.header.dataPTS;
pkt.pts = pkt.dts= AV_NOPTS_VALUE;//videoPts;
}
else
{
//probably sps pps ... no need to transmit. free the packet
//av_free_packet(&pkt);
pkt.pts = pkt.dts = AV_NOPTS_VALUE;
}
currFrameLength = nal_unit_length - 4;//packet.header.dataLen -4;
tempCurrFrameLength[3] = currFrameLength;
tempCurrFrameLength[2] = currFrameLength>>8;
tempCurrFrameLength[1] = currFrameLength>>16;
tempCurrFrameLength[0] = currFrameLength>>24;
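/* The four bytes above form a big-endian length prefix: FLV carries H.264 in AVCC
   form, where every NAL unit is preceded by its length instead of the Annex-B
   0x00000001 start code. */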
if(nal_unit_type == 0x05)
{
pkt.flags |= AV_PKT_FLAG_KEY;
}
framebuff = (unsigned char *)malloc(sizeof(unsigned char)* /*packet.header.dataLen*/nal_unit_length );
if(framebuff == NULL)
{
printf("Failed to allocate memory for frame \n\n ");
exit(0);
}
memcpy(framebuff, tempCurrFrameLength,0x04);
//memcpy(&framebuff[4], &packet.data[4] , currFrameLength);
//put_buffer(pData + cursor + 4, nal_unit_length - 4);// save ES data
memcpy(framebuff+4,packet.data + cursor + 4, currFrameLength );
pkt.data = framebuff;
pkt.size = nal_unit_length;//packet.header.dataLen ;
//printf("\nPrinting Frame| Size: %d | NALU Lenght: %d | NALU: %02x \n",pkt.size,nal_unit_length ,nal_unit_type);
/* GET READY TO TRANSMIT THE packet */
//pkt.duration = vDuration;
in_stream = ifmt_ctx->streams[pkt.stream_index];
out_stream = ofmt_ctx->streams[outStreamIndex];
cn = out_stream->codec;
//av_packet_rescale_ts(&pkt, cn->time_base, out_stream->time_base);
//pkt.pts = av_rescale_q_rnd(pkt.pts, in_stream->time_base, out_stream->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
//pkt.dts = av_rescale_q_rnd(pkt.dts, in_stream->time_base, out_stream->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
//pkt.duration = av_rescale_q(pkt.duration, in_stream->time_base, out_stream->time_base);
pkt.pos = -1;
pkt.stream_index = outStreamIndex;
if (!write_header_done)
{
}
else
{
//doxygen suggests i use av_write_frame if i am taking care of interleaving
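/* av_interleaved_write_frame() interleaves packets across streams by dts, so it
   needs valid, comparable timestamps to order video against audio correctly. */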
ret = av_interleaved_write_frame(ofmt_ctx, &pkt);
//ret = av_write_frame(ofmt_ctx, &pkt);
if (ret < 0)
{
fprintf(stderr, "Error muxing Video packet\n");
continue;
}
}
/*for(int ii = 0; ii < pkt.size ; ii++)
printf("%02x ",framebuff[ii]);*/
av_free_packet(&pkt);
if(framebuff != NULL)
{
//printf("Framebuff has mem alloc : freeing 2\n\n");
free(framebuff);
framebuff = NULL;
//printf("Freeing successfully \n\n");
}
/* TRANSMIT DONE */
}
cursor += nal_unit_length;
}while(cursor < size);
}
else
{
printf("This is not annex B bitstream \n\n");
for(int ii = 0; ii < packet.header.dataLen ; ii++)
printf("%02x ",packet.data[ii]);
printf("\n\n");
exit(0);
}
//video frame has been parsed completely.
continue;
}
else if(packet.header.dataType == TRANSFER_PACKET_TYPE_AAC)
{
av_init_packet(&pkt);
pkt.flags = 1;
pkt.pts = audioPts*1024;
pkt.dts = audioPts*1024;
//pkt.duration = 1024;
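/* One AAC frame holds 1024 samples, so with a 1/sample_rate time base each audio
   packet advances pts/dts by 1024 (which is what audioPts*1024 expresses). Note that
   the audio time base reported above is 1/44100 while the sample rate is 48000 Hz. */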
pkt.stream_index = packet.header.streamId + 1;//1;//ost->st->index; //stream index 0 for vid : 1 for aud
outStreamIndex = outputAudioStreamIndex;
//aDuration += (packet.header.dataPTS - lastaPts);
//lastaPts = packet.header.dataPTS;
//NOTE: audio sync requires this value
pkt.pts = pkt.dts= AV_NOPTS_VALUE ;
//pkt.pts = pkt.dts=audioPts++;
pkt.data = (uint8_t *)packet.data;//raw_data;
pkt.size = packet.header.dataLen;
}
//packet.header.streamId
//now assigning pkt.data in repsective if statements above
//pkt.data = (uint8_t *)packet.data;//raw_data;
//pkt.size = packet.header.dataLen;
//pkt.duration = 24000; //24000 assumed basd on observation
//duration calculation
/*if(packet.header.dataType == TRANSFER_PACKET_TYPE_H264)
{
pkt.duration = vDuration;
}
else*/ if(packet.header.dataType == TRANSFER_PACKET_TYPE_AAC)
{
//pkt.duration = aDuration;
}
in_stream = ifmt_ctx->streams[pkt.stream_index];
out_stream = ofmt_ctx->streams[outStreamIndex];
cn = out_stream->codec;
if(packet.header.dataType == TRANSFER_PACKET_TYPE_AAC)
ret= av_bitstream_filter_filter(aacbsfc, in_stream->codec, NULL, &pkt.data, &pkt.size, packet.data/*pkt.data*/, packet.header.dataLen, pkt.flags & AV_PKT_FLAG_KEY);
if(ret < 0)
{
printf("Failed to execute aac bitstream filter \n\n");
exit(0);
}
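/* aacbsfc is assumed to be the "aac_adtstoasc" bitstream filter: it strips the ADTS
   header from each AAC frame and moves the AudioSpecificConfig into the stream's
   extradata, which is the form the FLV muxer expects for AAC. */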
//if(packet.header.dataType == TRANSFER_PACKET_TYPE_H264)
// av_bitstream_filter_filter(h264bsfc, in_stream->codec, NULL, &pkt.data, &pkt.size, packet.data/*pkt.data*/, pkt.size, 0);
pkt.flags = 1;
//NOTE : Commented the lines below synced audio and video streams
//av_packet_rescale_ts(&pkt, cn->time_base, out_stream->time_base);
//pkt.pts = av_rescale_q_rnd(pkt.pts, in_stream->time_base, out_stream->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
//pkt.dts = av_rescale_q_rnd(pkt.dts, in_stream->time_base, out_stream->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
//pkt.duration = av_rescale_q(pkt.duration, in_stream->time_base, out_stream->time_base);
//enabled on Tuesday
pkt.pos = -1;
pkt.stream_index = outStreamIndex;
//doxygen suggests i use av_write_frame if i am taking care of interleaving
ret = av_interleaved_write_frame(ofmt_ctx, &pkt);
//ret = av_write_frame(ofmt_ctx, &pkt);
if (ret < 0)
{
fprintf(stderr, "Error muxing packet\n");
continue;
}
av_free_packet(&pkt);
if(framebuff != NULL)
{
//printf("Framebuff has mem alloc : freeing 2\n\n");
free(framebuff);
framebuff = NULL;
//printf("Freeing successfully \n\n");
}
}
-
Why can't my smartphone play a video?
6 July 2015, by John Smith. I create a video from frames, with no sound, with ffmpeg:
ffmpeg -f image2 -r 1 -i "img%d.png" -vcodec libx264 -pix_fmt yuv420p -movflags faststart x.mp4
On the desktop it plays fine, but on the smartphone Firefox says:
no video with supported format and MIME type found.
The source images are 1024x768. What could I do? The HTML5 is:
<video controls="controls" autoplay="autoplay">
<source type="video/mp4" src="/x.mp4"></source>
</video>
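For reference, mobile browsers are often stricter than desktop ones about the H.264 profile and level; a commonly tried variant of the command above constrains the encode to the Baseline profile (the profile/level choice here is illustrative, not something given in the question):

    ffmpeg -f image2 -r 1 -i "img%d.png" -vcodec libx264 -profile:v baseline -level 3.0 -pix_fmt yuv420p -movflags faststart x.mp4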
-
iOS FFmpeg video stream plays in fast motion for 1-2 sec
6 August 2015, by bipin kumar. I have a video feed app in which the FFmpeg library is used to show video from an RTSP server. After buffering a stream, the stream plays in fast motion for a short while (1-2 seconds) and then plays at normal speed.