
Media (91)
-
Spitfire Parade - Crisis
15 May 2011
Updated: September 2011
Language: English
Type: Audio
-
Wired NextMusic
14 May 2011
Updated: February 2012
Language: English
Type: Video
-
Video d’abeille en portrait
14 May 2011
Updated: February 2012
Language: French
Type: Video
-
Sintel MP4 Surround 5.1 Full
13 May 2011
Updated: February 2012
Language: English
Type: Video
-
Carte de Schillerkiez
13 May 2011
Updated: September 2011
Language: English
Type: Text
-
Publier une image simplement
13 April 2011
Updated: February 2012
Language: French
Type: Video
Other articles (12)
-
Supporting all media types
13 April 2011
Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
-
Submit bugs and patches
13 April 2011
Unfortunately, software is never perfect.
If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information: the browser you are using, including the exact version; as precise an explanation of the problem as possible; if possible, the steps that led to the problem; a link to the site / page in question.
If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
You may also (...)
-
List of compatible distributions
26 April 2011
The table below lists the Linux distributions compatible with MediaSPIP's automated installation script.
Distribution name   Version name           Version number
Debian              Squeeze                6.x.x
Debian              Wheezy                 7.x.x
Debian              Jessie                 8.x.x
Ubuntu              The Precise Pangolin   12.04 LTS
Ubuntu              The Trusty Tahr        14.04
If you want to help us improve this list, you can give us access to a machine whose distribution is not listed above, or send the fixes needed to add (...)
On other sites (3163)
-
Compute PTS and DTS correctly to sync audio and video ffmpeg C++
14 August 2015, by Kaidul Islam

I am trying to mux H264-encoded data and G.711 PCM data into a mov multimedia container. I am creating an AVPacket from the encoded data, and initially the PTS and DTS values of the video/audio frames are equal to AV_NOPTS_VALUE. So I calculated the DTS from the current time. My code:

bool AudioVideoRecorder::WriteVideo(const unsigned char *pData, size_t iDataSize, bool const bIFrame) {
.....................................
.....................................
.....................................
AVPacket pkt = {0};
av_init_packet(&pkt);
int64_t dts = av_gettime();
dts = av_rescale_q(dts, (AVRational){1, 1000000}, m_pVideoStream->time_base);
int duration = 90000 / VIDEO_FRAME_RATE;
if(m_prevVideoDts > 0LL) {
duration = dts - m_prevVideoDts;
}
m_prevVideoDts = dts;
pkt.pts = AV_NOPTS_VALUE;
pkt.dts = m_currVideoDts;
m_currVideoDts += duration;
pkt.duration = duration;
if(bIFrame) {
pkt.flags |= AV_PKT_FLAG_KEY;
}
pkt.stream_index = m_pVideoStream->index;
pkt.data = (uint8_t*) pData;
pkt.size = iDataSize;
int ret = av_interleaved_write_frame(m_pFormatCtx, &pkt);
if(ret < 0) {
LogErr("Writing video frame failed.");
return false;
}
Log("Writing video frame done.");
av_free_packet(&pkt);
return true;
}
bool AudioVideoRecorder::WriteAudio(const unsigned char *pEncodedData, size_t iDataSize) {
.................................
.................................
.................................
AVPacket pkt = {0};
av_init_packet(&pkt);
int64_t dts = av_gettime();
dts = av_rescale_q(dts, (AVRational){1, 1000000}, (AVRational){1, 90000});
int duration = AUDIO_STREAM_DURATION; // 20
if(m_prevAudioDts > 0LL) {
duration = dts - m_prevAudioDts;
}
m_prevAudioDts = dts;
pkt.pts = AV_NOPTS_VALUE;
pkt.dts = m_currAudioDts;
m_currAudioDts += duration;
pkt.duration = duration;
pkt.stream_index = m_pAudioStream->index;
pkt.flags |= AV_PKT_FLAG_KEY;
pkt.data = (uint8_t*) pEncodedData;
pkt.size = iDataSize;
int ret = av_interleaved_write_frame(m_pFormatCtx, &pkt);
if(ret < 0) {
LogErr("Writing audio frame failed: %d", ret);
return false;
}
Log("Writing audio frame done.");
av_free_packet(&pkt);
return true;
}

And I added the stream like this:
AVStream* AudioVideoRecorder::AddMediaStream(enum AVCodecID codecID) {
................................
.................................
pStream = avformat_new_stream(m_pFormatCtx, codec);
if (!pStream) {
LogErr("Could not allocate stream.");
return NULL;
}
pStream->id = m_pFormatCtx->nb_streams - 1;
pCodecCtx = pStream->codec;
pCodecCtx->codec_id = codecID;
switch(codec->type) {
case AVMEDIA_TYPE_VIDEO:
pCodecCtx->bit_rate = VIDEO_BIT_RATE;
pCodecCtx->width = PICTURE_WIDTH;
pCodecCtx->height = PICTURE_HEIGHT;
pStream->time_base = (AVRational){1, 90000};
pStream->avg_frame_rate = (AVRational){90000, 1};
pStream->r_frame_rate = (AVRational){90000, 1}; // though the frame rate is variable and around 15 fps
pCodecCtx->pix_fmt = STREAM_PIX_FMT;
m_pVideoStream = pStream;
break;
case AVMEDIA_TYPE_AUDIO:
pCodecCtx->sample_fmt = AV_SAMPLE_FMT_S16;
pCodecCtx->bit_rate = AUDIO_BIT_RATE;
pCodecCtx->sample_rate = AUDIO_SAMPLE_RATE;
pCodecCtx->channels = 1;
m_pAudioStream = pStream;
break;
default:
break;
}
/* Some formats want stream headers to be separate. */
if (m_pOutputFmt->flags & AVFMT_GLOBALHEADER)
m_pFormatCtx->flags |= CODEC_FLAG_GLOBAL_HEADER;
return pStream;
}

There are several problems with this calculation:
-
The video is laggy and falls behind the audio more and more over time.
-
Suppose an audio frame is received (WriteAudio(..)) a little late, say by 3 seconds; that late frame should start playing with a 3-second delay, but it doesn't. The delayed frame is played back-to-back with the previous frame.
-
Sometimes I record for 40 seconds, but the file duration is much longer, around 2 minutes; audio/video plays for only about 40 seconds, the rest of the file contains nothing, and the seek bar jumps to the end immediately after 40 seconds (tested in VLC).
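For orientation, here is a minimal sketch (not the poster's code, just one workable approach under stated assumptions) of deriving packet timestamps from a wall clock: rescale the microsecond capture time into the stream's time_base with av_rescale_q, make the first packet start at zero, set both pts and dts (identical when there are no B-frames), and keep them strictly increasing. m_firstCaptureUs and m_prevTs are hypothetical members.

// Sketch only: timestamping one video packet from a wall-clock capture time.
// Assumes no B-frames (so pts == dts). m_firstCaptureUs and m_prevTs are
// hypothetical int64_t members, both initialised to -1.
int64_t nowUs = av_gettime();                          // capture time in microseconds
if (m_firstCaptureUs < 0)
    m_firstCaptureUs = nowUs;                          // remember when recording started
int64_t ts = av_rescale_q(nowUs - m_firstCaptureUs,
                          (AVRational){1, 1000000},    // from: microseconds
                          m_pVideoStream->time_base);  // to: the stream's time base
if (m_prevTs >= 0 && ts <= m_prevTs)
    ts = m_prevTs + 1;                                 // keep timestamps strictly increasing
pkt.pts = ts;                                          // presentation time
pkt.dts = ts;                                          // decode time, same without reordering
pkt.duration = (m_prevTs >= 0) ? (ts - m_prevTs) : 0;  // rough estimate from the previous delta
m_prevTs = ts;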
EDIT:
According to Ronald S. Bultje's suggestion, what I've understood:

m_pAudioStream->time_base = (AVRational){1, 9000}; // actually no need to set this, as 9000 is already the default value for audio, as you said
m_pVideoStream->time_base = (AVRational){1, 9000};

should be set, since both audio and video streams are then in the same time-base units.
And for video :
...................
...................
int64_t dts = av_gettime(); // get current time in microseconds
dts *= 9000;
dts /= 1000000; // 1 second = 10^6 microseconds
pkt.pts = AV_NOPTS_VALUE; // is it okay?
pkt.dts = dts;
// and no need to set pkt.duration, right?

And for audio (exactly the same as video, right?):
...................
...................
int64_t dts = av_gettime(); // get current time in microseconds
dts *= 9000;
dts /= 1000000; // 1 second = 10^6 microseconds
pkt.pts = AV_NOPTS_VALUE; // is it okay?
pkt.dts = dts;
// and no need to set pkt.duration, right?

And I think they are now sharing the same currDts, right? Please correct me if I am wrong anywhere or missing anything.

Also, if I want to use (AVRational){1, frameRate} as the video stream time base and (AVRational){1, sampleRate} as the audio stream time base, what should the correct code look like?
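On that per-stream time-base question, a compact sketch of what the conversion might look like (an assumption, not an authoritative answer); frameRate, sampleRate and m_startTimeUs are placeholders here:

// Sketch: one microsecond clock value expressed in each stream's own time base.
int64_t nowUs = av_gettime() - m_startTimeUs;                   // m_startTimeUs: hypothetical recording start
int64_t videoTs = av_rescale_q(nowUs, (AVRational){1, 1000000},
                               (AVRational){1, frameRate});     // video ticks of 1/frameRate s
int64_t audioTs = av_rescale_q(nowUs, (AVRational){1, 1000000},
                               (AVRational){1, sampleRate});    // audio ticks of 1/sampleRate s
// With audio it is also common to count samples instead of reading a clock,
// i.e. advance the audio timestamp by the number of samples in each packet.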
EDIT 2.0:

m_pAudioStream->time_base = (AVRational){1, VIDEO_FRAME_RATE};
m_pVideoStream->time_base = (AVRational){1, VIDEO_FRAME_RATE};

And
bool AudioVideoRecorder::WriteAudio(const unsigned char *pEncodedData, size_t iDataSize) {
...........................
......................
AVPacket pkt = {0};
av_init_packet(&pkt);
int64_t dts = av_gettime() / 1000; // convert into millisecond
dts = dts * VIDEO_FRAME_RATE;
if(m_dtsOffset < 0) {
m_dtsOffset = dts;
}
pkt.pts = AV_NOPTS_VALUE;
pkt.dts = (dts - m_dtsOffset);
pkt.stream_index = m_pAudioStream->index;
pkt.flags |= AV_PKT_FLAG_KEY;
pkt.data = (uint8_t*) pEncodedData;
pkt.size = iDataSize;
int ret = av_interleaved_write_frame(m_pFormatCtx, &pkt);
if(ret < 0) {
LogErr("Writing audio frame failed: %d", ret);
return false;
}
Log("Writing audio frame done.");
av_free_packet(&pkt);
return true;
}
bool AudioVideoRecorder::WriteVideo(const unsigned char *pData, size_t iDataSize, bool const bIFrame) {
........................................
.................................
AVPacket pkt = {0};
av_init_packet(&pkt);
int64_t dts = av_gettime() / 1000;
dts = dts * VIDEO_FRAME_RATE;
if(m_dtsOffset < 0) {
m_dtsOffset = dts;
}
pkt.pts = AV_NOPTS_VALUE;
pkt.dts = (dts - m_dtsOffset);
if(bIFrame) {
pkt.flags |= AV_PKT_FLAG_KEY;
}
pkt.stream_index = m_pVideoStream->index;
pkt.data = (uint8_t*) pData;
pkt.size = iDataSize;
int ret = av_interleaved_write_frame(m_pFormatCtx, &pkt);
if(ret < 0) {
LogErr("Writing video frame failed.");
return false;
}
Log("Writing video frame done.");
av_free_packet(&pkt);
return true;
}

Is the last change okay? The video and audio seem synced. The only problem is that the audio is played without any delay, regardless of whether the packet arrived late.
Like this:
packet arrival : 1 2 3 4... (then the next frame arrived after 3 sec) .. 5
audio played : 1 2 3 4 (no delay) 5
EDIT 3.0:
Zeroed audio sample data:
AVFrame* pSilentData;
pSilentData = av_frame_alloc();
memset(&pSilentData->data[0], 0, iDataSize);
pkt.data = (uint8_t*) pSilentData;
pkt.size = iDataSize;
av_freep(&pSilentData->data[0]);
av_frame_free(&pSilentData);

Is this okay? But after writing this into the file container, there is a "dot dot" noise while playing the media. What's the problem?
EDIT 4.0:
Well, for µ-law audio the zero value is represented as 0xff. So

memset(&pSilentData->data[0], 0xff, iDataSize);

solved my problem.
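As an aside, the snippet above memsets the address of the data pointer itself and hands an AVFrame* to pkt.data; a minimal corrected sketch of writing one packet of µ-law silence might look like this (an assumption, reusing the member names from the question, with iDataSize bytes of raw G.711 payload):

// Sketch: write one packet of µ-law silence (0xff bytes) into the muxer.
// iDataSize, m_pAudioStream and m_pFormatCtx are the members used in the question.
uint8_t *pSilence = (uint8_t *) av_malloc(iDataSize);
if (!pSilence)
    return false;
memset(pSilence, 0xff, iDataSize);          // 0xff is digital silence in µ-law

AVPacket pkt = {0};
av_init_packet(&pkt);
pkt.stream_index = m_pAudioStream->index;
pkt.flags |= AV_PKT_FLAG_KEY;
pkt.data = pSilence;
pkt.size = iDataSize;
// pts/dts/duration would be filled in exactly like a normal audio packet here.

int ret = av_interleaved_write_frame(m_pFormatCtx, &pkt);
av_free(pSilence);                          // the muxer buffers its own copy of non-refcounted data
return ret >= 0;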
-
ffmpeg ffserver - create a mosaic from two 720p webcam feeds
28 July 2015, by der_felix

For a project I would like to take the video feeds (NO audio) of two Logitech C920 webcams, put them side by side and stream them.
The C920 can compress the video feed with H264 itself (if enabled) and delivers 1080p at up to 30 fps.
The stream is then loaded in an Android app by an ffmpeg library and rendered to the screen.

What I already know:
I know that I can take multiple streams or input files and create a mosaic stream via the filter_complex module.
HTTP and H264 seem to be good for streaming, but other configurations are also welcome if they are faster/better.

The question:
How can I start the cameras with v4l2, set the camera resolution and the camera's internal encoding, and use these streams to create the mosaic?
The mosaic should be unscaled (= 2560x720 px).
I also very often get the error code 256 but couldn't find out what it means.

The system: laptop with USB 3, Ubuntu 15.04 x64, ffmpeg 2.7.1 and ffserver 2.5.7.

Thanks for your help.
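For reference only, not from the original post: a standalone capture-and-mosaic command might look roughly like the sketch below (device paths, the output file name and the pad/overlay layout are assumptions; per-input options such as -input_format and -video_size must come before the -i they apply to). Testing it outside ffserver first makes the actual ffmpeg error visible.

# Sketch: capture both C920s over v4l2 and build an unscaled 2560x720 side-by-side mosaic.
ffmpeg \
  -f v4l2 -input_format h264 -video_size 1280x720 -framerate 30 -i /dev/video0 \
  -f v4l2 -input_format h264 -video_size 1280x720 -framerate 30 -i /dev/video1 \
  -filter_complex "[0:v]setpts=PTS-STARTPTS[left];[1:v]setpts=PTS-STARTPTS[right];[left]pad=2560:720[base];[base][right]overlay=x=1280" \
  -c:v libx264 -preset ultrafast -f mpegts output.ts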
ffserver config :
HTTPPort 8080
HTTPBindAddress 0.0.0.0
MaxHTTPConnections 2000
MaxClients 1000
MaxBandwidth 50000
CustomLog -
#NoDaemon
<feed>
File /tmp/feed1.ffm
Launch ffmpeg -f v4l2 - input_format h264 -i /dev/video0 -i /dev/video1 -size 1280x720 -r 30 -filter_complex "nullsrc=size=2560x720 [base]; [0:v] setpts=PTS-STARTPTS [left]; [1:v] setpts=PTS-STARTPTS [right]; [base][left] overlay=shortest=1 [tmp1]; [tmp1][right] overlay=shortest=1:x=1280" -c:v libx264 -f mpegts
</feed>
<stream>
Feed feed1.ffm
Format mpegts
VideoBitRate 1024
#VideoBufferSize 1024
VideoFrameRate 30
#VideoSize hd720
VideoSize 2560x720
#VideoIntraOnly
#VideoGopSize 12
VideoCodec libx264
NoAudio
VideoQMin 3
VideoQMax 31
NoDefaults
</stream>
<stream>
Format status
#Only allow local people to get the status
ACL allow localhost
ACL allow 192.168.0.0 192.168.255.255
</stream>

Output:
ubuntu@ubuntu:~$ ffserver
ffserver version 2.5.7-0ubuntu0.15.04.1 Copyright (c) 2000-2015 the FFmpeg developers
built with gcc 4.9.2 (Ubuntu 4.9.2-10ubuntu13)
configuration: --prefix=/usr --extra-version=0ubuntu0.15.04.1 --build-suffix=-ffmpeg --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --shlibdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --enable-shared --disable-stripping --enable-avresample --enable-avisynth --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-libschroedinger --enable-libshine --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libwavpack --enable-libwebp --enable-libxvid --enable-opengl --enable-x11grab --enable-libdc1394 --enable-libiec61883 --enable-libzvbi --enable-libzmq --enable-frei0r --enable-libvpx --enable-libx264 --enable-libsoxr --enable-gnutls --enable-openal --enable-libopencv --enable-librtmp --enable-libx265
libavutil 54. 15.100 / 54. 15.100
libavcodec 56. 13.100 / 56. 13.100
libavformat 56. 15.102 / 56. 15.102
libavdevice 56. 3.100 / 56. 3.100
libavfilter 5. 2.103 / 5. 2.103
libavresample 2. 1. 0 / 2. 1. 0
libswscale 3. 1.101 / 3. 1.101
libswresample 1. 1.100 / 1. 1.100
libpostproc 53. 3.100 / 53. 3.100
Tue Jul 28 10:13:44 2015 FFserver started.
Tue Jul 28 10:13:44 2015 Launch command line: ffmpeg -f v4l2 - input_format h264 -i /dev/video0 -i /dev/video1 -size 1280x720 -r 30 -filter_complex nullsrc=size=2560x720 [base]; [0:v] setpts=PTS-STARTPTS [left]; [1:v] setpts=PTS-STARTPTS [right]; [base][left] overlay=shortest=1 [tmp1]; [tmp1][right] overlay=shortest=1:x=1280 -c:v libx264 -f mpegts http://127.0.0.1:8080/feed1.ffm
feed1.ffm: Pid 17388 exited with status 256 after 0 seconds

Hey guys!
Here is our plan B for the mosaic stream.
Alternative config:
HTTPPort 8080
HTTPBindAddress 0.0.0.0
MaxHTTPConnections 2000
MaxClients 1000
MaxBandwidth 50000
CustomLog -
<feed>
File /tmp/feedlinks.ffm
Launch ffmpeg -f v4l2 -input_format h264 -vcodec h264 -i /dev/video0 -video_size 1280x720 -r 30
</feed>
<feed>
File /tmp/feedrechts.ffm
Launch ffmpeg -f v4l2 -input_format h264 -vcodec h264 -i /dev/video1 -video_size 1280x720 -r 30
</feed>
<stream>
Feed feedlinks.ffm
Format mpegts
VideoBitRate 512
VideoFrameRate 30
VideoSize hd720
VideoCodec libx264
NoAudio
VideoQMin 3
VideoQMax 31
</stream>
<stream>
Feed feedrechts.ffm
Format mpegts
VideoBitRate 512
VideoFrameRate 30
VideoSize hd720
VideoCodec libx264
NoAudio
VideoQMin 3
VideoQMax 31
</stream>
<feed>
File /tmp/feedmosaic.ffm
Launch ffmpeg -i http://localhost:8080/testlinks.mpg -i http://localhost:8080/testrechts.mpg -filter_complex "nullsrc=size=2560x720 [base]; [0:v] setpts=PTS-STARTPTS [left]; [1:v] setpts=PTS-STARTPTS [right]; [base][left] overlay=shortest=1 [tmp1]; [tmp1][right] overlay=shortest=1:x=1280" -c:v libx264 -preset ultrafast -f mpegts
</feed>
<stream>
Feed feedmosaic.ffm
Format mpegts # Format of the stream
VideoFrameRate 30 # Number of frames per second
VideoSize 2560x720
VideoCodec libx264 # Choose your codecs.
NoAudio # Suppress audio
VideoQMin 3 # Videoquality ranges from 1 - 31 (worst to best)
VideoQMax 31
NoDefaults
</stream>
<stream> # Server status URL
Format status
# Only allow local people to get the status
ACL allow localhost
ACL allow 192.168.0.0 192.168.255.255
ACL allow 192.168.178.0 192.168.255.255
</stream>

And this is the new output:
ubuntu@ubuntu:~$ ffserver
ffserver version 2.5.7-0ubuntu0.15.04.1 Copyright (c) 2000-2015 the FFmpeg developers
built with gcc 4.9.2 (Ubuntu 4.9.2-10ubuntu13)
configuration: --prefix=/usr --extra-version=0ubuntu0.15.04.1 --build-suffix=-ffmpeg --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --shlibdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --enable-shared --disable-stripping --enable-avresample --enable-avisynth --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-libschroedinger --enable-libshine --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libwavpack --enable-libwebp --enable-libxvid --enable-opengl --enable-x11grab --enable-libdc1394 --enable-libiec61883 --enable-libzvbi --enable-libzmq --enable-frei0r --enable-libvpx --enable-libx264 --enable-libsoxr --enable-gnutls --enable-openal --enable-libopencv --enable-librtmp --enable-libx265
libavutil 54. 15.100 / 54. 15.100
libavcodec 56. 13.100 / 56. 13.100
libavformat 56. 15.102 / 56. 15.102
libavdevice 56. 3.100 / 56. 3.100
libavfilter 5. 2.103 / 5. 2.103
libavresample 2. 1. 0 / 2. 1. 0
libswscale 3. 1.101 / 3. 1.101
libswresample 1. 1.100 / 1. 1.100
libpostproc 53. 3.100 / 53. 3.100
/etc/ffserver.conf:44: Setting default value for video bit rate tolerance = 128000. Use NoDefaults to disable it.
/etc/ffserver.conf:44: Setting default value for video rate control equation = tex^qComp. Use NoDefaults to disable it.
/etc/ffserver.conf:44: Setting default value for video max rate = 1024000. Use NoDefaults to disable it.
/etc/ffserver.conf:44: Setting default value for video buffer size = 1024000. Use NoDefaults to disable it.
/etc/ffserver.conf:61: Setting default value for video bit rate tolerance = 128000. Use NoDefaults to disable it.
/etc/ffserver.conf:61: Setting default value for video rate control equation = tex^qComp. Use NoDefaults to disable it.
/etc/ffserver.conf:61: Setting default value for video max rate = 1024000. Use NoDefaults to disable it.
/etc/ffserver.conf:61: Setting default value for video buffer size = 1024000. Use NoDefaults to disable it.
Tue Jul 28 11:13:01 2015 Codec bitrates do not match for stream 0
Tue Jul 28 11:13:01 2015 FFserver started.
Tue Jul 28 11:13:01 2015 Launch command line: ffmpeg -f v4l2 -input_format h264 -vcodec h264 -i /dev/video0 -video_size 1280x720 -r 30 http://127.0.0.1:8080/feedlinks.ffm
Tue Jul 28 11:13:01 2015 Launch command line: ffmpeg -f v4l2 -input_format h264 -vcodec h264 -i /dev/video1 -video_size 1280x720 -r 30 http://127.0.0.1:8080/feedrechts.ffm
Tue Jul 28 11:13:01 2015 Launch command line: ffmpeg -i http://localhost:8080/testlinks.mpg -i http://localhost:8080/testrechts.mpg -filter_complex nullsrc=size=2560x720 [base]; [0:v] setpts=PTS-STARTPTS [left]; [1:v] setpts=PTS-STARTPTS [right]; [base][left] overlay=shortest=1 [tmp1]; [tmp1][right] overlay=shortest=1:x=1280 -c:v libx264 -preset ultrafast -f mpegts http://127.0.0.1:8080/feedmosaic.ffm
Tue Jul 28 11:13:02 2015 127.0.0.1 - - [GET] "/feedlinks.ffm HTTP/1.1" 200 4175
Tue Jul 28 11:13:02 2015 127.0.0.1 - - [GET] "/feedrechts.ffm HTTP/1.1" 200 4175
Tue Jul 28 11:13:18 2015 127.0.0.1 - - [POST] "/feedmosaic.ffm HTTP/1.1" 200 4096
Tue Jul 28 11:13:18 2015 127.0.0.1 - - [GET] "/testlinks.mpg HTTP/1.1" 200 2130291
Tue Jul 28 11:13:18 2015 127.0.0.1 - - [GET] "/testrechts.mpg HTTP/1.1" 200 1244999
feedmosaic.ffm: Pid 18775 exited with status 256 after 17 seconds

Thanks for your help!
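A side note on the exit code, since the post asks what 256 means (my observation, not from the thread): ffserver prints the raw wait() status of the launched process, and 256 corresponds to ffmpeg exiting with code 1, a generic failure. Re-running the logged Launch command by hand shows the actual error message on stderr, for example:

# Manual run of one of the Launch lines logged above (nothing new added here):
ffmpeg -f v4l2 -input_format h264 -vcodec h264 -i /dev/video0 -video_size 1280x720 -r 30 \
       http://127.0.0.1:8080/feedlinks.ffm
echo $?   # a value of 1 here is what ffserver reports as "status 256" (1 << 8)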
-
Ffmpeg: how to keep orientation when trimming video file?
7 March 2013, by Alex

I have a video file which I capture from my Android program and save as an mp4 video.

In my Android program I use the setOrientationHint(90) call to indicate to a video player that my camera has been rotated 90 degrees.

I'm not really sure what setOrientationHint(90) does, but with it the file is properly oriented when it plays in the video player. Without it, a video player orients my file incorrectly.

Now I trim this file using an FFMPEG command (here in.mp4, out.mp4, 1000 and 2000 are just examples):

ffmpeg -i in.mp4 -ss 1000 -t 2000 -vcodec copy -acodec

However, the resulting file is again wrongly oriented in the player.
I wonder what I should do to keep the orientation hint in the trimmed video file?
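For what it's worth, a sketch of one commonly suggested workaround (not from the post, and assuming the rotation is stored as the mp4 "rotate" metadata tag, which older ffmpeg versions drop on stream copy): re-attach the tag explicitly when trimming.

# Sketch: trim with stream copy and explicitly re-set the video stream's rotate tag.
# The 90 must match the original setOrientationHint() value; out.mp4 is assumed.
ffmpeg -i in.mp4 -ss 1000 -t 2000 -c:v copy -c:a copy -metadata:s:v:0 rotate=90 out.mp4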