
Other articles (40)
-
Submit bugs and patches
13 April 2011
Unfortunately, software is never perfect.
If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information:
- the browser you are using, including the exact version
- as precise an explanation of the problem as possible
- if possible, the steps taken that lead to the problem
- a link to the site / page in question
If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
You may also (...) -
Accepted formats
28 January 2010
The following commands give information about the formats and codecs handled by the local ffmpeg installation:
ffmpeg -codecs
ffmpeg -formats
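For example, to check whether a specific codec is available in the local build (a quick sketch, assuming a Unix shell with grep available):
ffmpeg -codecs | grep h264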
Accepted input video formats
This list is not exhaustive; it highlights the main formats in use:
- h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10
- m4v: raw MPEG-4 video format
- flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263
- Theora
- wmv:
Possible output video formats
To begin with, we (...)
-
MediaSPIP initialization (preconfiguration)
20 February 2010
When MediaSPIP is installed, it is preconfigured for the most common uses.
This preconfiguration is performed by a plugin that is enabled by default and cannot be disabled, called MediaSPIP Init.
This plugin properly preconfigures each MediaSPIP instance. It must therefore be placed in the plugins-dist/ directory of the site or farm so that it is installed by default, before the site can be used.
First of all, it enables or disables SPIP options that (...)
On other sites (9254)
-
PTS not set after decoding H264/RTSP stream
29 September 2017, by Sergio Basurco
Question: What does the Libav/FFmpeg decoding pipeline need in order to produce valid presentation timestamps (PTS) in the decoded AVFrames?
I’m decoding an H264 stream received via RTSP. I use Live555 to parse the H264 and feed the stream to my LibAV decoder. Decoding and displaying work fine, except that I’m not using the timestamp info and get some stuttering.
After getting a frame with avcodec_decode_video2, the presentation timestamp (PTS) is not set. I need the PTS in order to know how long each frame needs to be displayed, and avoid any stuttering.
Notes on my pipeline
- I get the SPS/PPS information via Live555 and copy these values to my AVCodecContext->extradata.
- I also send the SPS and PPS to my decoder as NAL units, with the 0,0,0,1 start code prepended.
- Live555 provides presentation timestamps for each packet; these are in most cases not monotonically increasing. The stream contains B-frames.
- My AVCodecContext->time_base is not valid; the value is 0/2.
Unclear:
- Where exactly should I set the NAL PTS coming from my H264 sink (Live555)? As the AVPacket->dts, pts, neither, or both?
- Why is my time_base value not valid? Where does this information come from?
- According to the RTP payload spec, it seems that:
The RTP timestamp is set to the sampling timestamp of the content. A 90 kHz clock rate MUST be used.
Does this mean that I must always assume a 1/90000 time base for the decoder? What if some other value is specified in the SPS?
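One way to cross-check this is to ask ffprobe what libavformat itself derives for the same stream; its RTSP demuxer normally exposes the 90 kHz RTP clock as the stream time_base. A minimal sketch, with a placeholder URL:
ffprobe -v error -select_streams v:0 -show_entries stream=time_base,avg_frame_rate "rtsp://user:pass@camera/axis-media/media.amp"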
-
How to add a watermark while recording video using ffmpeg
7 December 2017, by Hitesh Gehlot
I am recording square video using the following ffmpeg library. The square video records successfully with audio, but when I add a watermark to the video my application crashes with an error. Please help me add a watermark while recording video.
Gradle dependency
compile(group: 'org.bytedeco', name: 'javacv-platform', version: '1.3') {
    exclude group: 'org.bytedeco.javacpp-presets'
}
compile group: 'org.bytedeco.javacpp-presets', name: 'ffmpeg', version: '3.2.1-1.3'
compile group: 'org.bytedeco.javacpp-presets', name: 'ffmpeg', version: '3.2.1-1.3', classifier: 'android-arm'

I am using the following code to add a watermark while recording video:
// add watermark
String imgPath = audioPath + "jodelicon.png";
String watermark = "movie=" + imgPath + " [logo];[in][logo]overlay=0:0:1:format=rgb [out]";
filters.add(watermark);

Complete code:
class VideoRecordThread extends Thread {
private boolean isRunning;
@Override
public void run() {
List<String> filters = new ArrayList<>();
// Transpose
String transpose = null;
android.hardware.Camera.CameraInfo info =
new android.hardware.Camera.CameraInfo();
android.hardware.Camera.getCameraInfo(mCameraId, info);
if (info.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
switch (info.orientation) {
case 270:
// transpose = "transpose=clock_flip"; // Same as preview display
transpose = "transpose=cclock"; // Mirrored horizontally as preview display
break;
case 90:
// transpose = "transpose=cclock_flip"; // Same as preview display
transpose = "transpose=clock"; // Mirrored horizontally as preview display
break;
}
} else {
switch (info.orientation) {
case 270:
transpose = "transpose=cclock";
break;
case 90:
transpose = "transpose=clock";
break;
}
}
if (transpose != null) {
filters.add(transpose);
}
// Crop (only vertically)
int width = previewHeight;
int height = width * videoHeight / videoWidth;
String crop = String.format("crop=%d:%d:%d:%d",
width, height,
(previewHeight - width) / 2, (previewWidth - height) / 2);
filters.add(crop);
// Scale (to designated size)
String scale = String.format("scale=%d:%d", videoHeight, videoWidth);
filters.add(scale);
// add water mark
String imgPath = audioPath + "jodelicon.png";
String watermark = "movie=" + imgPath + " [logo];[in][logo]overlay=0:0:1:format=rgb [out]";
filters.add(watermark);
FFmpegFrameFilter frameFilter = new FFmpegFrameFilter(TextUtils.join(",", filters),
previewWidth, previewHeight);
frameFilter.setPixelFormat(avutil.AV_PIX_FMT_NV21);
frameFilter.setFrameRate(frameRate);
try {
frameFilter.start();
} catch (FrameFilter.Exception e) {
e.printStackTrace();
}
isRunning = true;
FrameToRecord recordedFrame;
while (isRunning || !mFrameToRecordQueue.isEmpty()) {
try {
recordedFrame = mFrameToRecordQueue.take();
} catch (InterruptedException ie) {
ie.printStackTrace();
try {
frameFilter.stop();
} catch (FrameFilter.Exception e) {
e.printStackTrace();
}
break;
}
if (mFrameRecorder != null) {
long timestamp = recordedFrame.getTimestamp();
if (timestamp > mFrameRecorder.getTimestamp()) {
mFrameRecorder.setTimestamp(timestamp);
}
long startTime = System.currentTimeMillis();
// Frame filteredFrame = recordedFrame.getFrame();
Frame filteredFrame = null;
try {
frameFilter.push(recordedFrame.getFrame());
filteredFrame = frameFilter.pull();
} catch (FrameFilter.Exception e) {
e.printStackTrace();
}
try {
mFrameRecorder.record(filteredFrame);
} catch (FFmpegFrameRecorder.Exception e) {
e.printStackTrace();
}
long endTime = System.currentTimeMillis();
long processTime = endTime - startTime;
mTotalProcessFrameTime += processTime;
Log.d(LOG_TAG, "This frame process time: " + processTime + "ms");
long totalAvg = mTotalProcessFrameTime / ++mFrameRecordedCount;
Log.d(LOG_TAG, "Avg frame process time: " + totalAvg + "ms");
}
Log.d(LOG_TAG, mFrameRecordedCount + " / " + mFrameToRecordCount);
mRecycledFrameQueue.offer(recordedFrame);
}
}
public void stopRunning() {
this.isRunning = false;
if (getState() == WAITING) {
interrupt();
}
}
public boolean isRunning() {
return isRunning;
}
}
Error in logcat:
android A/libc: Fatal signal 11 (SIGSEGV), code 1, fault addr 0x18 in tid 320 (Thread-24542)
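To rule out the filter string itself, one option (an editor’s sketch with hypothetical file names, not from the question) is to test an equivalent overlay graph with the ffmpeg command line first:
ffmpeg -i input.mp4 -i jodelicon.png -filter_complex "overlay=0:0" output.mp4
If that works but the in-app movie= variant crashes, the image path passed to movie= and the extra positional 1 in overlay=0:0:1:format=rgb are worth checking next.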
-
Syncing 3 RTSP video streams in ffmpeg
26 September 2017, by Damon Maria
I’m using an AXIS Q3708 camera. It internally has 3 sensors to create a 180º view. Each sensor puts out its own RTSP stream. To put together the full 180º view I need to pull an image from each stream and place the images side by side. Obviously it’s important that the 3 streams be synchronized, so that the 3 images are taken at the same ’wall clock’ time. For this reason I want to use ffmpeg, because it should be a champ at this.
I intended to use the hstack filter to combine the 3 images. However, it’s causing me a lot of grief and errors.
What I’ve tried:
1. Hstack the RTSP streams:
ffmpeg -i "rtsp://root:root@192.168.13.104/axis-media/media.amp?camera=1" -i "rtsp://root:root@192.168.13.104/axis-media/media.amp?camera=2" -i "rtsp://root:root@192.168.13.104/axis-media/media.amp?camera=3" -filter_complex "[0:v][1:v][2:v]hstack=inputs=3[v]" -map "[v]" out.mp4
I get lots of RTSP dropped-packet and decoding errors, which is strange given this is an i7-7700K at 4.2 GHz with an NVIDIA GTX 1080 Ti and 32 GB of RAM, and the camera is on a local gigabit network:
[rtsp @ 0xd4eca42a00] max delay reached. need to consume packetA dup=0 drop=136 speed=1.16x
[rtsp @ 0xd4eca42a00] RTP: missed 5 packets
[rtsp @ 0xd4eca42a00] max delay reached. need to consume packetA dup=0 drop=137 speed=1.15x
[rtsp @ 0xd4eca42a00] RTP: missed 4 packets
[h264 @ 0xd4ecbb3520] error while decoding MB 14 15, bytestream -21
[h264 @ 0xd4ecbb3520] concealing 1185 DC, 1185 AC, 1185 MV errors in P frame

2. Using
ffmpeg -i %s -c:v copy -map 0:0 out.mp4
to save each stream to a file and then run the above hstack command with the 3 files rather than the 3 RTSP streams. First off, there are no dropped packets saving the files, and the hstack runs at speed=25x, so I don’t know why the operation in 1 had so many errors. But in the resulting video, some parts ’pause’ between frames, as though the same image was used across 2 frames for some of the hstack inputs but not the others. Also, the ’scene’ at a set distance into the video lags behind the input videos, which is what would happen if frames were being duplicated.

3. If I use the RTSP streams as the input, and for the output specify
-f null - (the null muxer), then it reports a lot of these errors:
[null @ 0x76afb1a060] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 1 >= 1
Last message repeated 1 times
[null @ 0x76afb1a060] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 2 >= 2
[null @ 0x76afb1a060] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 3 >= 3
[null @ 0x76afb1a060] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 4 >= 4
Last message repeated 1 times
[null @ 0x76afb1a060] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 5 >= 5
Last message repeated 1 times

Which again sounds like frames are being duplicated.
4. If I add -vsync cfr, then the null muxer no longer reports non monotonically increasing dts, and dropped-packet / decoding errors are reduced (but still there). Does this show that timing info from the RTSP streams is ’tripping up’ ffmpeg? I presume it’s not a solution though, because it essentially wipes out and replaces the timing information ffmpeg would need to use to sync.

5. Saving a single RTSP stream (re-encoding, not using the
copy codec or the null muxer) logs a lot of warnings like:
Past duration 0.999992 too large
Last message repeated 7 times
Past duration 0.999947 too large
Past duration 0.999992 too large

6. I first tried performing this in code (using PyAV), but struck problems where pushing a frame from each container into the 3 hstack inputs would cause hstack to output multiple frames (when it should output only one). Again, this points to hstack duplicating frames.
7. I have used Wireshark to sniff the RTCP/RTP traffic, and the RTCP Sender Reports have correct NTP timestamps in them, matched to the timestamps in the RTP streams.
8. Using ffprobe to show the frames of an RTSP stream (example below), I would have expected to see real (NTP-based) timestamps, given they exist in the RTCP packets. I’m not sure what the correct behaviour is for ffprobe. But it does show that most frame timestamps are not exactly 0.25s apart (the camera is running at 4 FPS), which might explain -vsync cfr ’fixing’ some issues and the Past duration 0.999992 style errors:
pkt_pts=1012502
pkt_pts_time=0:00:11.250022
pkt_dts=1012502
pkt_dts_time=0:00:11.250022
best_effort_timestamp=1012502
best_effort_timestamp_time=0:00:11.250022
pkt_duration=N/A
pkt_duration_time=N/A
pkt_pos=N/A

I posted this as a possible hstack bug on the ffmpeg bug tracker, but that discussion fizzled out.
So, question: how do I sync 3 RTSP video streams through hstack in ffmpeg?
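One direction worth sketching (an editor’s suggestion, not from the original post) is to normalize each input’s timestamps before stacking, so that hstack sees aligned, monotonically increasing PTS:
ffmpeg -i "rtsp://root:root@192.168.13.104/axis-media/media.amp?camera=1" -i "rtsp://root:root@192.168.13.104/axis-media/media.amp?camera=2" -i "rtsp://root:root@192.168.13.104/axis-media/media.amp?camera=3" -filter_complex "[0:v]setpts=PTS-STARTPTS[v0];[1:v]setpts=PTS-STARTPTS[v1];[2:v]setpts=PTS-STARTPTS[v2];[v0][v1][v2]hstack=inputs=3[v]" -map "[v]" out.mp4
Note that setpts=PTS-STARTPTS only removes each stream’s initial offset; it aligns the three streams to the same wall-clock instant only if capture of all three starts together.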