
Media (2)
-
Rennes Emotion Map 2010-11
19 October 2011
Updated: July 2013
Language: French
Type: Text
-
Carte de Schillerkiez
13 May 2011
Updated: September 2011
Language: English
Type: Text
Other articles (77)
-
Websites made with MediaSPIP
2 May 2011
This page lists some websites based on MediaSPIP.
-
Creating farms of unique websites
13 April 2011
MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things): implementation costs to be shared between several projects or individuals, rapid deployment of multiple unique sites, and creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...) -
Customising categories
21 June 2013
Category creation form
For those who know SPIP well, a category can be thought of as a rubrique (SPIP section).
For a document of type category, the fields offered by default are: Text
This form can be modified under:
Administration > Configuration des masques de formulaire.
For a document of type media, the fields not displayed by default are: Descriptif rapide (short description)
It is also in this configuration section that you can specify the (...)
On other sites (11491)
-
Dropped frames in ffmpeg when adding an overlay
31 January 2016, by Juanan
I'm trying to add a white overlay box, using ffmpeg, to an mp4 file downloaded from YouTube. I know the syntax and it is working OK with small videos, but I'm struggling to get it to work on a longer one. This is the exact command:
ffmpeg -i intro.mp4 -strict -2 -vf "color=white:200x50 [over]; [in][over] overlay=250:320:enable=between(t\,0\,52) [out]" out.mp4
Initially, it works like a charm:
$ ffmpeg -i intro.mp4 -preset slow -strict -2 -vf "color=white:200x50 [over]; [in][over] overlay=250:320:enable=between(t\,0\,52) [out]" -t 60 out.mp4
ffmpeg version 2.5.3 Copyright (c) 2000-2015 the FFmpeg developers
built on Mar 26 2015 23:00:53 with Apple LLVM version 5.1 (clang-503.0.40) (based on LLVM 3.4svn)
configuration: --prefix=/opt/local --enable-swscale --enable-avfilter --enable-avresample --enable-libmp3lame --enable-libvorbis --enable-libopus --enable-libtheora --enable-libschroedinger --enable-libopenjpeg --enable-libmodplug --enable-libvpx --enable-libspeex --enable-libass --enable-libbluray --enable-lzma --enable-gnutls --enable-fontconfig --enable-libfreetype --enable-libfribidi --disable-indev=jack --disable-outdev=xv --mandir=/opt/local/share/man --enable-shared --enable-pthreads --cc=/usr/bin/clang --enable-vda --arch=x86_64 --enable-yasm --enable-gpl --enable-postproc --enable-libx264 --enable-libxvid
libavutil 54. 15.100 / 54. 15.100
libavcodec 56. 13.100 / 56. 13.100
libavformat 56. 15.102 / 56. 15.102
libavdevice 56. 3.100 / 56. 3.100
libavfilter 5. 2.103 / 5. 2.103
libavresample 2. 1. 0 / 2. 1. 0
libswscale 3. 1.101 / 3. 1.101
libswresample 1. 1.100 / 1. 1.100
libpostproc 53. 3.100 / 53. 3.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'intro.mp4':
Metadata:
major_brand : mp42
minor_version : 0
compatible_brands: isommp42
creation_time : 2014-01-30 16:48:50
Duration: 00:13:30.70, start: 0.000000, bitrate: 692 kb/s
Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1280x720, 497 kb/s, 25 fps, 25 tbr, 50 tbn, 50 tbc (default)
Metadata:
handler_name : VideoHandler
Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 192 kb/s (default)
Metadata:
creation_time : 2014-01-30 16:48:54
handler_name : IsoMedia File Produced by Google, 5-11-2011
File 'out.mp4' already exists. Overwrite ? [y/N] y
[libx264 @ 0x7fa8cb82fa00] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.1 Cache64
[libx264 @ 0x7fa8cb82fa00] profile High, level 3.1
[libx264 @ 0x7fa8cb82fa00] 264 - core 142 - H.264/MPEG-4 AVC codec - Copyleft 2003-2014 - http://www.videolan.org/x264.html - options: cabac=1 ref=5 deblock=1:0:0 analyse=0x3:0x113 me=umh subme=8 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=3 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=2 b_bias=0 direct=3 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=50 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to 'out.mp4':
Metadata:
major_brand : mp42
minor_version : 0
compatible_brands: isommp42
encoder : Lavf56.15.102
Stream #0:0(und): Video: h264 (libx264) ([33][0][0][0] / 0x0021), yuv420p, 1280x720, q=-1--1, 25 fps, 12800 tbn, 25 tbc (default)
Metadata:
handler_name : VideoHandler
encoder : Lavc56.13.100 libx264
Stream #0:1(und): Audio: aac ([64][0][0][0] / 0x0040), 44100 Hz, stereo, fltp, 128 kb/s (default)
Metadata:
creation_time : 2014-01-30 16:48:54
handler_name : IsoMedia File Produced by Google, 5-11-2011
encoder : Lavc56.13.100 aac
Stream mapping:
Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
Stream #0:1 -> #0:1 (aac (native) -> aac (native))
Press [q] to stop, [?] for help
frame= 1500 fps= 47 q=-1.0 Lsize= 2518kB time=00:01:00.00 bitrate= 343.8kbits/s
But after some time, that command starts reporting dropped frames:
frame=20269 fps= 21 q=-1.0 Lsize= 23283kB time=00:13:30.70 bitrate= 235.3kbits/s dup=0 drop=548431
and it never ends. Any clue?
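One untested variant that may be worth trying (a guess, not something from the original post): give the generated color source an explicit frame rate matching the input and a finite duration, so it stops producing frames after the overlay window, and pin the output rate:
ffmpeg -i intro.mp4 -strict -2 -vf "color=white:s=200x50:r=25:d=52 [over]; [in][over] overlay=250:320:enable=between(t\,0\,52) [out]" -r 25 out.mp4
The very large drop counter suggests the two filter inputs disagree on timestamps or frame rates rather than a problem in the encoder itself, but that is only an assumption.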
-
Very low latency streaming with ffmpeg using a webcam
22 March 2017, by userDtrm
I'm trying to configure ffmpeg to do real-time video streaming using a webcam. The ffmpeg encoder command I use is as follows.
ffmpeg -f v4l2 -input_format yuyv422 -s 640x480 -i /dev/video0 -c:v libx264 -profile:v baseline -trellis 0 -subq 1 -level 32 -preset superfast -tune zerolatency -me_method epzs -crf 30 -threads 0 -bufsize 1 -refs 4 -coder 0 -b_strategy 0 -bf 0 -sc_threshold 0 -x264-params vbv-maxrate=2000:slice-max-size=1500:keyint=30:min-keyint=10: -pix_fmt yuv420p -an -f mpegts udp://192.168.1.8:5001
The ffplay command used to display the video feed is:
ffplay -analyzeduration 1 -fflags -nobuffer -i udp://192.168.1.8:5001
However, I'm experiencing a latency of 0.5 to 1.0 s in the video stream. Is there a way to reduce this to under 100 ms? Also, when I replace the v4l2 camera capture with a screen capture using x11grab, the stream is almost real-time and I experience no noticeable delays. Moreover, changing the encoder from x264 to mpeg2 had no effect on the latency. In addition, the statistics from ffmpeg show that the encoder is running at 30 fps, which I believe indicates that the encoding is real-time. This leaves me with only one possible reason for the delay I'm experiencing.
- Is there a significant buffering delay in v4l2 when capturing video from a webcam?
- I don't think transmission delay is a factor in this case, as I see no latency when screen capture is used under the same conditions.
- Can this latency be further reduced? Can someone think of a different encoder configuration to use instead of the one I've been using?
Any suggestions or assistance is highly appreciated.
Thank You.
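Not an answer from the thread, but for comparison, a player-side invocation worth testing (all standard ffplay options; whether it gets under 100 ms depends mostly on the camera and the network):
ffplay -fflags nobuffer -flags low_delay -probesize 32 -analyzeduration 0 -framedrop -i udp://192.168.1.8:5001
Note that the original -fflags -nobuffer (with a leading minus inside the value) clears the flag rather than setting it; -fflags nobuffer is the form that enables it.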
-
Using FFmpeg to encode and send over UDP from a webcam?
14 March, by Rendres
I'm trying to get frames from a webcam using OpenCV, encode them with FFmpeg and send them over UDP.


I previously did a similar project that, instead of sending the packets over UDP, saved them to a video file.


My code is:


#include <iostream> // needed for cout/endl below; the other standard headers from the original post were lost in extraction

extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/opt.h>
#include <libavutil/imgutils.h>
#include <libavutil/mathematics.h>
#include <libswscale/swscale.h>
#include <libswresample/swresample.h>
}

#include <opencv2/opencv.hpp>

using namespace std;
using namespace cv;

#define WIDTH 640
#define HEIGHT 480
#define CODEC_ID AV_CODEC_ID_H264
#define STREAM_PIX_FMT AV_PIX_FMT_YUV420P

static AVFrame *frame, *pFrameBGR;

int main(int argc, char **argv)
{
VideoCapture cap(0);
const char *url = "udp://127.0.0.1:8080";

AVFormatContext *formatContext;
AVStream *stream;
AVCodec *codec;
AVCodecContext *c;
AVDictionary *opts = NULL;

int ret, got_packet;

if (!cap.isOpened())
{
 return -1;
}

av_log_set_level(AV_LOG_TRACE);

av_register_all();
avformat_network_init();

avformat_alloc_output_context2(&formatContext, NULL, "h264", url);
if (!formatContext)
{
 av_log(NULL, AV_LOG_FATAL, "Could not allocate an output context for '%s'.\n", url);
}

codec = avcodec_find_encoder(CODEC_ID);
if (!codec)
{
 av_log(NULL, AV_LOG_ERROR, "Could not find encoder.\n");
}

stream = avformat_new_stream(formatContext, codec);

c = avcodec_alloc_context3(codec);

stream->id = formatContext->nb_streams - 1;
stream->time_base = (AVRational){1, 25};

c->codec_id = CODEC_ID;
c->bit_rate = 400000;
c->width = WIDTH;
c->height = HEIGHT;
c->time_base = stream->time_base;
c->gop_size = 12;
c->pix_fmt = STREAM_PIX_FMT;

if (formatContext->flags & AVFMT_GLOBALHEADER)
 c->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

av_dict_set(&opts, "preset", "fast", 0);

av_dict_set(&opts, "tune", "zerolatency", 0);

ret = avcodec_open2(c, codec, NULL);
if (ret < 0)
{
 av_log(NULL, AV_LOG_ERROR, "Could not open video codec.\n");
}

pFrameBGR = av_frame_alloc();
if (!pFrameBGR)
{
 av_log(NULL, AV_LOG_ERROR, "Could not allocate video frame.\n");
}

frame = av_frame_alloc();
if (!frame)
{
 av_log(NULL, AV_LOG_ERROR, "Could not allocate video frame.\n");
}

frame->format = c->pix_fmt;
frame->width = c->width;
frame->height = c->height;

ret = avcodec_parameters_from_context(stream->codecpar, c);
if (ret < 0)
{
 av_log(NULL, AV_LOG_ERROR, "Could not open video codec.\n");
}

av_dump_format(formatContext, 0, url, 1);

ret = avformat_write_header(formatContext, NULL);
if (ret != 0)
{
 av_log(NULL, AV_LOG_ERROR, "Failed to connect to '%s'.\n", url);
}

Mat image(Size(HEIGHT, WIDTH), CV_8UC3);
SwsContext *swsctx = sws_getContext(WIDTH, HEIGHT, AV_PIX_FMT_BGR24, WIDTH, HEIGHT, AV_PIX_FMT_YUV420P, SWS_BILINEAR, NULL, NULL, NULL);
int frame_pts = 0;

while (1)
{
 cap >> image;

 int numBytesYUV = av_image_get_buffer_size(STREAM_PIX_FMT, WIDTH, HEIGHT, 1);
 uint8_t *bufferYUV = (uint8_t *)av_malloc(numBytesYUV * sizeof(uint8_t));

 avpicture_fill((AVPicture *)pFrameBGR, image.data, AV_PIX_FMT_BGR24, WIDTH, HEIGHT);
 avpicture_fill((AVPicture *)frame, bufferYUV, STREAM_PIX_FMT, WIDTH, HEIGHT);

 sws_scale(swsctx, (uint8_t const *const *)pFrameBGR->data, pFrameBGR->linesize, 0, HEIGHT, frame->data, frame->linesize);

 AVPacket pkt = {0};
 av_init_packet(&pkt);

 frame->pts = frame_pts;

 ret = avcodec_encode_video2(c, &pkt, frame, &got_packet);
 if (ret < 0)
 {
 av_log(NULL, AV_LOG_ERROR, "Error encoding frame\n");
 }

 if (got_packet)
 {
 pkt.pts = av_rescale_q_rnd(pkt.pts, c->time_base, stream->time_base, AVRounding(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
 pkt.dts = av_rescale_q_rnd(pkt.dts, c->time_base, stream->time_base, AVRounding(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
 pkt.duration = av_rescale_q(pkt.duration, c->time_base, stream->time_base);
 pkt.stream_index = stream->index;

 return av_interleaved_write_frame(formatContext, &pkt);
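 // note: this return exits main() after the first packet is written,
 // so the cout below and the cleanup after the loop never run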

 cout << "Seguro que si" << endl;
 }
 frame_pts++;
}

avcodec_free_context(&c);
av_frame_free(&frame);
avformat_free_context(formatContext);

return 0;
}



The code compiles, but it crashes with a segmentation fault in av_interleaved_write_frame(). I've tried several implementations and several codecs (in this case I'm using libopenh264, but mpeg2video gives the same segmentation fault). I also tried av_write_frame(), but it fails with the same error.
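If the crash really is inside the muxer's write path, one thing the posted code appears to skip is opening the output AVIOContext: avformat_alloc_output_context2() picks the raw h264 muxer, but formatContext->pb is never opened, so av_interleaved_write_frame() would be writing through a NULL I/O context. A minimal sketch of that step (standard libavformat calls, placed before avformat_write_header(); this is an assumption about the cause, not a confirmed fix):

if (!(formatContext->oformat->flags & AVFMT_NOFILE))
{
 // open the UDP output so formatContext->pb is valid before writing packets
 ret = avio_open(&formatContext->pb, url, AVIO_FLAG_WRITE);
 if (ret < 0)
 {
 av_log(NULL, AV_LOG_ERROR, "Could not open output URL '%s'.\n", url);
 return -1;
 }
}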


As I said before, I only want to grab frames from a webcam connected via USB, encode them to H.264 and send the packets over UDP to another PC.


My console log when I run the executable is:


[100%] Built target display
[OpenH264] this = 0x0x244b4f0, Info:CWelsH264SVCEncoder::SetOption():ENCODER_OPTION_TRACE_CALLBACK callback = 0x7f0c302a87c0.
[libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Info:CWelsH264SVCEncoder::InitEncoder(), openh264 codec version = 5a5c4f1
[libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Info:iUsageType = 0,iPicWidth= 640;iPicHeight= 480;iTargetBitrate= 400000;iMaxBitrate= 400000;iRCMode= 0;iPaddingFlag= 0;iTemporalLayerNum= 1;iSpatialLayerNum= 1;fFrameRate= 25.000000f;uiIntraPeriod= 12;eSpsPpsIdStrategy = 0;bPrefixNalAddingCtrl = 0;bSimulcastAVC=0;bEnableDenoise= 0;bEnableBackgroundDetection= 1;bEnableSceneChangeDetect = 1;bEnableAdaptiveQuant= 1;bEnableFrameSkip= 0;bEnableLongTermReference= 0;iLtrMarkPeriod= 30, bIsLosslessLink=0;iComplexityMode = 0;iNumRefFrame = 1;iEntropyCodingModeFlag = 0;uiMaxNalSize = 0;iLTRRefNum = 0;iMultipleThreadIdc = 1;iLoopFilterDisableIdc = 0 (offset(alpha/beta): 0,0;iComplexityMode = 0,iMaxQp = 51;iMinQp = 0)
[libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Info:sSpatialLayers[0]: .iVideoWidth= 640; .iVideoHeight= 480; .fFrameRate= 25.000000f; .iSpatialBitrate= 400000; .iMaxSpatialBitrate= 400000; .sSliceArgument.uiSliceMode= 1; .sSliceArgument.iSliceNum= 0; .sSliceArgument.uiSliceSizeConstraint= 1500;uiProfileIdc = 66;uiLevelIdc = 41
[libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Warning:SliceArgumentValidationFixedSliceMode(), unsupported setting with Resolution and uiSliceNum combination under RC on! So uiSliceNum is changed to 6!
[libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Info:Setting MaxSpatialBitrate (400000) the same at SpatialBitrate (400000) will make the actual bit rate lower than SpatialBitrate
[libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Warning:bEnableFrameSkip = 0,bitrate can't be controlled for RC_QUALITY_MODE,RC_BITRATE_MODE and RC_TIMESTAMP_MODE without enabling skip frame.
[libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Warning:Change QP Range from(0,51) to (12,42)
[libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Info:WELS CPU features/capacities (0x4007fe3f) detected: HTT: Y, MMX: Y, MMXEX: Y, SSE: Y, SSE2: Y, SSE3: Y, SSSE3: Y, SSE4.1: Y, SSE4.2: Y, AVX: Y, FMA: Y, X87-FPU: Y, 3DNOW: N, 3DNOWEX: N, ALTIVEC: N, CMOV: Y, MOVBE: Y, AES: Y, NUMBER OF LOGIC PROCESSORS ON CHIP: 8, CPU CACHE LINE SIZE (BYTES): 64
[libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Info:WelsInitEncoderExt() exit, overall memory usage: 4542878 bytes
[libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Info:WelsInitEncoderExt(), pCtx= 0x0x245a400.
Output #0, h264, to 'udp://192.168.100.39:8080':
Stream #0:0, 0, 1/25: Video: h264 (libopenh264), 1 reference frame, yuv420p, 640x480 (0x0), 0/1, q=2-31, 400 kb/s, 25 tbn
[libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Debug:RcUpdateIntraComplexity iFrameDqBits = 385808,iQStep= 2016,iIntraCmplx = 777788928
[libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Debug:[Rc]Layer 0: Frame timestamp = 0, Frame type = 2, encoding_qp = 30, average qp = 30, max qp = 33, min qp = 27, index = 0, iTid = 0, used = 385808, bitsperframe = 16000, target = 64000, remainingbits = -257808, skipbuffersize = 200000
[libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Debug:WelsEncoderEncodeExt() OutputInfo iLayerNum = 2,iFrameSize = 48252
[libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Debug:WelsEncoderEncodeExt() OutputInfo iLayerId = 0,iNalType = 0,iNalCount = 2, first Nal Length=18,uiSpatialId = 0,uiTemporalId = 0,iSubSeqId = 0
[libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Debug:WelsEncoderEncodeExt() OutputInfo iLayerId = 1,iNalType = 1,iNalCount = 6, first Nal Length=6057,uiSpatialId = 0,uiTemporalId = 0,iSubSeqId = 0
[libopenh264 @ 0x244aa00] 6 slices
./scriptBuild.sh: line 20: 10625 Segmentation fault (core dumped) ./display



As you can see, FFmpeg picks up libopenh264 and configures it correctly. However, no matter what I change, it always ends with the same segmentation fault...


I've used commands like this:


ffmpeg -s 640x480 -f video4linux2 -i /dev/video0 -r 30 -vcodec libopenh264 -an -f h264 udp://127.0.0.1:8080



And it works perfectly, but I need to process the frames before sending them. That's why I'm trying to use the libraries directly.
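As an aside (a suggestion, not something from the original post), processing frames in OpenCV does not strictly require using the libav* API directly: the processed BGR frames can be piped as rawvideo into the ffmpeg command line, which then encodes and sends over UDP, for example:
some_opencv_program | ffmpeg -f rawvideo -pixel_format bgr24 -video_size 640x480 -framerate 25 -i - -vcodec libopenh264 -an -f h264 udp://127.0.0.1:8080
Here some_opencv_program is a placeholder for a program that writes each processed frame's raw BGR bytes to stdout; pixel_format, video_size and framerate are the rawvideo demuxer's options.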


My FFmpeg version is:


ffmpeg version 3.3.6 Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 4.8 (Ubuntu 4.8.4-2ubuntu1~14.04.3)
configuration: --disable-yasm --enable-shared --enable-libopenh264 --cc='gcc -fPIC'
libavutil 55. 58.100 / 55. 58.100
libavcodec 57. 89.100 / 57. 89.100
libavformat 57. 71.100 / 57. 71.100
libavdevice 57. 6.100 / 57. 6.100
libavfilter 6. 82.100 / 6. 82.100
libswscale 4. 6.100 / 4. 6.100
libswresample 2. 7.100 / 2. 7.100



I tried to get more information about the error using gdb, but it didn't give me any debugging info.


How can I solve this problem?
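For what it's worth, a debug build plus a backtrace would at least show which pointer is NULL at the crash. A minimal sketch, assuming the ./display binary from the log above comes out of a CMake build (the "[100%] Built target display" line suggests it does):
cmake -DCMAKE_BUILD_TYPE=Debug . && make
gdb ./display
(gdb) run
(gdb) bt
Without the Debug build type (or -g in the compiler flags), gdb has no symbols to show, which would explain the missing debugging info.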