
Other articles (35)
-
HTML5 audio and video support
10 April 2011. MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used instead.
The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match the chosen theme.
These technologies make it possible to deliver video and sound both to conventional computers (...) -
Configurable image and logo sizes
9 February 2011, by: In many places on the site, logos and images are resized to fit the slots defined by the themes. Since all of these sizes can vary from one theme to another, they can be defined directly in the theme, sparing the user from having to configure them manually after changing the site's appearance.
These image sizes are also available in the MediaSPIP Core configuration: the maximum size of the site logo in pixels, (...) -
Farm management
2 March 2010, by: The farm as a whole is managed by "super admins".
Certain settings can be adjusted to regulate the needs of the different channels.
To begin with, it uses the "Gestion de mutualisation" plugin.
On other sites (5469)
-
iOS crash when converting yuvj420p to CVPixelBufferRef using ffmpeg
20 March 2020, by jansma: I need to get an RTSP stream from an IP camera and convert the AVFrame data to a CVPixelBufferRef in order to send the data to another SDK.
First I use avcodec_decode_video2 to decode the video data. After decoding the video, I convert the data to a CVPixelBufferRef. This is my code:
size_t srcPlaneSize = pVideoFrame_->linesize[1]*pVideoFrame_->height;
size_t dstPlaneSize = srcPlaneSize * 2;
uint8_t *dstPlane = malloc(dstPlaneSize);
void *planeBaseAddress[2] = { pVideoFrame_->data[0], dstPlane };

// This loop is very naive and assumes that the line sizes are the same.
// It also copies padding bytes.
assert(pVideoFrame_->linesize[1] == pVideoFrame_->linesize[2]);
for (size_t i = 0; i < srcPlaneSize; i++) {
    // These might be the wrong way round.
    dstPlane[2*i  ] = pVideoFrame_->data[2][i];
    dstPlane[2*i+1] = pVideoFrame_->data[1][i];
}

// This assumes the width and height are even (it's 420 after all).
assert(!(pVideoFrame_->width % 2) && !(pVideoFrame_->height % 2));
size_t planeWidth[2]  = { pVideoFrame_->width,  pVideoFrame_->width/2 };
size_t planeHeight[2] = { pVideoFrame_->height, pVideoFrame_->height/2 };
// I'm not sure where you'd get this.
size_t planeBytesPerRow[2] = { pVideoFrame_->linesize[0], pVideoFrame_->linesize[1]*2 };

int ret = CVPixelBufferCreateWithPlanarBytes(
    NULL,
    pVideoFrame_->width,
    pVideoFrame_->height,
    kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
    NULL,
    0,
    2,
    planeBaseAddress,
    planeWidth,
    planeHeight,
    planeBytesPerRow,
    NULL,
    NULL,
    NULL,
    &pixelBuf);

After running the app, it crashes on
dstPlane[2*i  ] = pVideoFrame_->data[2][i];
How can I resolve this?
This is the console output in Xcode:
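An editorial note, not part of the original post: for 4:2:0 data the chroma planes are only height/2 rows tall, so srcPlaneSize computed as linesize[1]*height is twice the real chroma plane size, which by itself can explain the out-of-bounds read in the interleaving loop. It can also help to confirm the stream's pixel format and color range with ffprobe; yuvj420p is full range, which would point to kCVPixelFormatType_420YpCbCr8BiPlanarFullRange rather than the VideoRange variant. A sketch, where the RTSP URL is a placeholder:

```shell
# Sketch: inspect the camera stream's pixel format and color range.
# rtsp://camera.local/stream is a placeholder; substitute the real URL.
ffprobe -v error -select_streams v:0 \
  -show_entries stream=pix_fmt,color_range \
  -of default=noprint_wrappers=1 \
  rtsp://camera.local/stream
```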
All info found
Setting avg frame rate based on r frame rate
stream 0: start_time: 0.080 duration: -102481911520608.625
format: start_time: 0.080 duration: -9223372036854.775 bitrate=0 kb/s
nal_unit_type: 0, nal_ref_idc: 0
nal_unit_type: 7, nal_ref_idc: 3
nal_unit_type: 0, nal_ref_idc: 0
nal_unit_type: 8, nal_ref_idc: 3
Ignoring NAL type 0 in extradata
Ignoring NAL type 0 in extradata
nal_unit_type: 7, nal_ref_idc: 3
nal_unit_type: 8, nal_ref_idc: 3
nal_unit_type: 6, nal_ref_idc: 0
nal_unit_type: 5, nal_ref_idc: 3
unknown SEI type 229
Reinit context to 800x608, pix_fmt: yuvj420p
(lldb) -
FFmpeg: Too many packets buffered for output stream 0:1
3 February 2024, by Der Micha: I want to add a logo to a video using FFmpeg. I encountered this error: "Too many packets buffered for output stream 0:1.", "Conversion failed.". I tried with different pictures and videos and always got the same error. Google didn't help much either. I found a thread



C:\Users\Anwender\OneDrive - IT-Center Engels\_Programmierung & Scripting\delphi\_ITCE\Tempater\Win32\Debug\ffmpeg\bin>ffmpeg ^
Mehr? -i C:\Users\Anwender\Videos\CutErgebnis.mp4 ^
Mehr? -i C:\Users\Anwender\Pictures\pic.png ^
Mehr? -filter_complex "overlay=0:0" ^
Mehr? C:\Users\Anwender\Videos\Logo.mp4
ffmpeg version N-90054-g474194a8d0 Copyright (c) 2000-2018 the FFmpeg developers
 built with gcc 7.2.0 (GCC)
 configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-bzlib --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libmfx --enable-amf --enable-cuda --enable-cuvid --enable-d3d11va --enable-nvenc --enable-dxva2 --enable-avisynth
 libavutil 56. 7.101 / 56. 7.101
 libavcodec 58. 11.101 / 58. 11.101
 libavformat 58. 9.100 / 58. 9.100
 libavdevice 58. 1.100 / 58. 1.100
 libavfilter 7. 12.100 / 7. 12.100
 libswscale 5. 0.101 / 5. 0.101
 libswresample 3. 0.101 / 3. 0.101
 libpostproc 55. 0.100 / 55. 0.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'C:\Users\Anwender\Videos\CutErgebnis.mp4':
 Metadata:
 major_brand : isom
 minor_version : 512
 compatible_brands: isomiso2avc1mp41
 encoder : Lavf58.9.100
 comment : Captured with Snagit 13.1.3.7993
 : Microphone - Mikrofon (Steam Streaming Microphone)
 :
 Duration: 00:01:51.99, start: 0.015011, bitrate: 148 kb/s
 Stream #0:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p, 1918x718 [SAR 1:1 DAR 959:359], 149 kb/s, 14.79 fps, 15 tbr, 15k tbn, 30 tbc (default)
 Metadata:
 handler_name : VideoHandler
 Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 1 kb/s (default)
 Metadata:
 handler_name : SoundHandler
Input #1, png_pipe, from 'C:\Users\Anwender\Pictures\pic.png':
 Duration: N/A, bitrate: N/A
 Stream #1:0: Video: png, pal8(pc), 400x400, 25 tbr, 25 tbn, 25 tbc
File 'C:\Users\Anwender\Videos\Logo.mp4' already exists. Overwrite ? [y/N] y
Stream mapping:
 Stream #0:0 (h264) -> overlay:main (graph 0)
 Stream #1:0 (png) -> overlay:overlay (graph 0)
 overlay (graph 0) -> Stream #0:0 (libx264)
 Stream #0:1 -> #0:1 (aac (native) -> aac (native))
Press [q] to stop, [?] for help
Too many packets buffered for output stream 0:1.
[aac @ 000001f4c5257a40] Qavg: 65305.387
[aac @ 000001f4c5257a40] 2 frames left in the queue on closing
Conversion failed!




My FFmpeg version:
ffmpeg-20180322-ed0e0fe-win64-static



Details about the video:



C:\Users\Anwender\OneDrive - IT-Center Engels\_Programmierung & Scripting\delphi\_ITCE\Tempater\Win32\Debug\ffmpeg-20180322-ed0e0fe-win64-static\bin>ffprobe.exe C:\Users\Anwender\Videos\CutErgebnis.mp4
ffprobe version N-90399-ged0e0fe102 Copyright (c) 2007-2018 the FFmpeg developers
 built with gcc 7.3.0 (GCC)
 configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-bzlib --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libmfx --enable-amf --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth
 libavutil 56. 11.100 / 56. 11.100
 libavcodec 58. 15.100 / 58. 15.100
 libavformat 58. 10.100 / 58. 10.100
 libavdevice 58. 2.100 / 58. 2.100
 libavfilter 7. 13.100 / 7. 13.100
 libswscale 5. 0.102 / 5. 0.102
 libswresample 3. 0.101 / 3. 0.101
 libpostproc 55. 0.100 / 55. 0.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'C:\Users\Anwender\Videos\CutErgebnis.mp4':
 Metadata:
 major_brand : isom
 minor_version : 512
 compatible_brands: isomiso2avc1mp41
 encoder : Lavf58.9.100
 comment : Captured with Snagit 13.1.3.7993
 : Microphone - Mikrofon (Steam Streaming Microphone)
 :
 Duration: 00:01:51.99, start: 0.015011, bitrate: 148 kb/s
 Stream #0:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p, 1918x718 [SAR 1:1 DAR 959:359], 149 kb/s, 14.79 fps, 15 tbr, 15k tbn, 30 tbc (default)
 Metadata:
 handler_name : VideoHandler
 Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 1 kb/s (default)
 Metadata:
 handler_name : SoundHandler
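
An editorial note on a common workaround, not taken from the original post: this error typically appears when packets for one output stream (here the audio) pile up while the filter graph for the other stream is still initializing. Enlarging the muxing queue with ffmpeg's -max_muxing_queue_size output option is the usual first thing to try. A sketch, with the paths shortened and 1024 as an arbitrary starting value:

```shell
# Sketch: the same overlay command with a larger per-stream muxing queue.
ffmpeg -i CutErgebnis.mp4 -i pic.png \
  -filter_complex "overlay=0:0" \
  -max_muxing_queue_size 1024 \
  Logo.mp4
```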



-
How to stitch (concat) two transport streams with different resolutions and I-frame slice formats without losing resolution and slice information
2 October 2019, by AnkurTank: I have been trying to test a use case with a stream captured from a multimedia device, and that didn't work. I have since been trying to create this specific transport stream for about two days without success, so I am asking for help.
I need to create a transport stream with two different resolutions and two different slicing formats.
I divided the task into the following steps; for the last two steps I need help.
Step 1: Download a sample video with resolution 1920x1080.
I downloaded the Big Buck Bunny mp4.
Step 2: Create a transport stream with the following properties.
Resolution: 1920x720, H264 I-frame slices per frame: 1
I used the following ffmpeg commands to do that:
#Rename file to input.mp4
$ mv bbb_sunflower_1080p_30fps_normal.mp4 input.mp4
#Extract transport stream
$ ffmpeg -i input.mp4 -c copy first.ts
first.ts has a resolution of 1980x720 and one H264 I slice per frame.
Step 3: Create another transport stream with a smaller resolution using the following commands:
#Get mp4 with lower resolution.
$ ffmpeg -i input.mp4 -s 640x480 temp.mp4
#Extract transport stream from mp4
$ ffmpeg -i temp.mp4 -c copy low_r.ts
Step 4: Edit (and re-encode?) low_r.ts to have two H264 I-frame slices.
I used the following command to achieve it:
$ x264 --slices 4 low_r.ts -o second.ts
However, when I play second.ts in VLC using the following command, it doesn't play:
$ vlc ./second.ts
And when I analyze the transport stream with the Elecard StreamEye software, I see that it has 4 H264 I slices at only two points; apart from that there are a lot of H264 P slices and H264 B slices.
I need help here figuring out why second.ts doesn't play and why the slicing is not correct.
Step 5: Combine both transport streams without losing the resolution and slicing information.
I don't know the command for this; I need help here.
I tried ffmpeg, but that combines the two streams with different resolutions into one file with a single resolution.
Any suggestions/pointers would help me proceed. Also let me know if any of the above steps are wrong.
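
An editorial sketch for steps 4 and 5, hedged: it assumes an x264 build with lavf input support (needed to read a .ts file directly). Two things stand out. First, x264 writes a raw Annex-B H.264 elementary stream unless the output extension selects one of its built-in muxers, so the "second.ts" above is not actually a transport stream, which would explain why VLC refuses it. Second, MPEG-TS segments each carry their own SPS/PPS, so two transport streams can be concatenated byte-for-byte even when their resolutions differ:

```shell
# Step 4 (sketch): write a raw .264 file, then remux it into a real
# transport stream with ffmpeg.
x264 --slices 4 low_r.ts -o second.264
# A raw H.264 stream carries no timing; -framerate 30 is a placeholder,
# match it to the source frame rate.
ffmpeg -framerate 30 -i second.264 -c copy second_fixed.ts

# Step 5 (sketch): byte-level concatenation; the decoder reinitializes at
# the new SPS/PPS, so each segment keeps its own resolution and slicing.
cat first.ts second_fixed.ts > combined.ts
```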