
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
Other articles (59)
-
Supporting all media types
13 April 2011, by
Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (HTML, CSS), LaTeX, Google Earth and (...)
-
HTML5 audio and video support
13 April 2011, by
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The player has been created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
-
From upload to the final video [standalone version]
31 January 2010, by
The path of an audio or video document through SPIPMotion is divided into three distinct stages.
Upload and retrieval of information about the source video
First of all, a SPIP article must be created and the "source" video document attached to it.
When this document is attached to the article, two actions beyond the normal behaviour are executed: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)
On other sites (8611)
-
Unity: Converting Texture2D to YUV420P using FFmpeg
23 July 2021, by strong_kobayashi
I'm trying to create a game in Unity where each frame is rendered into a texture and then put together into a video using FFmpeg. The output created by FFmpeg should eventually be sent over the network to a client UI. However, I'm struggling mainly with the part where a frame is captured and passed to an unsafe method as a byte array, where it should be processed further by FFmpeg. The wrapper I'm using is FFmpeg.AutoGen.



The render-to-texture method:



private IEnumerator CaptureFrame()
{
    // Wait until rendering of the current frame has finished.
    yield return new WaitForEndOfFrame();

    // Read the render texture back into the Texture2D.
    RenderTexture.active = rt;
    frame.ReadPixels(rect, 0, 0);
    frame.Apply();

    // Raw pixel data of the captured frame.
    bytes = frame.GetRawTextureData();

    EncodeAndWrite(bytes, bytes.Length);
}




The unsafe encoding method so far:



private unsafe void EncodeAndWrite(byte[] bytes, int size)
{
    GCHandle pinned = GCHandle.Alloc(bytes, GCHandleType.Pinned);
    IntPtr address = pinned.AddrOfPinnedObject();

    sbyte** inData = (sbyte**)address;
    fixed (int* lineSize = new int[1])
    {
        lineSize[0] = 4 * textureWidth;
        // Convert RGBA to YUV420P
        ffmpeg.sws_scale(sws, inData, lineSize, 0, codecContext->width, inputFrame->extended_data, inputFrame->linesize);
    }

    inputFrame->pts = frameCounter++;

    if (ffmpeg.avcodec_send_frame(codecContext, inputFrame) < 0)
        throw new ApplicationException("Error sending a frame for encoding!");

    pkt = new AVPacket();
    fixed (AVPacket* packet = &pkt)
        ffmpeg.av_init_packet(packet);
    pkt.data = null;
    pkt.size = 0;

    pinned.Free();
    ...
}
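
The method body is cut off above; for context, the usual counterpart of avcodec_send_frame is a loop that drains packets with avcodec_receive_packet. The following is only a rough sketch, reusing the question's pkt and codecContext fields and assuming FFmpeg.AutoGen bindings that mirror the C API (DrainPackets is a hypothetical helper name).

private unsafe void DrainPackets()
{
    fixed (AVPacket* packet = &pkt)
    {
        while (true)
        {
            int ret = ffmpeg.avcodec_receive_packet(codecContext, packet);
            if (ret == ffmpeg.AVERROR(ffmpeg.EAGAIN) || ret == ffmpeg.AVERROR_EOF)
                break;                      // encoder needs more input or is fully flushed
            if (ret < 0)
                throw new ApplicationException("Error during encoding!");

            // packet->data and packet->size now hold one encoded packet;
            // write or send it here, then release the reference.
            ffmpeg.av_packet_unref(packet);
        }
    }
}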




sws_scale takes a sbyte** as the second parameter, therefore I'm trying to convert the input byte array to sbyte** by first pinning it with GCHandle and doing an explicit type conversion afterwards. I don't know if that's the correct way, though.
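
For illustration, here is a minimal sketch of one way the pinned RGBA buffer could be handed to sws_scale as a per-plane pointer array. It reuses the fields from the question (sws, inputFrame, textureWidth, textureHeight), assumes sws was created for an RGBA-to-YUV420P conversion and that inputFrame already has its buffers allocated, and is only illustrative rather than a verified fix (parameter types differ between FFmpeg.AutoGen releases); ConvertRgbaToYuv420p is a hypothetical helper name.

private unsafe void ConvertRgbaToYuv420p(byte[] rgba)
{
    fixed (byte* rgbaPtr = rgba)                       // pin the managed RGBA buffer
    {
        // Packed RGBA is a single plane; the unused plane slots stay null.
        sbyte*[] srcData   = { (sbyte*)rgbaPtr, null, null, null };
        int[]    srcStride = { 4 * textureWidth, 0, 0, 0 };

        fixed (sbyte** srcDataPtr = srcData)
        fixed (int* srcStridePtr = srcStride)
        {
            // 'sws' is assumed to come from sws_getContext(textureWidth, textureHeight,
            // AV_PIX_FMT_RGBA, textureWidth, textureHeight, AV_PIX_FMT_YUV420P, ...).
            ffmpeg.sws_scale(sws, srcDataPtr, srcStridePtr,
                             0, textureHeight,
                             inputFrame->extended_data, inputFrame->linesize);
        }
    }
}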


Moreover, the condition if(ffmpeg.avcodec_send_frame(codecContext, inputFrame) < 0) always throws an ApplicationException, and I really don't know why this happens. codecContext and inputFrame are my AVCodecContext and AVFrame objects, respectively, and the fields are defined as follows:


codecContext



codecContext = ffmpeg.avcodec_alloc_context3(codec);
codecContext->bit_rate = 400000;
codecContext->width = textureWidth;
codecContext->height = textureHeight;

AVRational timeBase = new AVRational();
timeBase.num = 1;
timeBase.den = (int)fps;
codecContext->time_base = timeBase;
videoAVStream->time_base = timeBase;

AVRational frameRate = new AVRational();
frameRate.num = (int)fps;
frameRate.den = 1;
codecContext->framerate = frameRate;

codecContext->gop_size = 10;
codecContext->max_b_frames = 1;
codecContext->pix_fmt = AVPixelFormat.AV_PIX_FMT_YUV420P;




inputFrame



inputFrame = ffmpeg.av_frame_alloc();
inputFrame->format = (int)codecContext->pix_fmt;
inputFrame->width = textureWidth;
inputFrame->height = textureHeight;
inputFrame->linesize[0] = inputFrame->width;
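
For comparison, a common allocation pattern (shown here only as a sketch, assuming FFmpeg.AutoGen bindings that mirror the C av_frame_get_buffer API) lets FFmpeg fill in all three plane pointers and line sizes of a YUV420P frame:

inputFrame = ffmpeg.av_frame_alloc();
inputFrame->format = (int)AVPixelFormat.AV_PIX_FMT_YUV420P;
inputFrame->width  = textureWidth;
inputFrame->height = textureHeight;

// 32-byte alignment is a common choice; a negative return value means failure.
if (ffmpeg.av_frame_get_buffer(inputFrame, 32) < 0)
    throw new ApplicationException("Could not allocate video frame buffers!");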




Any help in fixing the issue would be greatly appreciated :)


-
FFMPEG - Streaming stops after a few seconds
25 March 2019, by Manjot Singh Kalsi
Hi there, dear community. This is the problem I have been dealing with for the last few days. After a thorough search in the ffmpeg community I was unable to find a solution: I am unable to stream a local flv to the Facebook rtmp server. I am running the following command to stream my local flv video to Facebook's rtmp server for live-streaming:

```
ffmpeg -re -i SampleM.flv -acodec libmp3lame -ar 44100 -b:a 128k -pix_fmt yuv420p -profile:v baseline -s 426x240 -bufsize 6000k -vb 400k -maxrate 1500k -deinterlace -vcodec libx264 -preset veryfast -g 30 -r 30 -f flv "rtmp://live-api.facebook.com:80/rtmp/my_key"
```

Even after reading the ffmpeg documentation I have been unable to find the issue that leads to this situation; I am still missing something that I need to know.
Following is the log of the execution of the above command.
```
ffmpeg version N-91024-g293a6e8332 Copyright (c) 2000-2018 the FFmpeg developers
built with gcc 7.3.0 (GCC)
configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-bzlib --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-amf --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth
libavutil 56. 18.100 / 56. 18.100
libavcodec 58. 19.101 / 58. 19.101
libavformat 58. 13.102 / 58. 13.102
libavdevice 58. 4.100 / 58. 4.100
libavfilter 7. 21.100 / 7. 21.100
libswscale 5. 2.100 / 5. 2.100
libswresample 3. 2.100 / 3. 2.100
libpostproc 55. 2.100 / 55. 2.100
Input #0, flv, from '.\video.flv':
Metadata:
audiodelay : 0
canSeekToEnd : 1
creationdate : Fri Feb 03 11:52:46 2006
:
Duration: 00:00:16.92, start: 0.000000, bitrate: 316 kb/s
Stream #0:0: Audio: mp3, 22050 Hz, stereo, fltp, 40 kb/s
Stream #0:1: Video: vp6f, 1 reference frame, yuv420p, 360x288 (368x288), 266 kb/s, 25 fps, 25 tbr, 1k tbn
Stream mapping:
Stream #0:1 -> #0:0 (vp6f (native) -> h264 (libx264))
Stream #0:0 -> #0:1 (mp3 (mp3float) -> mp3 (libmp3lame))
Press [q] to stop, [?] for help
[graph_1_in_0_0 @ 0000020c0bcc4200] tb:1/22050 samplefmt:fltp samplerate:22050 chlayout:0x3
[format_out_0_1 @ 0000020c0bca2cc0] auto-inserting filter 'auto_resampler_0' between the filter 'Parsed_anull_0' and the filter 'format_out_0_1'
[auto_resampler_0 @ 0000020c0bca5140] ch:2 chl:stereo fmt:fltp r:22050Hz -> ch:2 chl:stereo fmt:fltp r:44100Hz
[graph 0 input from stream 0:1 @ 0000020c0c4f4600] w:360 h:288 pixfmt:yuv420p tb:1/1000 fr:25/1 sar:0/1 sws_param:flags=2
[scaler_out_0_0 @ 0000020c0c4f73c0] w:426 h:240 flags:'bicubic' interl:0
[scaler_out_0_0 @ 0000020c0c4f73c0] w:360 h:288 fmt:yuv420p sar:0/1 -> w:426 h:240 fmt:yuv420p sar:0/1 flags:0x4
[libx264 @ 0000020c0bc80600] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0000020c0bc80600] profile Constrained Baseline, level 3.0
[libx264 @ 0000020c0bc80600] 264 - core 155 r2901 7d0ff22 - H.264/MPEG-4 AVC codec - Copyleft 2003-2018 - http://www.videolan.org/x264.html - options: cabac=0 ref=1 deblock=1:0:0 analyse=0x1:0x111 me=hex subme=2 psy=1 psy_rd=1.00:0.00 mixed_ref=0 me_range=16 chroma_me=1 trellis=0 8x8dct=0 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=0 threads=7 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=0 weightp=0 keyint=30 keyint_min=3 scenecut=40 intra_refresh=0 rc_lookahead=10 rc=abr mbtree=1 bitrate=400 ratetol=1.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 vbv_maxrate=1500 vbv_bufsize=6000 nal_hrd=none filler=0 ip_ratio=1.40 aq=1:1.00
Output #0, flv, to 'rtmp://live-api.facebook.com:80/rtmp/my key:
Metadata:
audiodelay : 0
canSeekToEnd : 1
creationdate : Fri Feb 03 11:52:46 2006
:
encoder : Lavf58.13.102
Stream #0:0: Video: h264 (libx264), 1 reference frame ([7][0][0][0] / 0x0007), yuv420p, 426x240, q=-1--1, 400 kb/s, 30 fps, 1k tbn, 30 tbc
Metadata:
encoder : Lavc58.19.101 libx264
Side data:
cpb: bitrate max/min/avg: 1500000/0/400000 buffer size: 6000000 vbv_delay: -1
Stream #0:1: Audio: mp3 (libmp3lame) ([2][0][0][0] / 0x0002), 44100 Hz, stereo, fltp, delay 1105, 128 kb/s
Metadata:
encoder : Lavc58.19.101 libmp3lame
No more output streams to write to, finishing.e=00:00:16.51 bitrate= 533.3kbits/s speed=0.992x
[flv @ 0000020c0bc97fc0] Failed to update header with correct duration.
[flv @ 0000020c0bc97fc0] Failed to update header with correct filesize.
frame= 424 fps= 25 q=-1.0 Lsize= 1153kB time=00:00:16.95 bitrate= 557.1kbits/s speed=0.997x
video:869kB audio:265kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 1.666647%
Input file #0 (.\video.flv):
Input stream #0:0 (audio): 649 packets read (84767 bytes); 649 frames decoded (373824 samples);
Input stream #0:1 (video): 424 packets read (566376 bytes); 424 frames decoded;
Total: 1073 packets (651143 bytes) demuxed
Output file #0 (rtmp://live-api.facebook.com:80/rtmp/my key):
Output stream #0:0 (video): 424 frames encoded; 424 packets muxed (889641 bytes);
Output stream #0:1 (audio): 649 frames encoded (747648 samples); 650 packets muxed (271673 bytes);
Total: 1074 packets (1161314 bytes) muxed
[libx264 @ 0000020c0bc80600] frame I:18 Avg QP:27.75 size: 7001
[libx264 @ 0000020c0bc80600] frame P:406 Avg QP:32.67 size: 1879
[libx264 @ 0000020c0bc80600] mb I I16..4: 39.3% 0.0% 60.7%
[libx264 @ 0000020c0bc80600] mb P I16..4: 8.3% 0.0% 2.3% P16..4: 42.2% 16.2% 4.4% 0.0% 0.0% skip:26.6%
[libx264 @ 0000020c0bc80600] final ratefactor: 28.61
[libx264 @ 0000020c0bc80600] coded y,uvDC,uvAC intra: 40.2% 33.7% 9.2% inter: 23.8% 7.5% 0.2%
[libx264 @ 0000020c0bc80600] i16 v,h,dc,p: 24% 50% 20% 6%
[libx264 @ 0000020c0bc80600] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 13% 34% 18% 5% 5% 4% 8% 4% 8%
[libx264 @ 0000020c0bc80600] i8c dc,h,v,p: 67% 24% 7% 2%
[libx264 @ 0000020c0bc80600] kb/s:418.33
```

A screenshot showed that the stream was alive on Facebook for only a few seconds.

```
[flv @ 0000020c0bc97fc0] Failed to update header with correct duration.
[flv @ 0000020c0bc97fc0] Failed to update header with correct filesize.
```

Please correct me if I am wrong, but the errors listed above in the log seem to be the main cause of the stream stopping after a few seconds. I have checked for latency issues, but that did not help.
Please help me tackle this issue; I will be very thankful. :')
Streaming ends even earlier when I use Google Compute Engine instead of my own PC as the streaming machine.
-
Method to generate a .ts file with PMT update (PCR pid) using ffmpeg or other tools
6 June 2018, by C.B. Akshay Kumar
Is there a way to generate a ts file with a PMT update of the PCR PID using FFmpeg or any other tool? I tried to combine two streams of the same file with their 'streamids' changed (using FFmpeg's concat filter), but to no avail. Is there any way to generate a stream according to my requirements?
Here are the sample streams that I used:
Input #0, mpegts, from 'trial.ts':
Duration: 00:02:00.12, start: 1.400000, bitrate: 2162 kb/s
Program 1
Metadata:
service_name : Service01
service_provider: FFmpeg
Stream #0:0[0x100]: Video: mpeg2video (Main) ([2][0][0][0] / 0x0002), yuv420p(tv, top first), 720x576 [SAR 64:45 DAR 16:9], 25 fps, 25 tbr, 90k tbn, 50 tbc
Stream #0:1[0x101](deu): Audio: mp2 ([3][0][0][0] / 0x0003), 48000 Hz, stereo, fltp, 384 kb/s
Input #0, mpegts, from 'trial2.ts':
Duration: 00:02:00.12, start: 1.400000, bitrate: 2162 kb/s
Program 1
Metadata:
service_name : Service01
service_provider: FFmpeg
Stream #0:0[0x32]: Video: mpeg2video (Main) ([2][0][0][0] / 0x0002), yuv420p(tv, top first), 720x576 [SAR 64:45 DAR 16:9], 25 fps, 25 tbr, 90k tbn, 50 tbc
Stream #0:1[0x33](deu): Audio: mp2 ([3][0][0][0] / 0x0003), 48000 Hz, stereo, fltp, 384 kb/s

Note that only the AV PIDs of the two streams differ. After using the concat filter I was able to combine the two streams, but I lost the PID information of the second .ts file, so there was no change in PID information as is usually required for a PMT update.