
Media (2)
-
Rennes Emotion Map 2010-11
19 October 2011
Updated: July 2013
Language: French
Type: Text
-
Carte de Schillerkiez
13 May 2011
Updated: September 2011
Language: English
Type: Text
Other articles (42)
-
Websites made with MediaSPIP
2 May 2011. This page lists some websites based on MediaSPIP.
-
Creating farms of unique websites
13 April 2011. MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things): implementation costs to be shared between several different projects / individuals, rapid deployment of multiple unique sites, and the creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)
-
Publishing on MédiaSpip
13 June 2013. Can I post content from an iPad tablet?
Yes, if your installed MédiaSpip is at version 0.2 or higher. If needed, contact your MédiaSpip administrator to find out.
On other sites (5287)
-
Why does frame->pts increase by 20 rather than by 1?
19 March 2013, by user1914692. Following the examples of ffmpeg: decoding_encoding.c and filtering_video.c, I process one video file taken with an iPhone. The video file: .mov, video dimensions: 480x272, video codec: H.264/AVC, 30 frames per second, bitrate: 605 kbps.
I first extract each frame, which is in YUV format.
I convert the YUV to RGB24, process the RGB24, and write it to a .ppm file; the .ppm file looks correct. Then I plan to encode the processed RGB24 frames into a video file.
Since MPEG does not support the RGB24 picture format, I used AV_CODEC_ID_HUFFYUV.
But the output video file (showing 18.5 MB) does not play. Movie Player on Ubuntu reports an error: Could not determine type of stream.
I also tried it in VLC; it simply does not work, without any error information. My second question is:
For each frame extracted from the input video file, I get its pts as follows, according to filtering_video.c:
frame->pts = av_frame_get_best_effort_timestamp(frame);
I print out each frame's pts and find that it increases by 20, like below:
pFrameRGB_count: 0, frame->pts: 0
pFrameRGB_count: 1, frame->pts: 20
pFrameRGB_count: 2, frame->pts: 40
pFrameRGB_count: 3, frame->pts: 60
Here frame is the frame extracted from the input video, and pFrameRGB_count is the count of processed frames in RGB24 form.
Why are they wrong?
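A note on the numbers above: av_frame_get_best_effort_timestamp() returns values in the input stream's time_base, not frame indices, so a step of 20 usually just reflects that time base (for example, a QuickTime .mov video track with a 1/600 time base at 30 fps advances 600/30 = 20 ticks per frame). A quick way to check, assuming ffprobe from the same FFmpeg build is available and the input file is named input.mov (hypothetical name):

ffprobe -v error -select_streams v:0 -show_entries stream=time_base,avg_frame_rate -of default=noprint_wrappers=1 input.mov

If encoding is the goal, such timestamps generally need to be rescaled from the demuxer's time base to the encoder's time base (e.g. with av_rescale_q) rather than used as-is.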
-
ffmpeg: stream video file from Ubuntu to YouTube
14 March 2018, by user3010452. I'm trying to create a stream to YouTube. I can see the preview button change to an enabled state, but the stream never actually changes from offline.
It also gives me several errors. What am I doing wrong? (A re-encoding sketch follows the console log below.)
ffmpeg -i video.flv -f flv rtmp://a.rtmp.youtube.com/live2/XXXXXX
ffmpeg version 2.8.11-0ubuntu0.16.04.1 Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.4) 20160609
configuration: --prefix=/usr --extra-version=0ubuntu0.16.04.1 --build-suffix=-ffmpeg --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --cc=cc --cxx=g++ --enable-gpl --enable-shared --disable-stripping --disable-decoder=libopenjpeg --disable-decoder=libschroedinger --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-librtmp --enable-libschroedinger --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxvid --enable-libzvbi --enable-openal --enable-opengl --enable-x11grab --enable-libdc1394 --enable-libiec61883 --enable-libzmq --enable-frei0r --enable-libx264 --enable-libopencv
libavutil 54. 31.100 / 54. 31.100
libavcodec 56. 60.100 / 56. 60.100
libavformat 56. 40.101 / 56. 40.101
libavdevice 56. 4.100 / 56. 4.100
libavfilter 5. 40.101 / 5. 40.101
libavresample 2. 1. 0 / 2. 1. 0
libswscale 3. 1.101 / 3. 1.101
libswresample 1. 2.101 / 1. 2.101
libpostproc 53. 3.100 / 53. 3.100
Input #0, flv, from 'video.flv':
Metadata:
major_brand : qt
minor_version : 0
compatible_brands: qt
com.apple.quicktime.creationdate: 2017-07-20T21:44:12+0700
com.apple.quicktime.make: Apple
com.apple.quicktime.model: iPhone 6s Plus
com.apple.quicktime.software: 10.3.2
encoder : Lavf57.83.100
Duration: 00:01:15.24, start: 0.000000, bitrate: 4454 kb/s
Stream #0:0: Video: flv1, yuv420p, 1920x1080, 200 kb/s, 29.97 fps, 29.97 tbr, 1k tbn, 1k tbc
Stream #0:1: Audio: adpcm_swf, 44100 Hz, mono, s16, 176 kb/s
Output #0, flv, to 'rtmp://a.rtmp.youtube.com/XXXXXX':
Metadata:
major_brand : qt
minor_version : 0
compatible_brands: qt
com.apple.quicktime.creationdate: 2017-07-20T21:44:12+0700
com.apple.quicktime.make: Apple
com.apple.quicktime.model: iPhone 6s Plus
com.apple.quicktime.software: 10.3.2
encoder : Lavf56.40.101
Stream #0:0: Video: flv1 (flv) ([2][0][0][0] / 0x0002), yuv420p, 1920x1080, q=2-31, 200 kb/s, 29.97 fps, 1k tbn, 29.97 tbc
Metadata:
encoder : Lavc56.60.100 flv
Stream #0:1: Audio: mp3 (libmp3lame) ([2][0][0][0] / 0x0002), 44100 Hz, mono, s16p
Metadata:
encoder : Lavc56.60.100 libmp3lame
Stream mapping:
Stream #0:0 -> #0:0 (flv1 (flv) -> flv1 (flv))
Stream #0:1 -> #0:1 (adpcm_swf (native) -> mp3 (libmp3lame))
Press [q] to stop, [?] for help
[flv @ 0x162bac0] Failed to update header with correct duration.ate=4125.4kbits/s
[flv @ 0x162bac0] Failed to update header with correct filesize.
frame= 2255 fps=114 q=31.0 Lsize= 37863kB time=00:01:15.24 bitrate=4122.0kbits/s
video:37194kB audio:588kB subtitle:0kB other streams:0kB globalheaders:0kB muxing overhead: 0.213941%
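Regarding the command above: YouTube's RTMP ingest generally expects H.264 video and AAC audio, whereas the run shown re-encodes to FLV1 video and MP3 audio and pushes the file faster than real time. A hedged sketch of a more conventional invocation, reusing the stream key placeholder XXXXXX from the question (the bitrate and keyframe-interval values are illustrative assumptions, not taken from the original post):

ffmpeg -re -i video.flv -c:v libx264 -preset veryfast -b:v 2500k -maxrate 2500k -bufsize 5000k -g 60 -c:a aac -b:a 128k -ar 44100 -f flv rtmp://a.rtmp.youtube.com/live2/XXXXXX

Here -re feeds the file at its native rate and -g 60 requests a keyframe roughly every two seconds at 29.97 fps; on an older build such as the 2.8 shown in the log, the native AAC encoder may additionally require -strict -2.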
-
ffmpeg: Continuously encode and append base64 data chunks into output file
11 February 2021, by O.O. I have a .mov file that's being written to by my iPhone camera, saved as input.mov, and I have a script that's reading the continuously updating file. I am trying to encode the video and audio into a .mkv container.

I have little knowledge of this tool, but looking at similar Q&As around ffmpeg usage I have found little on using base64 as input. It is documented by ffmpeg for images, though, so I assume it is possible, and I have also used data:video/mp4 since these file types are very similar.

I have:


const ifRecordingStream = await fs.readStream('input.mov', 'base64', 4095); // read the growing file in 4095-byte base64 chunks
ifRecordingStream.open();

ifRecordingStream.onData((chunk) =>
  // re-encode each base64 chunk by passing it as a data: URI to the concat demuxer
  execute(`ffmpeg -f concat -i "data:video/mp4;base64,${chunk}" -c:v h264 -c:a aac output.mkv`)
);



onData() currently throws: Line {}: unknown keyword {}


Is my command wrong?


ffmpeg -f concat -i "data:video/mp4;base64,${chunk}" -c:v h264 -c:a aac output.mkv


Any help at all is welcome.
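For context, the "unknown keyword" error above comes from the concat demuxer, which expects a small text script whose lines begin with keywords such as file, not raw media data, so a base64 data: URI passed to -f concat is parsed as a script and rejected. A minimal, hedged sketch of the input that demuxer actually consumes, with hypothetical file names:

# list.txt (concat demuxer script)
file 'part1.mov'
file 'part2.mov'

ffmpeg -f concat -safe 0 -i list.txt -c:v h264 -c:a aac output.mkv

Feeding media bytes directly would instead go through a regular -i input rather than the concat demuxer.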