
Media (1)
- DJ Dolores - Oslodum 2004 (includes (cc) sample of “Oslodum” by Gilberto Gil)
15 September 2011, by
Updated: September 2011
Language: English
Type: Audio
Other articles (37)
- Customising categories
21 June 2013, by
Category creation form
For those who know SPIP well, a category can be likened to a section (rubrique).
For a document of the category type, the fields offered by default are: Texte (text)
This form can be modified under:
Administration > Configuration des masques de formulaire.
For a document of the media type, the fields not displayed by default are: Descriptif rapide (short description)
It is also in this configuration section that you can specify the (...)
- HTML5 audio and video support
13 April 2011, by
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
- Support audio et vidéo HTML5 (HTML5 audio and video support)
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash fallback is used.
The HTML5 player used was created specifically for MediaSPIP: it is fully graphically customisable to match a chosen theme.
These technologies make it possible to deliver video and sound both to conventional computers (...)
On other sites (6877)
- ffmpeg yuvj422p color movie conversion avi2ogv
28 July 2017, by 7Tonin
While converting an AVI video to OGV, there is a color problem in the output file.
How can I solve this issue?
Screenshots: normal colors / altered colors.
Actually, part of the problem comes from the player, so this is a weak question.
Command, using ffmpeg-3.3.2-1.mga6.tainted:
ffmpeg -i dscn0146.avi -pix_fmt yuv422p -s 640x480 dscn0146_hq.ogv -y
And the input metadata:
Guessed Channel Layout for Input Stream #0.1 : mono
Input #0, avi, from 'dscn0146.avi':
Metadata:
encoder :
maker : NIKON
model : COOLPIX S3500
creation_time : 2017-07-22 12:09:06
Duration: 00:00:07.33, start: 0.000000, bitrate: 11091 kb/s
Stream #0:0: Video: mjpeg (MJPG / 0x47504A4D), yuvj422p(pc, bt470bg/unknown/unknown), 640x480, 10770 kb/s, 30 fps, 30 tbr, 30 tbn, 30 tbc
Stream #0:1: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 22050 Hz, mono, s16, 352 kb/s
Stream mapping:
Stream #0:0 -> #0:0 (mjpeg (native) -> theora (libtheora))
Stream #0:1 -> #0:1 (pcm_s16le (native) -> vorbis (libvorbis))
It processes normally, but fires a warning:
[swscaler @ 0xd3c3a0] deprecated pixel format used, make sure you did set range correctly
Output metadata:
Output #0, ogv, to 'dscn0146_hq.ogv':
Metadata:
model : COOLPIX S3500
maker : NIKON
encoder : Lavf57.71.100
Stream #0:0: Video: theora (libtheora), yuv422p(progressive), 640x480, q=2-31, 200 kb/s, 30 fps, 30 tbn, 30 tbc
Metadata:
encoder : Lavc57.89.100 libtheora
model : COOLPIX S3500
maker : NIKON
Stream #0:1: Audio: vorbis (libvorbis), 22050 Hz, mono, fltp
Metadata:
encoder : Lavc57.89.100 libvorbis
model : COOLPIX S3500
maker : NIKON
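A hedged sketch of one possible fix, not a confirmed solution: the warning arises because the full-range yuvj422p source is being converted to limited-range yuv422p without the ranges being stated, so making the range conversion explicit in a scale filter (instead of relying on -pix_fmt and -s alone) may help. The filter options below are standard swscale parameters; the rest of the command mirrors the one above:
ffmpeg -i dscn0146.avi -vf "scale=640:480:in_range=jpeg:out_range=mpeg" -pix_fmt yuv422p dscn0146_hq.ogv -y
Here jpeg and mpeg name the full range of the yuvj422p input and the limited range of the yuv422p output; if the colors still look off, the player's own range handling (as noted above) may be the remaining culprit.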
- Encoding of two full HD streams in Linux + GPU with Intel HD4000 / VA API / FFMPEG / OpenGL
27 June 2017, by qknight
I want to encode/stream two full HD streams in realtime from my laptop to a remote location, using Linux/Xorg on the host.
VA API
For this I've been playing with the VA API, but the performance is pretty bad at 5.59 fps (see the paste below).
FFMPEG
Using ffmpeg with CPU encoding I get about 200 fps, but then all cores of my Intel(R) Core(TM) i7-3520M CPU @ 2.90GHz are busy and the fan turns on.
Future plans
I want GPU support for encoding, and later to integrate this into a program which streams a virtual Xorg 'screen'; see https://lastlog.de/wiki/index.php/Raspberry_PI_virtual_screen for more details on my plans.
Maybe h264 isn't even what I want? So if someone advises a different implementation, I'd welcome that.
Besides VA API there seems to be QuickSync, but I haven't experimented with that yet, as it is not yet packaged on NixOS.
Note: I need a library to have a smooth integration into the code.
h264encode -w 1920 -h 1080 --profile MP
Source frame is 1920x1080 and will code clip to 1920x1088 with crop
INPUT:Try to encode H264...
INPUT : Resolution : 1920x1080, 60 frames
INPUT : FrameRate : 30
INPUT : Bitrate : 14929920
INPUT : Slieces : 1
INPUT : IntraPeriod : 30
INPUT : IDRPeriod : 60
INPUT : IpPeriod : 1
INPUT : Initial QP : 26
INPUT : Min QP : 0
INPUT : Source YUV : AUTO generated
INPUT : Coded Clip : /tmp/test.264
INPUT : Rec Clip : Not save reconstructed frame
libva info : VA-API version 0.38.1
libva info : va_getDriverName() returns 0
libva info : Trying to open /run/opengl-driver/lib/dri/i965_drv_video.so
libva info : Found init function __vaDriverInit_0_38
libva info : va_openDriver() returns 0
Use profile VAProfileH264Main
Support rate control mode (0x12):CBR CQP
RateControl mode : CQP
Support VAConfigAttribEncPackedHeaders
Support packed sequence headers
Support packed picture headers
Support packed slice headers
Support packed misc headers
Support 1 RefPicList0 and 1 RefPicList1
Loading data into surface 15.....Complete surface loading
\00000059(054456 bytes coded)
PERFORMANCE : Frame Rate : 5.59 fps (60 frames, 10730 ms (178.83 ms per frame))
PERFORMANCE : Compression ratio : 51:1
PERFORMANCE : UploadPicture : 10467 ms (174.45, 97.55% percent)
PERFORMANCE : vaBeginPicture : 0 ms (0.00, 0.00% percent)
PERFORMANCE : vaRenderHeader : 1 ms (0.02, 0.01% percent)
PERFORMANCE : vaEndPicture : 42 ms (0.70, 0.39% percent)
PERFORMANCE : vaSyncSurface : 244 ms (4.07, 2.27% percent)
PERFORMANCE : SavePicture : 7 ms (0.12, 0.07% percent)
PERFORMANCE : Others : -31 ms (71582787.75, 40027653.91% percent)
(Multithread enabled, the timing is only for reference)
I've seen https://www.reddit.com/r/linux/comments/1qk1yu/is_there_currently_opensource_software_to_encode/ though, but I'm not sure what to do with it.
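One possible direction, offered as an assumption rather than a verified setup: ffmpeg itself can drive the same i965 VA-API driver through its h264_vaapi encoder, which offloads the encoding work to the GPU. Assuming an ffmpeg build with VAAPI support and a render node at /dev/dri/renderD128 (both assumptions about this system), a command along these lines would exercise hardware encoding:
ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mkv -vf 'format=nv12,hwupload' -c:v h264_vaapi -b:v 8M output.mp4
input.mkv, output.mp4 and the 8M bitrate are placeholders; format=nv12,hwupload converts and uploads the software frames to the GPU surfaces the encoder expects. For library-level integration, the same path is reachable through the libavcodec/libavfilter APIs rather than the command line.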
- Efficiently write a movie directly from np.array using pipes
16 June 2017, by Matt Billman
I have a 4D numpy array of movie frames. I'm looking for a function to write them to a movie at a given framerate. I have FFMPEG installed on my OS, and as I can see from these answers, the most efficient way to do so is via pipes.
However, I have very little experience using pipes, and the explanations in the link above make little sense to me. Furthermore, very few of the answers actually seem to implement pipes, and the one that does uses mencoder, not FFMPEG. I am relatively inexperienced with FFMPEG, so I am not sure how to modify the command string from the mencoder answer to make it work with FFMPEG.
WHAT I WOULD LIKE:
A function of the following form:
animate_np_array(4d_array, framerate) -> output.mp4 (or other video codec)
Which implements pipes to send frames one after the other to FFMPEG, and which I can copy-paste into my existing code.
Furthermore, it is absolutely necessary that this function never actually plots any of the frames, as calls to the matplotlib.imshow() function (as I have most typically seen used) slow things down considerably.
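A minimal sketch of such a function, assuming the frames arrive as a (num_frames, height, width, 3) uint8 RGB array and that an ffmpeg binary with libx264 is available on the PATH; the name animate_np_array and the rawvideo-over-stdin approach are illustrative rather than a definitive implementation:

import subprocess
import numpy as np

def animate_np_array(frames, framerate, output="output.mp4"):
    # frames: uint8 array of shape (num_frames, height, width, 3), RGB order.
    height, width = frames.shape[1:3]
    cmd = [
        "ffmpeg", "-y",
        # Describe the raw bytes arriving on stdin.
        "-f", "rawvideo",
        "-pix_fmt", "rgb24",
        "-s", "{}x{}".format(width, height),
        "-r", str(framerate),
        "-i", "-",              # read the frames from stdin
        "-an",                  # no audio track
        "-c:v", "libx264",
        "-pix_fmt", "yuv420p",  # widely playable output pixel format
        output,
    ]
    proc = subprocess.Popen(cmd, stdin=subprocess.PIPE)
    for frame in frames:
        proc.stdin.write(frame.astype(np.uint8).tobytes())
    proc.stdin.close()
    proc.wait()

Calling animate_np_array(my_frames, 30) would then encode the array at 30 fps without plotting anything; libx264 and the mp4 container are assumptions and can be swapped for whatever the local ffmpeg build provides.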