
Other articles (57)
-
Contribute to documentation
13 April 2011. Documentation is vital to the development of improved technical capabilities.
MediaSPIP welcomes documentation by users as well as developers, including: critique of existing features and functions; articles contributed by developers, administrators, content producers and editors; screenshots to illustrate the above; translations of existing documentation into other languages.
To contribute, register to the project users’ mailing (...)
-
Add notes and captions to images
7 February 2011. To be able to add notes and captions to images, the first step is to install the "Légendes" plugin.
Once the plugin is activated, you can configure it in the configuration area to change the rights for creating, modifying and deleting notes. By default, only site administrators can add notes to images.
Changes when adding a media item
When adding a media item of type "image", a new button appears above the preview (...)
-
Encoding and processing into web-friendly formats
13 April 2011. MediaSPIP automatically converts uploaded files to internet-compatible formats.
Video files are encoded in OGV and WebM (supported by HTML5) and MP4 (supported by Flash).
Audio files are encoded in OGG (supported by HTML5) and MP3 (supported by Flash).
Where possible, text is analyzed in order to retrieve the data needed for search engine detection, and then exported as a series of image files.
All uploaded files are stored online in their original format, so you can (...)
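The platform's actual encoding profiles are configured server-side and are not shown in this excerpt. Purely as an illustration of the kind of conversion described above (input and output file names are placeholders), generic ffmpeg invocations targeting the HTML5 and Flash formats could look like:
ffmpeg -i input.mov -c:v libvpx -b:v 1M -c:a libvorbis output.webm
ffmpeg -i input.mov -c:v libx264 -c:a aac -movflags +faststart output.mp4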
On other sites (9777)
-
gstreamer vaapih264enc generated TS segments don't play on AVPlayer
21 June 2022, by Guru Govindan. I have a pipeline that transcodes an RTSP stream into HLS segments. The manifest is playable in browsers (with hls.js), in ffplay and in VLC. However, when I play the stream or individual TS segments in QuickTime Player or in my iOS app that uses AVPlayer, it doesn't work.


This seems to be an issue only with vaapih264enc. When I try the same with a software encoder like x264enc, it works fine.

The following is a simple pipeline where this is reproducible.


GST_DEBUG=3 gst-launch-1.0 rtspsrc location='rtsp://user:pass@10.10.10.12:554/' name=rtpsrc0 \
 rtpsrc0. ! rtph264depay ! queue ! decodebin ! vaapih264enc ! mpegtsmux name=mux ! filesink location=mymux.ts \
 rtpsrc0. ! decodebin ! queue ! fdkaacenc ! mux.



When I run the following command to copy the encoded streams as-is from the generated TS segment, it works:


ffmpeg -i mymux.ts -c copy mymux_ffmpeg.ts



Is ffmpeg adding some additional header information that QuickTime Player is happy with?
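One way to investigate this, assuming a standard ffmpeg install (the actual header differences are not shown in the question), is to compare the stream and container metadata of the two files with ffprobe:
ffprobe -hide_banner -show_format -show_streams mymux.ts
ffprobe -hide_banner -show_format -show_streams mymux_ffmpeg.ts
Differences in fields such as profile, level or start times would hint at what the ffmpeg remux changes that makes QuickTime/AVPlayer happy.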


I appreciate any help with this.


-
How to play video with Media Source Extensions when the audio start is delayed? Or how to fix it with ffmpeg?
11 December 2020, by sheodox. I have a video that I'm splitting into individual video/audio streams and then dashing with MP4Box, and I'm playing them with Media Source Extensions, appending byte ranges from the MPD files to the video/audio source buffers. It's all working nicely, but one video I have has audio that is delayed by about 1.1 seconds. I couldn't get it to sync up, and the audio would always play ahead of the video.


Currently I'm trying to set audioBuffer.timestampOffset = 1.1, and that gets it to sync up perfectly. The issue I'm running into now, though, is that the video refuses to play unless the audio source buffer has data, so the video stalls right away. If I skip a few seconds in (past the offset), everything works because both video and audio are buffered.

Is there a way to get around this? Either make it play without the audio loaded, somehow fill the audio buffer with silence (can I generate something with the Web Audio API?), add silence to the audio file in ffmpeg, or something else?


I first tried adding a delay in ffmpeg with
ffmpeg -i video.mkv -map 0:a:0 -acodec aac -af "adelay=1.1s:all=true" out.aac
but nothing seemed to change. Was I doing something wrong? Is there a better way to demux audio while keeping the exact same timing as it had in the container with the video, so I don't have to worry about delays/offsets at all?
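As a sketch of the ffmpeg-side idea mentioned above (not a confirmed fix; the input file name and delay come from the question, everything else is assumed), one variation is to pad the silence with adelay but write the result into an MP4 container rather than a raw ADTS .aac file, so the MP4Box step receives proper timestamps along with the inserted silence:
ffmpeg -i video.mkv -map 0:a:0 -af "adelay=1100:all=1" -c:a aac audio_delayed.m4a
Whether the 1.1 s of leading silence survives the subsequent DASH packaging would still need to be verified against the timestampOffset approach.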

-
Ffmpeg - Generate VTT File From Sprite, Using Spatial Media Fragment
20 October 2019, by David. Hi, I am looking to create a .VTT file from a sprite that I have generated using FFmpeg.
FFmpeg command:
$"-i {inputMediaFile} -vf \"select = not(mod(n\\, 30)),scale = 120:80,tile = 7x7\" -an -vsync 0 {outputMediaFile}"
This selects every 30th frame, and then scales it to 120x80 pixels and creates 8x8 tiles in the output image.
I would like to make a .VTT from the generated image in C#, so I know the height and width of my individual images in the sprite (120x80), and there are 64 images in total in the output image.
From this I need to produce a VTT like this:
WEBVTT
1
00:00:00.000 --> 00:00:01.000
test-00001.jpg#xywh=0,0,120,80
2
00:00:01.000 --> 00:00:02.000
test-00001.jpg#xywh=120,0,120,80
3
00:00:02.000 --> 00:00:03.000
test-00001.jpg#xywh=240,0,120,80
4
00:00:03.000 --> 00:00:04.000
test-00001.jpg#xywh=360,0,120,80
5
00:00:04.000 --> 00:00:05.000
test-00001.jpg#xywh=480,0,120,80
6
00:00:05.000 --> 00:00:06.000
test-00001.jpg#xywh=600,0,120,80
7
00:00:06.000 --> 00:00:07.000
test-00001.jpg#xywh=720,0,120,80
8
00:00:07.000 --> 00:00:08.000
test-00001.jpg#xywh=840,0,120,80
9
00:00:08.000 --> 00:00:09.000
test-00001.jpg#xywh=0,80,120,80
There are also situations where there are n sprite files.
I'm hoping there may be a library out there that can handle this, or even better if I can keep it contained within FFmpeg; based on the FFmpeg docs I don't think this is possible, though.
Thanks in advance if anyone has any ideas; it's doable, as I've seen Node.js and Ruby examples.
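A minimal C# sketch of the kind of generator described above (the class and method names, the output file name, and the fixed one-second cue interval are assumptions, not part of the question):

using System;
using System.IO;
using System.Text;

class SpriteVtt
{
    // Emit one 1-second cue per tile, scanning the sprite left to right, top to bottom.
    static string GenerateVtt(string spriteFile, int columns, int rows, int tileWidth, int tileHeight)
    {
        var sb = new StringBuilder("WEBVTT\n\n");
        int cue = 1;
        for (int row = 0; row < rows; row++)
        {
            for (int col = 0; col < columns; col++)
            {
                string start = TimeSpan.FromSeconds(cue - 1).ToString(@"hh\:mm\:ss\.fff");
                string end = TimeSpan.FromSeconds(cue).ToString(@"hh\:mm\:ss\.fff");
                sb.AppendLine(cue.ToString());
                sb.AppendLine($"{start} --> {end}");
                sb.AppendLine($"{spriteFile}#xywh={col * tileWidth},{row * tileHeight},{tileWidth},{tileHeight}");
                sb.AppendLine();
                cue++;
            }
        }
        return sb.ToString();
    }

    static void Main()
    {
        // Example: an 8x8 grid of 120x80 tiles, matching the cues shown in the question.
        File.WriteAllText("thumbs.vtt", GenerateVtt("test-00001.jpg", columns: 8, rows: 8, tileWidth: 120, tileHeight: 80));
    }
}

For the multi-sprite case, the same loop could switch to the next sprite file name every columns x rows cues while the timestamps keep counting up.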