
Media (1)
-
Map of Schillerkiez
13 May 2011, by
Updated: September 2011
Language: English
Type: Text
Other articles (27)
-
Keeping control of your media in your hands
13 April 2011, by
The vocabulary used on this site and around MediaSPIP in general aims to avoid references to Web 2.0 and the companies that profit from media sharing.
While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...) -
HTML5 audio and video support
13 April 2011, by
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...) -
From upload to the final video [standalone version]
31 January 2010, by
The path of an audio or video document through SPIPMotion is divided into three distinct stages.
Upload and retrieval of information about the source video
First, a SPIP article must be created and the "source" video document attached to it.
When this document is attached to the article, two actions are performed in addition to the normal behaviour: retrieval of the technical information about the file's audio and video streams; generation of a thumbnail: extraction of a (...)
On other sites (4851)
-
FFmpeg fails to detect input stream when outputting to pipe's stdout
27 September 2020, by La bla bla
We have h264 frames as individual files; we read them in a Python wrapper and pipe them to ffmpeg.


The ffmpeg subprocess is launched using


import subprocess as sp

command = ["ffmpeg",
           "-hide_banner",
           "-vcodec", "h264",
           "-i", "pipe:0",
           "-video_size", "5120x3072",
           "-an", "-sn",  # we want to disable audio and subtitle processing (there is no audio)
           "-pix_fmt", "bgr24",
           "-vcodec", "rawvideo",
           "-f", "image2pipe", "-"]
pipe = sp.Popen(command, stdin=sp.PIPE, stdout=sp.PIPE, bufsize=10 ** 8)



Our goal is to use ffmpeg to convert the individual h264 frames into raw BGR data that we can manipulate using OpenCV.

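As a reference for that last step, the bgr24 bytes that ffmpeg writes to stdout map directly onto an OpenCV-compatible numpy array; a minimal sketch, assuming the 5120x3072 frame size used in the command above and that a full frame was actually read:

import numpy as np

width, height = 5120, 3072                    # frame size used in the question
raw = pipe.stdout.read(width * height * 3)    # one bgr24 frame is exactly w*h*3 bytes
if len(raw) == width * height * 3:
    # bgr24 is already OpenCV's channel order, so no colour conversion is needed
    frame = np.frombuffer(raw, dtype=np.uint8).reshape((height, width, 3))

Note that np.frombuffer returns a read-only view, so a .copy() is needed before modifying the frame in place.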

The files are read in a background thread and piped using


...
for path in files:
    with open(path, "rb") as f:
        data = f.read()
    pipe.stdin.write(data)
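One detail the excerpt does not show, and which any pipe-fed ffmpeg run eventually needs, is signalling end-of-input; a hedged addition, assuming the writer thread owns pipe.stdin:

# after the last file has been written, close stdin so ffmpeg sees EOF
# and can flush any frames it is still buffering
pipe.stdin.close()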



When we try to read ffmpeg's output pipe using


while True:
    # Capture frame-by-frame
    raw_image = pipe.stdout.read(width * height * 3)



we get


[h264 @ 0x1c31000] Could not find codec parameters for stream 0 (Video: h264, none): unspecified size
Consider increasing the value for the 'analyzeduration' and 'probesize' options
pipe:0: could not find codec parameters
Input #0, h264, from 'pipe:0':
 Duration: N/A, bitrate: N/A
 Stream #0:0: Video: h264, none, 25 tbr, 1200k tbn, 50 tbc
Output #0, image2pipe, to 'pipe:':
Output file #0 does not contain any stream



However, when I change the sp.Popen command to be


f = open('ffmpeg_output.log', 'wt')
pipe = sp.Popen(command, stdin=sp.PIPE, stdout=f, bufsize=10 ** 8)  # Note: stdout is now f, not a pipe



we get gibberish (i.e., binary data) in the ffmpeg_output.log file, and the console reads

[h264 @ 0xf20000] Stream #0: not enough frames to estimate rate; consider increasing probesize
[h264 @ 0xf20000] decoding for stream 0 failed
Input #0, h264, from 'pipe:0':
 Duration: N/A, bitrate: N/A
 Stream #0:0: Video: h264 (Baseline), yuv420p, 5120x3072, 25 fps, 25 tbr, 1200k tbn, 50 tbc
Output #0, image2pipe, to 'pipe:':
 Metadata:
 encoder : Lavf56.40.101
 Stream #0:0: Video: rawvideo (BGR[24] / 0x18524742), bgr24, 5120x3072, q=2-31, 200 kb/s, 25 fps, 25 tbn, 25 tbc
 Metadata:
 encoder : Lavc56.60.100 rawvideo
Stream mapping:
 Stream #0:0 -> #0:0 (h264 (native) -> rawvideo (native))
Invalid UE golomb code
 Last message repeated 89 times
Invalid UE golomb code
 Last message repeated 29 times
Invalid UE golomb code
 Last message repeated 29 times
Invalid UE golomb code
 Last message repeated 29 times
Invalid UE golomb code
 Last message repeated 29 times
Invalid UE golomb code
 Last message repeated 29 times
Invalid UE golomb code
 Last message repeated 29 times
Invalid UE golomb code
 Last message repeated 29 times
Invalid UE golomb code



Why does ffmpeg care whether its stdout is a file or a pipe?

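This is not an answer to the file-versus-pipe question itself, but the failing run's own log ("Consider increasing the value for the 'analyzeduration' and 'probesize' options") points at input probing; a minimal sketch of the same launch with larger probe limits, values guessed rather than tuned:

command = ["ffmpeg",
           "-hide_banner",
           "-probesize", "100M",        # let the demuxer read up to ~100 MB while probing
           "-analyzeduration", "100M",  # analyze up to ~100 s of input (the option takes microseconds)
           "-vcodec", "h264",
           "-i", "pipe:0",
           # ... remaining options as in the original command ...
           ]
pipe = sp.Popen(command, stdin=sp.PIPE, stdout=sp.PIPE, bufsize=10 ** 8)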

-
rtmpproto: Validate the embedded flv packet size before copying
3 October 2013, by Martin Storsjö
rtmpproto: Validate the embedded flv packet size before copying
This wasn't an issue prior to 58404738, when the whole RTMP packet was copied at once and the length of the individual embedded flv packets was validated only by the flv demuxer.
Prior to this patch, this could lead to out-of-bounds reads and writes.
Signed-off-by: Martin Storsjö <martin@martin.st>
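The principle behind this fix is general enough to sketch; the following is only an illustration of checking a declared sub-packet length against the bytes actually available before copying, using a made-up 4-byte length prefix rather than FFmpeg's code or the real FLV tag layout:

def split_embedded_packets(payload: bytes) -> list:
    """Walk length-prefixed sub-packets, rejecting any whose declared size
    points past the end of the buffer (the check the commit describes)."""
    packets, pos = [], 0
    while pos + 4 <= len(payload):
        size = int.from_bytes(payload[pos:pos + 4], "big")
        pos += 4
        if size > len(payload) - pos:
            raise ValueError("embedded packet claims more bytes than the payload holds")
        packets.append(payload[pos:pos + size])
        pos += size
    return packets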
-
How do I preserve side data when concatenating files in ffmpeg?
28 May 2020, by Mark Kahn
I have multiple 360 videos that I'm trying to concatenate in ffmpeg. The command itself is pretty straightforward:



ffmpeg -f concat -i 0036_concat.txt -c copy -strict unofficial 36.mp4




where 0036_concat.txt is just a list of the individual files in the concat demuxer's usual format (see the sketch at the end of this post). The issue I'm having is that I can't get ffmpeg to preserve side data. Very simply put, ffprobe on any of the source files includes this:


Side data:
 spherical: equirectangular (0.000000/0.000000/0.000000)




And I can't, for the life of me, get that to propagate to the output file.



This question has a solution that works for single files, but it doesn't work when concatenating multiple files.



I'd be perfectly fine injecting that entire string if anyone knows how.
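Two reference points for the question above, neither verified against this exact file set. First, the 0036_concat.txt list mentioned earlier would normally just use the concat demuxer's file directive, one entry per line (filenames here are hypothetical):

file '0036_part01.mp4'
file '0036_part02.mp4'
file '0036_part03.mp4'

Second, for injecting the spherical metadata after the fact, the route most often suggested is Google's spatial-media metadata injector, whose documented invocation is roughly (output name hypothetical):

python spatialmedia -i 36.mp4 36_injected.mp4

Whether -c copy -strict unofficial alone carries the side data through the concat demuxer seems to depend on the ffmpeg version, so checking the result with ffprobe afterwards is the safer test.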