
Media (1)
-
MediaSPIP Simple: the future default graphic theme?
26 September 2013, by
Updated: October 2013
Language: French
Type: Video
Other articles (52)
-
Authorizations overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page
-
Publishing on MédiaSpip
13 June 2013
Can I post content from an iPad tablet?
Yes, if your Médiaspip installation is at version 0.2 or higher. If needed, contact your MédiaSpip administrator to find out.
-
Sites built with MediaSPIP
2 May 2011, by
This page presents some of the sites running MediaSPIP.
You can of course add your own using the form at the bottom of the page.
On other sites (9936)
-
How to find the video codec of a file with ffmpeg
10 January 2023, by iggy12345
I'm working with HLS streams of MPEG-TS that contain H264.


When I look at the HLS Playlist given by the server, it specifies that the codec is
avc1.77.30,mp4a.40.2


Here's a snippet from the file


#EXTM3U
#EXT-X-VERSION:3
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1787760,CODECS="avc1.77.30,mp4a.40.2",RESOLUTION=640x360



Is there a way I can retrieve this information from ffmpeg?

I found a question that was asked before: How to determine video codec of a file with FFmpeg


But the closest I can seem to get to any of their outputs is h264, not avc1.77.30.


The playlist contains 7 MPEG-TS files. I've tried using ffprobe on both the playlist file and the individual video files, but both seem to report the parser, not the encoding of the actual video.
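
One way to get there, sketched below with a few assumptions: ffprobe can report the H.264 profile and level of the first video stream, and the legacy RFC 6381 form used by this playlist, avc1.<profile_idc>.<level>, can be assembled from those two values (the Main profile has profile_idc 77, and level 3.0 is reported as 30, giving avc1.77.30). The profile-name lookup table and the segment file name below are illustrative assumptions, not something ffprobe prints.

import json
import subprocess

# Assumed mapping from ffprobe's profile names to decimal profile_idc values
PROFILE_IDC = {"Constrained Baseline": 66, "Baseline": 66, "Main": 77, "Extended": 88, "High": 100}

def h264_codec_string(path):
    """Build the legacy 'avc1.<profile_idc>.<level>' string from ffprobe's stream info."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=codec_name,profile,level",
         "-of", "json", path],
        capture_output=True, text=True, check=True,
    ).stdout
    stream = json.loads(out)["streams"][0]
    if stream.get("codec_name") != "h264":
        raise ValueError(f"not an H.264 stream: {stream.get('codec_name')}")
    # ffprobe reports the level as an integer, e.g. 30 for level 3.0
    return f"avc1.{PROFILE_IDC[stream['profile']]}.{stream['level']}"

print(h264_codec_string("segment0.ts"))  # hypothetical segment name; expected output: avc1.77.30

The more common hexadecimal form of the codec string (e.g. avc1.4d401e) packs profile_idc, the constraint flags and the level into three hex bytes instead; the decimal form above is what this particular playlist advertises.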

-
FFmpeg fails to detect input stream when outputting to pipe's stdout
27 September 2020, by La bla bla
We have h264 frames as individual files; we read them in a Python wrapper and pipe them to ffmpeg.


The ffmpeg subprocess is launched using


command = ["ffmpeg",
 "-hide_banner",
 "-vcodec", "h264",
 "-i", "pipe:0",
 "-video_size", "5120x3072",
 '-an', '-sn', # we want to disable audio processing (there is no audio)
 '-pix_fmt', 'bgr24',
 "-vcodec", "rawvideo",
 '-f', 'image2pipe', '-']
 pipe = sp.Popen(command, stdin=sp.PIPE, stdout=sp.PIPE, bufsize=10 ** 8)



Our goal is to use ffmpeg to convert the individual h264 frames into raw BGR data that we can manipulate using OpenCV.


The files are read in a background thread and piped using


...
for path in files:
    with open(path, "rb") as f:
        data = f.read()
    pipe.stdin.write(data)



When we try to read the ffmpeg's output pipe using


while True:
    # Capture frame-by-frame
    raw_image = pipe.stdout.read(width * height * 3)



we get


[h264 @ 0x1c31000] Could not find codec parameters for stream 0 (Video: h264, none): unspecified size
Consider increasing the value for the 'analyzeduration' and 'probesize' options
pipe:0: could not find codec parameters
Input #0, h264, from 'pipe:0':
 Duration: N/A, bitrate: N/A
 Stream #0:0: Video: h264, none, 25 tbr, 1200k tbn, 50 tbc
Output #0, image2pipe, to 'pipe:':
Output file #0 does not contain any stream



However, when I change the sp.Popen command to be


f = open('ffmpeg_output.log', 'wt')
pipe = sp.Popen(command, stdin=sp.PIPE, stdout=f, bufsize=10 ** 8)  # Note: the stdout is now f



we get gibberish (i.e., binary data) in the ffmpeg_output.log file, and the console reads

[h264 @ 0xf20000] Stream #0: not enough frames to estimate rate; consider increasing probesize
[h264 @ 0xf20000] decoding for stream 0 failed
Input #0, h264, from 'pipe:0':
 Duration: N/A, bitrate: N/A
 Stream #0:0: Video: h264 (Baseline), yuv420p, 5120x3072, 25 fps, 25 tbr, 1200k tbn, 50 tbc
Output #0, image2pipe, to 'pipe:':
 Metadata:
 encoder : Lavf56.40.101
 Stream #0:0: Video: rawvideo (BGR[24] / 0x18524742), bgr24, 5120x3072, q=2-31, 200 kb/s, 25 fps, 25 tbn, 25 tbc
 Metadata:
 encoder : Lavc56.60.100 rawvideo
Stream mapping:
 Stream #0:0 -> #0:0 (h264 (native) -> rawvideo (native))
Invalid UE golomb code
 Last message repeated 89 times
Invalid UE golomb code
 Last message repeated 29 times
Invalid UE golomb code
 Last message repeated 29 times
Invalid UE golomb code
 Last message repeated 29 times
Invalid UE golomb code
 Last message repeated 29 times
Invalid UE golomb code
 Last message repeated 29 times
Invalid UE golomb code
 Last message repeated 29 times
Invalid UE golomb code
 Last message repeated 29 times
Invalid UE golomb code



Why does ffmpeg care whether its stdout is a file or a pipe?
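
For reference, a minimal sketch of one way to wire this pipeline up end to end, reusing the command and files variables from the snippets above; it is only a sketch under assumptions, not offered as the explanation for the pipe-versus-file difference. It sends ffmpeg's own log messages to a separate file (ffmpeg_stderr.log is an arbitrary name) so they never mix with the binary frames on stdout, feeds the input files from a background thread and closes stdin when done, and reshapes each fixed-size read into an array OpenCV can work with.

import subprocess as sp
import threading

import numpy as np

width, height = 5120, 3072       # frame size from the question
frame_size = width * height * 3  # bytes per bgr24 frame

def feed(pipe, paths):
    # Write each raw h264 file into ffmpeg's stdin, then close it so ffmpeg can flush.
    for path in paths:
        with open(path, "rb") as f:
            pipe.stdin.write(f.read())
    pipe.stdin.close()

log = open("ffmpeg_stderr.log", "wb")  # keep ffmpeg's console output out of the data stream
pipe = sp.Popen(command, stdin=sp.PIPE, stdout=sp.PIPE, stderr=log, bufsize=10 ** 8)
threading.Thread(target=feed, args=(pipe, files), daemon=True).start()

while True:
    raw_image = pipe.stdout.read(frame_size)
    if len(raw_image) < frame_size:
        break  # ffmpeg finished (or failed; see ffmpeg_stderr.log)
    # Reinterpret the raw BGR bytes as a height x width x 3 image; copy() it if you need to modify it
    frame = np.frombuffer(raw_image, dtype=np.uint8).reshape((height, width, 3))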


-
Merging audio channels with the FFmpeg libraries in C++
13 June 2023, by Elija
C/C++ project. There are several audio tracks, each with its own number of channels. Each track is processed with an avresampler, which converts all tracks to the same format, so at the resampler output we get the set of channels from all the audio tracks. Do the FFmpeg libraries have any standard means of merging all these audio channels into another set of channels, using AVFrame::ch_layout, to form a single audio track? That is, no additional processing should be required beyond copying the channel buffers from the several AVFrames of the different audio tracks into the new positions of one AVFrame according to the new layout, and aligning the timestamps of the different tracks, since different tracks can have audio frames of different sizes with different timestamps.


For example, 1 audio track with 2 channels and 2 audio tracks with 6 channels would be merged into 16 channels, with silence in the "missing" channels; or 4 audio tracks with 2 channels merged into 8 channels; etc.


Is there a way to use an avresampler for this? Or an avfilter? Or something else?


Any source code or reading material that could help me is welcome.
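
Not a full answer, but a sketch of the direction the avfilter question points in: libavfilter's amerge filter concatenates the channels of several input streams into one multi-channel stream, and the related join filter additionally accepts an explicit output channel_layout and a per-channel map. The snippet below, kept in Python like the other examples on this page, simply drives the command-line form of amerge for the "4 stereo tracks into 8 channels" case from the question; the input and output file names are hypothetical. The same graph can be built from C++ with the avfilter_graph_* API (abuffer sources, an amerge or join instance, and an abuffersink).

import subprocess

# Hypothetical input files: four stereo tracks to be merged into one 8-channel track.
inputs = ["track1.wav", "track2.wav", "track3.wav", "track4.wav"]

cmd = ["ffmpeg"]
for path in inputs:
    cmd += ["-i", path]
# amerge concatenates its inputs' channels in order (here 4 x 2 = 8 output channels).
cmd += ["-filter_complex", f"amerge=inputs={len(inputs)}", "merged.wav"]

subprocess.run(cmd, check=True)

For the padded case (2 + 6 + 6 channels placed into a larger layout with silent gaps), join with an explicit map, possibly combined with anullsrc inputs for the silent channels, is one possibility worth checking against the filter documentation.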