
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
Other articles (34)
-
HTML5 audio and video support (French)
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used.
The HTML5 player was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
These technologies make it possible to deliver video and sound both on conventional computers (...) -
HTML5 audio and video support
13 April 2011, by
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...) -
From upload to the final video [standalone version]
31 January 2010, by
The path of an audio or video document through SPIPMotion is divided into three distinct stages.
Upload and retrieval of information about the source video
First, a SPIP article must be created and the "source" video document attached to it.
When this document is attached to the article, two actions beyond the normal behaviour are executed: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)
On other sites (6793)
-
Leading Google Analytics alternative, Matomo, parodies Christopher Nolan blockbuster ahead of the UA sunset
4 July 2023, by Erin — Press Releases -
FFmpeg library incorrectly detects pixel format of VP9 video stream
6 April 2023, by Pikachu
I am using C code to detect the pixel format of a VP9 video stream in a WebM container. FFmpeg version 6.0, full shared library build, downloaded from the official website. The operating system is Windows 10. I feed the library a VP9 video encoded with an alpha channel; its pixel format is YUVA420P. The library detects the pixel format as YUV420P.


I have found a similar question on StackOverflow.com, Is there a way to force FFMPEG to decode a video stream with alpha from a WebM video encoded with libvpx-vp9?, but it does not actually help.


When I override the decoder with libvpx, it continues to detect the pixel format as YUV420P instead of YUVA420P.

The C code follows. Note that error handling is omitted here to keep the question shorter.


#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>

#define CODEC_LIBVPX_VP9 "libvpx-vp9" /* assumed definition; not shown in the original snippet */

/* Open the input and read the stream information. */
AVFormatContext *fmt_ctx = NULL;
int err = avformat_open_input(&fmt_ctx, infp, NULL, NULL);
err = avformat_find_stream_info(fmt_ctx, NULL);
int stream = av_find_best_stream(fmt_ctx, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
AVCodecParameters *codecpar = fmt_ctx->streams[stream]->codecpar;

/* For VP9, look up the libvpx decoder by name; otherwise use the default decoder. */
const AVCodec *codec = NULL;
if (codecpar->codec_id == AV_CODEC_ID_VP9) {
    codec = avcodec_find_decoder_by_name(CODEC_LIBVPX_VP9);
} else {
    codec = avcodec_find_decoder(codecpar->codec_id);
}

/* Copy the demuxer's stream parameters into the decoder context. */
AVCodecContext *ctx = avcodec_alloc_context3(codec);
err = avcodec_parameters_to_context(ctx, codecpar);
av_log(NULL, AV_LOG_DEBUG, "Pixel format: %d.\n", ctx->pix_fmt); //TODO:DEBUG.
err = avcodec_open2(ctx, codec, NULL);



The program prints Pixel format: 0., which means AV_PIX_FMT_YUV420P, not AV_PIX_FMT_YUVA420P!
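
As an aside, logging the format by name rather than by raw enum value makes such output easier to read; a tiny sketch, assuming the same ctx as above:

#include <libavutil/pixdesc.h>

/* Prints "yuv420p" instead of the bare integer 0. */
av_log(NULL, AV_LOG_DEBUG, "Pixel format: %s.\n",
       av_get_pix_fmt_name(ctx->pix_fmt));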

If I override the pixel format manually, I am able to decode the video with the alpha channel and see the transparent background, but it breaks the logic: when a real YUV420P stream comes in and gets overridden to YUVA420P, that will be a problem.


if (codecpar->codec_id == AV_CODEC_ID_VP9) {
    if (strcmp(codec->name, CODEC_LIBVPX_VP9) == 0) {
        if (ctx->pix_fmt == AV_PIX_FMT_YUV420P) {
            ctx->pix_fmt = AV_PIX_FMT_YUVA420P;
        }
    }
}
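
A less brittle variant of the same workaround could key off the container instead of hard-coding the upgrade: the ffmpeg output below shows the Matroska/WebM demuxer exposing an alpha_mode tag in the stream metadata. A minimal sketch, assuming that tag is present exactly as in the output below:

#include <libavutil/dict.h>

/* WebM signals VP9 alpha out of band, so ask the demuxer rather than the codec. */
AVDictionaryEntry *alpha =
    av_dict_get(fmt_ctx->streams[stream]->metadata, "alpha_mode", NULL, 0);
if (alpha && strcmp(alpha->value, "1") == 0 &&
    ctx->pix_fmt == AV_PIX_FMT_YUV420P) {
    ctx->pix_fmt = AV_PIX_FMT_YUVA420P;
}

This only upgrades streams whose container actually declares alpha, so a genuine YUV420P stream is left untouched.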



At the same time, the ffmpeg tool started from the command line with the libvpx decoder reports that my video has the YUVA420P pixel format. The output follows.

D:\Temp\4>ffmpeg -c:v libvpx-vp9 -i yuva.webm
ffmpeg version 6.0-full_build-www.gyan.dev Copyright (c) 2000-2023 the FFmpeg developers
 built with gcc 12.2.0 (Rev10, Built by MSYS2 project)
 configuration: --enable-gpl --enable-version3 --enable-shared --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libaribb24 --enable-libdav1d --enable-libdavs2 --enable-libuavs3d --enable-libzvbi --enable-librav1e --enable-libsvtav1 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxvid --enable-libaom --enable-libjxl --enable-libopenjpeg --enable-libvpx --enable-mediafoundation --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-liblensfun --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-d3d11va --enable-dxva2 --enable-libvpl --enable-libshaderc --enable-vulkan --enable-libplacebo --enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libilbc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint
 libavutil 58. 2.100 / 58. 2.100
 libavcodec 60. 3.100 / 60. 3.100
 libavformat 60. 3.100 / 60. 3.100
 libavdevice 60. 1.100 / 60. 1.100
 libavfilter 9. 3.100 / 9. 3.100
 libswscale 7. 1.100 / 7. 1.100
 libswresample 4. 10.100 / 4. 10.100
 libpostproc 57. 1.100 / 57. 1.100
[libvpx-vp9 @ 000001ecdf6002c0] v1.13.0-71-g45dc0d34d
 Last message repeated 1 times
Input #0, matroska,webm, from 'yuva.webm':
 Metadata:
 ENCODER : Lavf60.3.100
 Duration: 00:00:05.55, start: 0.000000, bitrate: 227 kb/s
 Stream #0:0: Video: vp9 (Profile 0), yuva420p(tv, progressive), 1920x1080, SAR 1:1 DAR 16:9, 60 fps, 60 tbr, 1k tbn
 Metadata:
 alpha_mode : 1
 ENCODER : Lavc60.3.100 libvpx-vp9
 DURATION : 00:00:05.550000000
At least one output file must be specified



Here is my YUVA420P in the first video stream, at the end of the console output:


Stream #0:0: Video: vp9 (Profile 0), yuva420p(tv, progressive), 1920x1080, SAR 1:1 DAR 16:9, 60 fps, 60 tbr, 1k tbn





The questions are:

- How can I reliably detect the real pixel format of a VP9 video with the FFmpeg library in C code?
- Why does the C code not detect the actual pixel format even with the decoder overridden to libvpx?






Thank you.


-
TS video copied to MP4, missing first 3 frames when programmatically read (ffmpeg bug)
3 September 2023, by Vasilis Lemonidis
Running:


ffmpeg -i test.ts -fflags +genpts -c copy -y test.mp4



for this test.ts, which has 30 frames, all readable by OpenCV, I end up with 28 frames, of which 27 are readable by OpenCV. More specifically:


ffprobe -v error -select_streams v:0 -count_packets -show_entries stream=nb_read_packets -of csv=p=0 tmp.ts 



returns 30.


ffprobe -v error -select_streams v:0 -count_packets -show_entries stream=nb_read_packets -of csv=p=0 tmp.mp4



returns 28.
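
It may also help to count decoded frames rather than demuxed packets; ffprobe can do that with -count_frames, which reports nb_read_frames:

ffprobe -v error -select_streams v:0 -count_frames -show_entries stream=nb_read_frames -of csv=p=0 tmp.mp4

If nb_read_packets and nb_read_frames disagree, some packets were stored in the container but could not be decoded into frames.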


Using OpenCV in this manner:


import cv2

cap = cv2.VideoCapture(tmp_path)
readMat = []
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    readMat.append(frame)



I get 30 frames for the ts file, but only 27 for the mp4.


Could someone explain these discrepancies? I get no error during the conversion from ts to mp4:


ffmpeg version N-111746-gd53acf452f Copyright (c) 2000-2023 the FFmpeg developers
 built with gcc 11.3.0 (GCC)
 configuration: --ld=g++ --bindir=/bin --extra-libs='-lpthread -lm' --pkg-config-flags=--static --enable-static --enable-gpl --enable-libaom --enable-libass --enable-libfreetype --enable-libmp3lame --enable-libopus --enable-libsvtav1 --enable-libdav1d --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-nonfree --enable-cuda-nvcc --enable-cuvid --enable-nvenc --enable-libnpp 
 libavutil 58. 16.101 / 58. 16.101
 libavcodec 60. 23.100 / 60. 23.100
 libavformat 60. 10.100 / 60. 10.100
 libavdevice 60. 2.101 / 60. 2.101
 libavfilter 9. 10.100 / 9. 10.100
 libswscale 7. 3.100 / 7. 3.100
 libswresample 4. 11.100 / 4. 11.100
 libpostproc 57. 2.100 / 57. 2.100
[mpegts @ 0x4237240] DTS discontinuity in stream 0: packet 5 with DTS 306003, packet 6 with DTS 396001
Input #0, mpegts, from 'tmp.ts':
 Duration: 00:00:21.33, start: 3.400000, bitrate: 15 kb/s
 Program 1 
 Metadata:
 service_name : Service01
 service_provider: FFmpeg
 Stream #0:0[0x100]: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p(progressive), 300x300, 1 fps, 3 tbr, 90k tbn
Output #0, mp4, to 'test.mp4':
 Metadata:
 encoder : Lavf60.10.100
 Stream #0:0: Video: h264 (High) (avc1 / 0x31637661), yuv420p(progressive), 300x300, q=2-31, 1 fps, 3 tbr, 90k tbn
Stream mapping:
 Stream #0:0 -> #0:0 (copy)
Press [q] to stop, [?] for help
[out#0/mp4 @ 0x423e280] video:25kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 4.192123%
frame= 30 fps=0.0 q=-1.0 Lsize= 26kB time=00:00:21.00 bitrate= 10.3kbits/s speed=1e+04x 



Additional information


The video I am processing originates from a continuous stitching operation that turns still images into ts videos, produced by the update method of this class:

import cv2
import logging
import os
import shutil
import subprocess
from tempfile import NamedTemporaryFile
from typing import Optional

import numpy as np

LOGGER = logging.getLogger(__name__)  # not defined in the original snippet; a module logger is assumed


class VideoUpdater:
    def __init__(
        self, video_path: str, framerate: int, timePerFrame: Optional[int] = None
    ):
        """
        Video updater takes in a video path, and updates it using a supplied frame, based on a given framerate.
        Args:
            video_path: str: Specify the path to the video file
            framerate: int: Set the frame rate of the video
        """
        if not video_path.endswith(".mp4"):
            LOGGER.warning(
                f"File type {os.path.splitext(video_path)[1]} not supported for streaming, switching to ts"
            )
            video_path = os.path.splitext(video_path)[0] + ".mp4"

        self._ps = None
        self.env = {}
        self.ffmpeg = "/usr/bin/ffmpeg "

        self.video_path = video_path
        self.ts_path = video_path.replace(".mp4", ".ts")
        self.tfile = None
        self.framerate = framerate
        self._video = None
        self.last_frame = None
        self.curr_frame = None

    def update(self, frame: np.ndarray):
        # Normalise the incoming frame to BGR before writing it out.
        if len(frame.shape) == 2:
            frame = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)
        else:
            frame = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)
        self.writeFrame(frame)

    def writeFrame(self, frame: np.ndarray):
        """
        The writeFrame function takes a frame and writes it to the video file.
        Args:
            frame: np.ndarray: Write the frame to a temporary file
        """
        tImLFrame = NamedTemporaryFile(suffix=".png")
        tVidLFrame = NamedTemporaryFile(suffix=".ts")

        cv2.imwrite(tImLFrame.name, frame)
        # Encode the single still image as a short ts segment.
        ps = subprocess.Popen(
            self.ffmpeg
            + rf"-loop 1 -r {self.framerate} -i {tImLFrame.name} -t {self.framerate} -vcodec libx264 -pix_fmt yuv420p -y {tVidLFrame.name}",
            env=self.env,
            shell=True,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
        )
        ps.communicate()
        if os.path.isfile(self.ts_path):
            # this does not work to watch, as timestamps are not updated
            ps = subprocess.Popen(
                self.ffmpeg
                + rf'-i "concat:{self.ts_path}|{tVidLFrame.name}" -c copy -y {self.ts_path.replace(".ts", ".bak.ts")}',
                env=self.env,
                shell=True,
                stdout=subprocess.PIPE,
                stderr=subprocess.PIPE,
            )
            ps.communicate()
            shutil.move(self.ts_path.replace(".ts", ".bak.ts"), self.ts_path)
        else:
            shutil.copyfile(tVidLFrame.name, self.ts_path)
        # fixing timestamps, we dont have to wait for this operation
        ps = subprocess.Popen(
            self.ffmpeg
            + rf"-i {self.ts_path} -fflags +genpts -c copy -y {self.video_path}",
            env=self.env,
            shell=True,
            # stdout=subprocess.PIPE,
            # stderr=subprocess.PIPE,
        )
        tImLFrame.close()
        tVidLFrame.close()
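
For context, a minimal usage sketch of the class above (the file name and the frames iterable are hypothetical):

updater = VideoUpdater("stream.mp4", framerate=1)  # maintains stream.ts and stream.mp4
for img in frames:  # frames: any iterable of HxW (grayscale) or HxWx3 (RGB) numpy arrays
    updater.update(img)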