
Other articles (103)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or later. If necessary, contact your MediaSPIP administrator to find out.

  • Encoding and processing into web-friendly formats

    13 April 2011

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded to MP4, Ogv and WebM (supported by HTML5), with MP4 also playable through Flash.
    Audio files are encoded to MP3 and Ogg (supported by HTML5), with MP3 also playable through Flash.
    Where possible, text is analyzed to retrieve the data needed for search-engine indexing, and then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)

  • Accepted formats

    28 January 2010

    The following commands give information about the formats and codecs supported by the local ffmpeg installation (see the sketch after this list):
    ffmpeg -codecs
    ffmpeg -formats
    Accepted input video formats
    This list is not exhaustive; it highlights the main formats in use:
    h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10
    m4v: raw MPEG-4 video format
    flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263
    Theora
    wmv:
    Possible output video formats
    To begin with, we (...)
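
    As a quick, hedged illustration (not part of the original article): filtering the codec and format lists, and the kind of manual conversions to the web formats mentioned above, might look like the following, assuming libx264, libtheora, libvpx, libvorbis and libmp3lame were compiled into the local ffmpeg build (file names are placeholders):
    ffmpeg -codecs | grep -i h264      # is an H.264 decoder/encoder available?
    ffmpeg -formats | grep -i webm     # is the WebM container known?
    ffmpeg -i input.avi -c:v libx264 -c:a aac output.mp4
    ffmpeg -i input.avi -c:v libtheora -c:a libvorbis output.ogv
    ffmpeg -i input.avi -c:v libvpx -c:a libvorbis output.webm
    ffmpeg -i input.wav -c:a libmp3lame output.mp3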

On other sites (8464)

  • Extracting multiple video streams using FFmpeg

    11 April 2023, by Ashutosh Singla

    I have a video file that contains 4 streams: 3 video streams and one stream for metadata.

    


    Stream Info:

    


    Input #0, matroska,webm, from 'output_master.mkv':
Metadata:
title           : Azure Kinect
encoder         : libmatroska-1.4.9
creation_time   : 2021-05-20T12:11:15.000000Z
K4A_DEPTH_DELAY_NS: 0
K4A_WIRED_SYNC_MODE: MASTER
K4A_COLOR_FIRMWARE_VERSION: 1.6.110
K4A_DEPTH_FIRMWARE_VERSION: 1.6.79
K4A_DEVICE_SERIAL_NUMBER: 000123102712
K4A_START_OFFSET_NS: 298800000
Duration: 00:00:40.03, start: 0.000000, bitrate: 480934 kb/s

Stream #0:0(eng): Video: mjpeg (Baseline) (MJPG / 0x47504A4D), yuvj422p(pc, bt470bg/unknown/unknown), 2048x1536, SAR 1:1 DAR 4:3, 30 fps, 30 tbr, 1000k tbn (default)
Metadata:
  title           : COLOR
  K4A_COLOR_TRACK : 14499183330009048
  K4A_COLOR_MODE  : MJPG_1536P
Stream #0:1(eng): Video: rawvideo (b16g / 0x67363162), gray16be, 640x576, SAR 1:1 DAR 10:9, 30 fps, 30 tbr, 1000k tbn (default)
Metadata:
  title           : DEPTH
  K4A_DEPTH_TRACK : 429408169412322196
  K4A_DEPTH_MODE  : NFOV_UNBINNED
Stream #0:2(eng): Video: rawvideo (b16g / 0x67363162), gray16be, 640x576, SAR 1:1 DAR 10:9, 30 fps, 30 tbr, 1000k tbn (default)
Metadata:
  title           : IR
  K4A_IR_TRACK    : 194324406376800992
  K4A_IR_MODE     : ACTIVE
Stream #0:3: Attachment: none
Metadata:
  filename        : calibration.json
  mimetype        : application/octet-stream
  K4A_CALIBRATION_FILE: calibration.json


    


    I am using this command to extract the first stream:

    


    ffmpeg -i output_master.mkv -c copy  -map 0:v:0 out_1.mkv


    


    For the other two streams, I am using these commands:

    


    ffmpeg -i output_master.mkv -c:v ffv1 -pix_fmt gray16be -allow_raw_vfw 1 -map 0:v:1 out_2.mkv
ffmpeg -i output_master.mkv -c:v ffv1 -pix_fmt gray16be -allow_raw_vfw 1 -map 0:v:2 out_3.mkv


    


    I do not know if I am using the right commands to extract the video streams.
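
    One way to sanity-check the results (my own suggestion, not from the post) is to compare frame counts, dimensions and pixel formats of each extracted file against the corresponding source stream with ffprobe; FFV1 being lossless, the frame counts and dimensions should match:
    # source depth stream (second video stream of the input)
    ffprobe -v error -select_streams v:1 -count_frames \
            -show_entries stream=nb_read_frames,width,height,pix_fmt \
            -of default=noprint_wrappers=1 output_master.mkv
    # extracted file (its only video stream)
    ffprobe -v error -select_streams v:0 -count_frames \
            -show_entries stream=nb_read_frames,width,height,pix_fmt \
            -of default=noprint_wrappers=1 out_2.mkv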

    


  • Extracting Metadata from a video file (.mkv) using FFmpeg

    14 April 2023, by Ashutosh Singla

    I have a video file that contains 4 streams: 3 video streams and one stream for metadata.

    


    Stream Info:

    


    Input #0, matroska,webm, from 'output_master.mkv':
Metadata:
title           : Azure Kinect
encoder         : libmatroska-1.4.9
creation_time   : 2021-05-20T12:11:15.000000Z
K4A_DEPTH_DELAY_NS: 0
K4A_WIRED_SYNC_MODE: MASTER
K4A_COLOR_FIRMWARE_VERSION: 1.6.110
K4A_DEPTH_FIRMWARE_VERSION: 1.6.79
K4A_DEVICE_SERIAL_NUMBER: 000123102712
K4A_START_OFFSET_NS: 298800000
Duration: 00:00:40.03, start: 0.000000, bitrate: 480934 kb/s

Stream #0:0(eng): Video: mjpeg (Baseline) (MJPG / 0x47504A4D), yuvj422p(pc, bt470bg/unknown/unknown), 2048x1536, SAR 1:1 DAR 4:3, 30 fps, 30 tbr, 1000k tbn (default)
Metadata:
  title           : COLOR
  K4A_COLOR_TRACK : 14499183330009048
  K4A_COLOR_MODE  : MJPG_1536P
Stream #0:1(eng): Video: rawvideo (b16g / 0x67363162), gray16be, 640x576, SAR 1:1 DAR 10:9, 30 fps, 30 tbr, 1000k tbn (default)
Metadata:
  title           : DEPTH
  K4A_DEPTH_TRACK : 429408169412322196
  K4A_DEPTH_MODE  : NFOV_UNBINNED
Stream #0:2(eng): Video: rawvideo (b16g / 0x67363162), gray16be, 640x576, SAR 1:1 DAR 10:9, 30 fps, 30 tbr, 1000k tbn (default)
Metadata:
  title           : IR
  K4A_IR_TRACK    : 194324406376800992
  K4A_IR_MODE     : ACTIVE
Stream #0:3: Attachment: none
Metadata:
  filename        : calibration.json
  mimetype        : application/octet-stream
  K4A_CALIBRATION_FILE: calibration.json


    


    I am using the command below to extract Stream #0:0, Stream #0:1, and Stream #0:2, changing -map 0:X each time.

    


    ffmpeg -i output_master.mkv -c copy -allow_raw_vfw 1 -map 0:0 temp_0.mkv 


    


    To extract the metadata from all the streams and store it in metadata.txt, I am using the command below:

    


    ffprobe -v quiet -show_format -show_streams -print_format json output_master.mkv > metadata.txt
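
    If only one stream's metadata is needed, ffprobe's -select_streams option can narrow the output; for example (a sketch, not from the original post), dumping only the depth stream (index 1):
    ffprobe -v quiet -select_streams 1 -show_streams -print_format json output_master.mkv > metadata_depth.txt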


    


    What should be the command to extract Stream #0:3? Any help would be appreciated.
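
    Not an answer given in the post itself, but FFmpeg's documented -dump_attachment input option is one way to pull out an attachment stream such as the calibration file; a hedged sketch:
    # write the first attachment to calibration.json; ffmpeg may still exit complaining that
    # no output file was specified, but the attachment is written beforehand
    ffmpeg -dump_attachment:t:0 calibration.json -i output_master.mkv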

    


  • How to stream live h.264 (IP camera) video to browser? (bonus: low bandwidth and latency)

    4 October 2018, by Ryan Griggs

    I need to stream live h.264-encoded video from an IP camera to the browser, while supporting all common browsers and mobile devices (i.e. Android, Firefox, Chrome, IE, Safari (Mac OS and iOS)), and while keeping bandwidth requirements and latency to a minimum.

    MPEG-DASH requires browser support for Media Source Extensions, which are NOT supported by iOS. So that’s out.

    HLS is only supported by Safari and Edge.

    Also DASH seems to impose a latency of several seconds, which is not preferable.

    I would like to be able to chunk the incoming h.264 data (i.e. fragmented MP4), pass the chunked data to the browser via Websockets, then dump the chunks into some sort of player as they arrive.

    Broadway and its forks are JavaScript h.264 decoders, and the Broadway-stream project supports streams instead of files, but the docs are poor and I can only find examples of streaming where the source is not live.

    The most pressing question is: how do I hand the "chunked data" to a player or HTML video element as it arrives at the browser?

    I think the ideal setup would be to:

    1. Use ffmpeg to transcode the original video to a chunked format (fMP4); a rough sketch of this step follows the list
    2. Pipe the chunked output to a Node.js app which emits each chunk through a WebSocket to all connected viewers
    3. Have viewers' browsers feed each incoming chunk into some sort of decoder which renders the video.
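
    A rough sketch of steps 1 and 2 (the camera URL and the relay script are placeholders of my own, not from the post): ffmpeg can copy the camera's H.264 into fragmented MP4 and write it to stdout, where a small relay process could fan the chunks out over WebSockets.
    # copy the camera's H.264 without re-encoding, fragment on keyframes,
    # and pipe the fMP4 byte stream into a (hypothetical) WebSocket relay
    ffmpeg -rtsp_transport tcp -i rtsp://camera.local/stream \
           -c:v copy -an \
           -f mp4 -movflags frag_keyframe+empty_moov+default_base_moof \
           pipe:1 | node relay.js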

    I’m clear up to the point of handing the received chunks to a video decoder. How can that be done without depending on Media Source Extensions, and allowing viewers to join the stream at random times ?