
Other articles (79)

  • User profiles

    12 April 2011, by

    Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized; it is visible only when the visitor is logged in to the site.
    The user can reach the profile editor from their author page; a "Modifier votre profil" link in the navigation is (...)

  • The plugin: Podcasts.

    14 July 2010, by

    The problem of podcasting is, once again, a problem that reveals the state of standardization of data transport on the Internet.
    Two interesting formats exist: the one developed by Apple, heavily geared toward iTunes, whose spec is here; and the "Media RSS Module" format, which is more "open" and is notably backed by Yahoo and the Miro software.
    File types supported in the feeds
    Apple's format only allows the following formats in its feeds: .mp3 audio/mpeg .m4a audio/x-m4a .mp4 (...)

  • Customizable form

    21 June 2013, by

    This page presents the fields available in the form for publishing a media item and lists the fields that can be added. Media creation form
    For a media-type document, the default fields are: Text; Enable/disable the forum (the comment prompt can be disabled per article); License; Add/remove authors; Tags
    This form can be modified under:
    Administration > Configuration des masques de formulaire. (...)

On other sites (9208)

  • Don’t use expressions with side effects in macro parameters

    28 July 2016, by Martin Storsjö
    Don’t use expressions with side effects in macro parameters
    

    AV_WB32 can be implemented as a macro that expands its parameters
    multiple times (in case AV_HAVE_FAST_UNALIGNED isn't set and the
    compiler doesn't support GCC attributes); make sure not to read
    multiple times from the source in this case.

    Signed-off-by: Martin Storsjö <martin@martin.st>

    • [DBH] libavcodec/dxv.c
    • [DBH] libavformat/xmv.c
  • .MKV Video File not playing on Azure Kinect Viewer

    17 April 2023, by Ashutosh Singla

    I have a video file that contains 4 streams: 3 video streams and one stream for metadata. I want to first extract these streams along with the metadata, and then combine them back together with the metadata. I would like to play the result in Azure Kinect Viewer so that I can see whether I am doing anything wrong while extracting and copying the streams.

    Original Stream Info:

    Input #0, matroska,webm, from 'output_master.mkv':
    Metadata:
    title           : Azure Kinect
    encoder         : libmatroska-1.4.9
    creation_time   : 2021-05-20T12:11:15.000000Z
    K4A_DEPTH_DELAY_NS: 0
    K4A_WIRED_SYNC_MODE: MASTER
    K4A_COLOR_FIRMWARE_VERSION: 1.6.110
    K4A_DEPTH_FIRMWARE_VERSION: 1.6.79
    K4A_DEVICE_SERIAL_NUMBER: 000123102712
    K4A_START_OFFSET_NS: 298800000
    Duration: 00:00:40.03, start: 0.000000, bitrate: 480934 kb/s

    Stream #0:0(eng): Video: mjpeg (Baseline) (MJPG / 0x47504A4D), yuvj422p(pc, bt470bg/unknown/unknown), 2048x1536, SAR 1:1 DAR 4:3, 30 fps, 30 tbr, 1000k tbn (default)
    Metadata:
      title           : COLOR
      K4A_COLOR_TRACK : 14499183330009048
      K4A_COLOR_MODE  : MJPG_1536P
    Stream #0:1(eng): Video: rawvideo (b16g / 0x67363162), gray16be, 640x576, SAR 1:1 DAR 10:9, 30 fps, 30 tbr, 1000k tbn (default)
    Metadata:
      title           : DEPTH
      K4A_DEPTH_TRACK : 429408169412322196
      K4A_DEPTH_MODE  : NFOV_UNBINNED
    Stream #0:2(eng): Video: rawvideo (b16g / 0x67363162), gray16be, 640x576, SAR 1:1 DAR 10:9, 30 fps, 30 tbr, 1000k tbn (default)
    Metadata:
      title           : IR
      K4A_IR_TRACK    : 194324406376800992
      K4A_IR_MODE     : ACTIVE
    Stream #0:3: Attachment: none
    Metadata:
      filename        : calibration.json
      mimetype        : application/octet-stream
      K4A_CALIBRATION_FILE: calibration.json

    I am using the command below to extract Stream #0:0, Stream #0:1, and Stream #0:2, changing -map 0:X for each stream.

    ffmpeg -i output_master.mkv -c copy -allow_raw_vfw 1 -map 0:0 temp_0.mkv

    To extract the calibration file from the video and store it in calibration.json, I am using the command below:

    ffmpeg -dump_attachment:3 calibration.json -i output_master.mkv

    To combine the streams with the calibration file using FFmpeg, I am using the command below:

    ffmpeg -i temp_0.mkv -i temp_1.mkv -i temp_2.mkv -c copy -map 0:0 -map 1:0 -map 2:0 -allow_raw_vfw 1 -attach calibration.json -metadata:s:3 mimetype=application/octet-stream out.mkv

    Reconstructed Stream Info:

    Could not find codec parameters for stream 3 (Attachment: none): unknown codec
    Consider increasing the value for the 'analyzeduration' (0) and 'probesize' (5000000) options

    Input #0, matroska,webm, from '.\out.mkv':
    Metadata:
    title           : Azure Kinect
    K4A_COLOR_FIRMWARE_VERSION: 1.6.110
    K4A_DEPTH_FIRMWARE_VERSION: 1.6.79
    K4A_DEVICE_SERIAL_NUMBER: 000123102712
    K4A_START_OFFSET_NS: 298800000
    K4A_DEPTH_DELAY_NS: 0
    K4A_WIRED_SYNC_MODE: MASTER
    ENCODER         : Lavf60.3.100
    Duration: 00:00:40.06, start: 0.000000, bitrate: 480559 kb/s

    Stream #0:0(eng): Video: mjpeg (Baseline), yuvj422p(pc, bt470bg/unknown/unknown), 2048x1536, SAR 1:1 DAR 4:3, 30 fps, 30 tbr, 1k tbn (default)
    Metadata:
    title           : COLOR
    K4A_COLOR_TRACK : 14499183330009048
    K4A_COLOR_MODE  : MJPG_1536P
    DURATION        : 00:00:40.029000000

    Stream #0:1(eng): Video: rawvideo, rgb555le, 640x576, SAR 1:1 DAR 10:9, 30 fps, 30 tbr, 1k tbn (default)
    Metadata:
    title           : DEPTH
    K4A_DEPTH_TRACK : 429408169412322196
    K4A_DEPTH_MODE  : NFOV_UNBINNED
    DURATION        : 00:00:40.062000000

    Stream #0:2(eng): Video: rawvideo, rgb555le, 640x576, SAR 1:1 DAR 10:9, 30 fps, 30 tbr, 1k tbn (default)
    Metadata:
    title           : IR
    K4A_IR_TRACK    : 194324406376800992
    K4A_IR_MODE     : ACTIVE
    DURATION        : 00:00:40.062000000

    Stream #0:3: Attachment: none
    Metadata:
    filename        : calibration.json
    mimetype        : application/octet-stream

    However, I cannot play the video in Azure Kinect Viewer; it displays "failed to open recording".

    Any help would be appreciated.

  • Video files recorded in Google Chrome have stuttering audio

    4 June 2018, by maxpaj

    Background

    I’m developing a platform where users can record videos of themselves or their screen and send them as video messages to customers / clients.

    I have limited users to using my application only with Google Chrome, and I'm using the MediaRecorder API to record video data from the user's screen or webcam. The codecs used for recording are VP8/Opus (WebM container).

    I need the videos to play in as many browsers as possible, so I'm using a 3rd-party service to transcode videos from whatever format I get from the users to an H.264/AAC MP4 container (caniuse MPEG-4/H.264).

    Issue

    Lately I've seen that some videos recorded on Mac OSX machines have the video and audio out of sync, or the video and audio stutter, depending on which player is used. I call these video files corrupt, for lack of a better word. Playing a corrupt file in Google Chrome yields smooth audio; playing the same file in VLC on my Windows machine yields stuttering audio.

    When I run the corrupt video files through the transcoding service I get video files with stuttering audio, no matter which player I’m using.

    This is an unwanted result and pretty much unacceptable since the audio needs to be smooth in order for the recipient of a video to not be bothered with the quality.

    Debugging

    According to the transcoding service's support team, this happens because of their mechanisms that try to sync up the audio and video from the corrupt file:

    Inspecting our encoding logs, I've noticed the following kind of
    warnings:

    [2018-05-16 14:08:38.009] [pcm_s16le @ 0x1d608c0] pcm_encode_frame: filling in for 5856 missing samples (122 ms) before pts 40800 to correct sync!
    [2018-05-16 14:08:38.009] [pcm_s16le @ 0x1d608c0] pcm_encode_frame: dropping 2880 samples (60 ms) at pts 43392 to help correct sync to -3168 samples (-66 ms)!

    The problem here comes from the way that the audio in the original
    source file is encoded.

    -

    you should ensure that the audio is not out of sync (audio timestamps
    are correct) in your source file before submitting the job

    Running a corrupt file through ffmpeg on my own machine, re-encoding with the same codecs, produces the same kind of stuttering video. The logs show an alarming number of errors. Here is a sample of the log output:

    [libopus @ 0000029938e24d80] Queue input is backward in time
    [webm @ 0000029938e09b00] Non-monotonous DTS in output stream 0:1; previous: 15434, current: 15394; changing to 15434. This may result in incorrect timestamps in the output file.
    [webm @ 0000029938e09b00] Non-monotonous DTS in output stream 0:1; previous: 15434, current: 15414; changing to 15434. This may result in incorrect timestamps in the output file.
    [libopus @ 0000029938e24d80] Queue input is backward in time
    [webm @ 0000029938e09b00] Non-monotonous DTS in output stream 0:1; previous: 15539, current: 15499; changing to 15539. This may result in incorrect timestamps in the output file.
    [webm @ 0000029938e09b00] Non-monotonous DTS in output stream 0:1; previous: 15539, current: 15519; changing to 15539. This may result in incorrect timestamps in the output file.
    [libopus @ 0000029938e24d80] Queue input is backward in time
    [webm @ 0000029938e09b00] Non-monotonous DTS in output stream 0:1; previous: 15667, current: 15627; changing to 15667. This may result in incorrect timestamps in the output file.
    [webm @ 0000029938e09b00] Non-monotonous DTS in output stream 0:1; previous: 15667, current: 15647; changing to 15667. This may result in incorrect timestamps in the output file.
    [libopus @ 0000029938e24d80] Queue input is backward in time

    I tried running the same inputs through another transcoding service, and those outputs worked a lot better: the video still stuttered, but the audio played smoothly, which is more important for my application's use case.

    To my knowledge, this has so far only occurred for users on Mac OSX machines.

    Questions

    1. Is there anything I can do to make the files work better? Or is this entirely a consequence of how Google Chrome encodes video and audio?

    2. One step in the right direction would be the ability to detect when a video is corrupt. How can I do that?