Advanced search

Media (1)

Keyword: - Tags -/musée

Other articles (36)

  • Supporting all media types

    13 April 2011

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, WAV and more; video: AVI, MP4, OGV, MPG, MOV, WMV and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (HTML, CSS), LaTeX, Google Earth and (...)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The player was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • From upload to final video [standalone version]

    31 January 2010

    The path of an audio or video document through SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First, a SPIP article must be created and the "source" video document attached to it.
    When this document is attached to the article, two actions beyond the normal behavior are executed: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)

On other sites (3498)

  • avcodec/h264_refs: Rewrite code to make control flow clearer

    26 March 2024, by Andreas Rheinhardt
    avcodec/h264_refs: Rewrite code to make control flow clearer
    

    While this change IMO makes the control flow clearer for the human reader, it is especially important for GCC: it erroneously believes that it is possible to enter the SHORT2(UNUSED|LONG) cases without having entered the preceding block that initializes pic, frame_num, structure and j; it would emit -Wmaybe-uninitialized warnings for these variables if they were not pseudo-initialized with av_uninit(). This patch allows the latter to be removed.

    Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>

    • [DH] libavcodec/h264_refs.c
  • FFmpeg mixing .wav files volume control [closed]

    1 April 2024, by JoonSeo Yang

    I am working in Java, using the ffmpeg command line to mix two .wav files into one .wav file, but I would like to know how to control the volume of each original file.

    My problem: I have two audio files (a human voice track and background music), but the BGM is so loud that I can barely hear the voice. I would like to either turn the BGM file down or turn the voice file up.

    Here is the relevant part of my code:

        String ffmpegCommand = file_exe.getAbsolutePath()
            + String.format(
                " -i %s -i %s -filter_complex amix=inputs=2:duration=first:dropout_transition=3 %s",
                file_1.getAbsolutePath(), file_2.getAbsolutePath(), outputFilePath);
        Process process = Runtime.getRuntime().exec(ffmpegCommand);

    It is quite simple code. I need help with this, and I would be grateful for pointers to the documentation that covers it.

    PS. Sorry for my poor English, I'm not a native speaker :D

    I have been looking through the documentation, but it's quite hard to find what I need.
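    One hedged way to adapt the command above, assuming the same two inputs (the file names and the 1.5 / 0.3 gain values below are illustrative, not from the post): FFmpeg's volume filter can scale each stream before amix combines them.

```shell
# Sketch (assumption): scale each input with the volume filter before amix.
# [0:a] is the first -i file (voice here), [1:a] the second (BGM);
# the gains 1.5 and 0.3 are illustrative values to adjust to taste.
FILTER='[0:a]volume=1.5[voice];[1:a]volume=0.3[bgm];[voice][bgm]amix=inputs=2:duration=first:dropout_transition=3'
# The full command line (printed here rather than run):
echo ffmpeg -i voice.wav -i bgm.wav -filter_complex "$FILTER" mixed.wav
```

    In the Java code this filtergraph would replace the bare amix=... argument built by String.format; since the graph now contains semicolons and brackets, passing it as a separate argument via ProcessBuilder is safer than a single exec string.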

  • Anomaly in raw I420 video generated by GStreamer

    16 April 2024, by Lea

    Situation

    I'm trying to convert RGBA image data to YUV420P in multiple threads, then send this data to a main thread, which splits the data it receives from each thread into separate frames and combines them, in order, into a video. Currently I'm using FFmpeg for this task, but I've found GStreamer to do a quicker job at colorspace conversion than FFmpeg.

    Problem

    The raw video generated by GStreamer does not match what is expected of YUV 4:2:0 planar video data. To test this, I've made a raw RGBA test video of 3 red 4x4 (16-pixel) frames.

        ffmpeg -f lavfi -i color=color=red -t 3 -r 1 -s 4x4 -f rawvideo -pix_fmt rgba ./input.rgba

    Example data

    FFmpeg

    First, converting it via FFmpeg, as I'm currently doing:

        ffmpeg -f rawvideo -pix_fmt rgba -s 4x4 -i input.rgba -f rawvideo -pix_fmt yuv420p ./ffmpeg.yuv420p

    This creates a 72-byte file => 1.5 bytes per pixel, 24 bytes per frame: as expected for yuv420p data.
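    The expected size can be double-checked with simple arithmetic: one full-resolution Y plane plus two quarter-resolution chroma planes per frame.

```shell
# yuv420p for a 4x4 video: Y = 4*4 bytes, U = 2*2 bytes, V = 2*2 bytes per frame
frame=$(( 4*4 + 2*2 + 2*2 ))
total=$(( frame * 3 ))
echo "$frame bytes per frame, $total bytes total"   # prints "24 bytes per frame, 72 bytes total"
```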

        $ hexdump -C ./ffmpeg.yuv420p
        00000000  51 51 51 51 50 50 50 50  50 50 50 50 50 50 50 50  |QQQQPPPPPPPPPPPP|
        00000010  5b 5b 5b 5b ee ee ee ee  51 51 51 51 50 50 50 50  |[[[[....QQQQPPPP|
        00000020  50 50 50 50 50 50 50 50  5b 5b 5b 5b ee ee ee ee  |PPPPPPPP[[[[....|
        00000030  51 51 51 51 50 50 50 50  50 50 50 50 50 50 50 50  |QQQQPPPPPPPPPPPP|
        00000040  5b 5b 5b 5b ee ee ee ee                           |[[[[....|

    GStreamer

    Now doing the same via GStreamer, with the I420 format, which corresponds to yuv420p per the documentation:

        gst-launch-1.0 filesrc location=./input.rgba ! rawvideoparse format=rgba width=4 height=4 \
        ! videoconvert ! video/x-raw,format=I420 ! filesink location=./gstreamer.yuv420p

    This creates a 96-byte file => 2 bytes per pixel, 32 bytes per frame (?): unusual for yuv420p data. Additionally, none of the sections match the FFmpeg output, ruling out simple padding.

        $ hexdump -C ./gstreamer.yuv420p
        00000000  50 50 50 50 50 50 50 50  50 50 50 50 50 50 50 50  |PPPPPPPPPPPPPPPP|
        00000010  5a 5a 00 00 5a 5a 00 00  ee ee 00 00 ed ed 00 00  |ZZ..ZZ..........|
        00000020  50 50 50 50 50 50 50 50  50 50 50 50 50 50 50 50  |PPPPPPPPPPPPPPPP|
        00000030  5a 5a 63 6b 5a 5a 77 62  ee ee 78 6d ed ed 2c 20  |ZZckZZwb..xm.., |
        00000040  50 50 50 50 50 50 50 50  50 50 50 50 50 50 50 50  |PPPPPPPPPPPPPPPP|
        00000050  5a 5a 00 00 5a 5a 00 00  ee ee 00 00 ed ed 00 00  |ZZ..ZZ..........|
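    One reading of the dump above, offered as an assumption rather than a confirmed answer: GStreamer appears to pad each plane's row stride up to a 4-byte boundary, so each 2-byte chroma row carries 2 trailing bytes (the 00 or garbage bytes), which would account for the frame size.

```shell
# Assumed I420 layout for a 4x4 frame with row strides rounded up to 4 bytes:
#   Y: 4 rows * 4 bytes = 16;  U: 2 rows * 4 bytes = 8;  V: 2 rows * 4 bytes = 8
gst_frame=$(( 4*4 + 2*4 + 2*4 ))
gst_total=$(( gst_frame * 3 ))
echo "$gst_frame bytes per frame, $gst_total bytes total"   # prints "32 bytes per frame, 96 bytes total"
```

    If that assumption holds, stripping the last 2 bytes of every chroma row would recover the same 24-byte-per-frame layout that the FFmpeg output uses.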

    This output also cannot be interpreted correctly as yuv420p by FFmpeg, which leads to corrupted frames when trying to do so:

        ffmpeg -f rawvideo -pix_fmt yuv420p -s 4x4 -i gstreamer.yuv420p -f image2 "./%d.png"

    [Image: Corrupted yuv420p frames generated by GStreamer]

    Solution?

    For my immediate problem, I need a way to chop up the raw I420 video generated by GStreamer into separate frames to work with. However, I would also like to understand why GStreamer behaves this way and which key piece I'm missing.

    Additional notes

    I've ruled out an issue with the input in GStreamer, as piping it to autovideosink leads to a normal result. I'm also aware of multifilesink, but I would like to avoid writing to disk and rather work with the data directly in buffers.
