Advanced search

Media (91)

Other articles (11)

  • Accepted formats

    28 January 2010

    The following commands provide information about the formats and codecs supported by the local ffmpeg installation:
    ffmpeg -codecs
    ffmpeg -formats
    Accepted input video formats
    This list is not exhaustive; it highlights the main formats in use: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
    Possible output video formats
    To begin with, we (...)
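
    As a quick illustration of the ffmpeg -codecs and ffmpeg -formats commands above, their output can also be checked programmatically for a given codec or container (a minimal sketch, assuming an ffmpeg binary on the PATH; the lookups are only examples):

    import subprocess

    # Ask the local ffmpeg build for its codec and container format tables.
    codecs = subprocess.run(["ffmpeg", "-codecs"], capture_output=True, text=True).stdout
    formats = subprocess.run(["ffmpeg", "-formats"], capture_output=True, text=True).stdout

    # Illustrative lookups for formats mentioned in this article.
    print("h264 codec listed:", "h264" in codecs)
    print("flv format listed:", "flv" in formats)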

  • Adding notes and captions to images

    7 February 2011

    To be able to add notes and captions to images, the first step is to install the "Légendes" plugin.
    Once the plugin is activated, you can configure it in the configuration area to change the rights for creating, modifying and deleting notes. By default, only site administrators can add notes to images.
    Changes when adding a media item
    When adding a media item of type "image", a new button appears above the preview (...)

  • General document management

    13 May 2011

    MédiaSPIP never modifies the original document that is uploaded.
    For each uploaded document it performs two successive operations: creating an additional version that can easily be viewed online, while the original remains downloadable in case it cannot be read in a web browser; and retrieving the original document's metadata to describe the file textually.
    The tables below explain what MédiaSPIP can do (...)

On other sites (3241)

  • Concatenate chunk containing headers to another chunk in h264

    18 November 2014, by Ortixx

    I'm trying to extract thumbnails from a torrent stream by downloading the first couple of chunks to get the headers, plus another set of chunks from the middle, and then concatenating them into a single video file.

    For this I'm using nodejs, but I'm having trouble with the concatenation part. Obviously the headers include the length of the video, so if I simply concatenate another chunk onto the end of the header chunk, it won't work.

    In other words, I have two chunks of a video file: the first contains the headers plus some material, and the other consists purely of video stream data. I want to combine the two into a single video file.
    So my question is: how can I make this work properly, if at all?
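
    For reference, a minimal sketch of the byte-level splice described above (the chunk file names are hypothetical):

    # Naive byte-wise concatenation of the two downloaded chunks
    # (hypothetical file names for the header chunk and a middle chunk).
    with open("header_chunk.bin", "rb") as head, \
         open("middle_chunk.bin", "rb") as mid, \
         open("combined.mp4", "wb") as out:
        out.write(head.read())  # container header + first frames
        out.write(mid.read())   # raw chunk from the middle of the stream

    Whether such a splice can work at all depends on the container: formats designed for streaming, such as MPEG-TS, tolerate entering mid-stream far better than MP4, whose header indexes sample offsets across the whole file.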

  • FFmpeg on Android: undefined references to libavcodec functions, although it is listed on the command line

    24 March 2019, by dimsuz

    I have a problem with unresolved references to ffmpeg's libavcodec functions; so far I have failed to find the answer in other places (including my mind) :)

    Let me describe my setup. It takes some space, but it is really basic; it might be that I'm failing to see some error...

    I built an FFMPeg with ndk r5 toolchain, ffmpeg port I got from http://bambuser.com/opensource (as recommended in other questions here). It built fine, so I put several static libraries in my project like this :

    <project>/jni/bambuser_ffmpeg/libavcodec.a
    <project>/jni/bambuser_ffmpeg/libavformat.a
    <project>/jni/bambuser_ffmpeg/libavcore.a
    <project>/jni/bambuser_ffmpeg/libavutil.a

    Next, I created an Android.mk in the bambuser_ffmpeg folder to list these libs as prebuilt ones:

    LOCAL_PATH := $(call my-dir)

    include $(CLEAR_VARS)
    LOCAL_MODULE := bambuser-libavcore
    LOCAL_SRC_FILES := libavcore.a
    include $(PREBUILT_STATIC_LIBRARY)

    include $(CLEAR_VARS)
    LOCAL_MODULE := bambuser-libavformat
    LOCAL_SRC_FILES := libavformat.a
    include $(PREBUILT_STATIC_LIBRARY)

    (same for the other two libs)

    Next, I have another module that references these libs in its Android.mk, sets up include paths, etc.:

    LOCAL_PATH := $(call my-dir)

    include $(CLEAR_VARS)

    LOCAL_MODULE := ffmpegtest
    LOCAL_STATIC_LIBRARIES := bambuser-libavcodec bambuser-libavcore bambuser-libavformat bambuser-libavutil
    LOCAL_SRC_FILES := ffmpeg_test.cpp
    LOCAL_C_INCLUDES := $(LOCAL_PATH)/../bambuser_ffmpeg/include
    LOCAL_LDLIBS    := -llog -lz

    include $(BUILD_SHARED_LIBRARY)

    And finally I have my ffmpeg_test.cpp, which is really basic, like this:

    #include <jni.h>

    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    }

    extern "C" {
       JNIEXPORT jint JNICALL Java_com_the7art_ffmpegtest_PaintThread_testFFMpeg(JNIEnv* env, jobject obj, jstring fileName);
    }

    JNIEXPORT jint JNICALL Java_com_the7art_ffmpegtest_PaintThread_testFFMpeg(JNIEnv* env, jobject obj, jstring fileName)
    {
       av_register_all();
       return 0;
    }

    When I run ndk-build, it compiles fine, but at link time it prints unresolved references to almost every function in libavcodec. It looks like only this lib's functions fail to be located:

    /home/dimka/src/mobile/android/ffmpegtest/obj/local/armeabi/libavformat.a(allformats.o): In function `av_register_all':
    /home/dimka/work/suzy/tmp/ffmpeg-android/ffmpeg/libavformat/allformats.c:47: undefined reference to `avcodec_register_all'
    /home/dimka/src/mobile/android/ffmpegtest/obj/local/armeabi/libavformat.a(utils.o): In function `parse_frame_rate':
    /home/dimka/work/suzy/tmp/ffmpeg-android/ffmpeg/libavformat/utils.c:3240: undefined reference to `av_parse_video_rate'
    /home/dimka/src/mobile/android/ffmpegtest/obj/local/armeabi/libavformat.a(utils.o): In function `parse_image_size':
    /home/dimka/work/suzy/tmp/ffmpeg-android/ffmpeg/libavformat/utils.c:3234: undefined reference to `av_parse_video_size'
    /home/dimka/src/mobile/android/ffmpegtest/obj/local/armeabi/libavformat.a(utils.o): In function `flush_packet_queue':
    /home/dimka/work/suzy/tmp/ffmpeg-android/ffmpeg/libavformat/utils.c:1277: undefined reference to `av_free_packet'
    /home/dimka/work/suzy/tmp/ffmpeg-android/ffmpeg/libavformat/utils.c:1283: undefined reference to `av_free_packet'
    /home/dimka/src/mobile/android/ffmpegtest/obj/local/armeabi/libavformat.a(utils.o): In function `get_audio_frame_size':
    /home/dimka/work/suzy/tmp/ffmpeg-android/ffmpeg/libavformat/utils.c:766: undefined reference to `av_get_bits_per_sample'
    /home/dimka/src/mobile/android/ffmpegtest/obj/local/armeabi/libavformat.a(utils.o): In function `ff_interleave_add_packet':
    /home/dimka/work/suzy/tmp/ffmpeg-android/ffmpeg/libavformat/utils.c:2909: undefined reference to `av_dup_packet'
    and so on...

    I can't figure out why this is happening. I tried running ndk-build V=1 to check the actual linking command, and libavcodec is sitting right there, as it should be. All the other ffmpeg libs are there too.

    Any hints?

  • Why JPEG compressing an uncompressed image differs from its original (FFmpeg, NvJPEG, ...)

    22 June 2021, by Fruchtzwerg

    I am currently struggling to understand why recompressing an uncompressed JPEG image differs from its original.

    It's clear that JPEG is a lossy compression, but what if the image to compress is itself an uncompressed (decoded) image, which means all sampling losses are already included? In other words: downsampling and the DCT should be invertible at this point without losing data.

    [Figure: JPEG algorithm]

    To make sure the losses are not affected by the color space conversion, this step is skipped and YUV images are used.

    1. Compress YUV image to JPEG (image.yuv —> image.yuv.jpg)
    2. Uncompress JPEG image to YUV image (image.yuv.jpg —> image.yuv.jpg.yuv)
    3. Compress YUV image to JPEG (image.yuv.jpg.yuv —> image.yuv.jpg.yuv.jpg)
    4. Uncompress JPEG image to YUV image (image.yuv.jpg.yuv.jpg —> image.yuv.jpg.yuv.jpg.yuv)

    Step 1 includes a lossy compression, so we will not deal with this step any further. What interests me is what happens afterwards:

    Uncompressing the JPEG image back to YUV (step 2) yields an image that already fits all the sampling steps perfectly if it is compressed again (step 3). So the JPEG image after step 3 should (from my understanding) be exactly the same as after step 1. Likewise, the YUV images after step 4 and step 2 should equal each other.

    Looking at the steps for one 8x8 block, the following simplified sequence should illustrate what I am trying to describe. Let's start with the original YUV image, which can only be decompressed by losing all decimal places:

    [ 1.123, 2.345, 3.456, ... ]    (YUV)
        DCT + Quantization
    [ -26, -3, -6, ... ]            (Quantized frequency space)
        Inverse DCT + Quantization
    [ 1, 2, 3, ... ]                (YUV)

    Doing this with input that already matches all the potentially lossy steps (using round numbers in my example), the decompressed image should match its original:

    [ 1, 2, 3, ... ]                (YUV)
        DCT + Quantization
    [ -26, -3, -6, ... ]            (Quantized frequency space)
        Inverse DCT + Quantization
    [ 1, 2, 3, ... ]                (YUV)
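
    To check this reasoning on a single 8x8 block, here is a minimal sketch: an assumption-laden toy model with an orthonormal DCT, one flat quantization table, and no clamping or chroma subsampling, not FFmpeg's or NvJPEG's actual code:

    import numpy as np

    # Toy model of one 8x8 JPEG block: orthonormal DCT-II, a single flat
    # quantization table, rounding of decoded samples. Real codecs also
    # clamp to [0, 255] and subsample chroma, so this is only a sketch.
    C = np.array([[np.sqrt((1.0 if k == 0 else 2.0) / 8.0) *
                   np.cos((2 * n + 1) * k * np.pi / 16.0)
                   for n in range(8)] for k in range(8)])
    Q = np.full((8, 8), 16.0)  # hypothetical flat quantization table

    def roundtrip(block):
        """DCT + quantization, then dequantization + inverse DCT + rounding."""
        coeffs = np.round(C @ block @ C.T / Q)   # compress
        return np.round(C.T @ (coeffs * Q) @ C)  # decompress

    rng = np.random.default_rng(0)
    x0 = rng.integers(0, 256, (8, 8)).astype(float)  # original block
    x1 = roundtrip(x0)  # steps 1-2
    x2 = roundtrip(x1)  # steps 3-4
    print("x2 == x1:", np.array_equal(x1, x2))  # often, but not always, True

    With pure rounding the second pass reproduces the first most of the time, but a coefficient that lands near a quantization boundary can flip between the two passes, and a real decoder's clamping to [0, 255] adds further asymmetry; both are candidate sources for the differences observed below.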

    There are also some sources and discussions that confirm my idea.

    So much for theory. In practice, I've run these steps using ffmpeg and Nvidia's JPEG samples (using NvJPEGEncoder).

    ffmpeg:

    #Create YUV image
    ffmpeg -y -i image.jpg -s 1920x1080 -pix_fmt yuv420p image.yuv
    #YUV to JPEG
    ffmpeg -y -s 1920x1080 -pix_fmt yuv420p -i image.yuv image.yuv.jpg
    #JPEG TO YUV
    ffmpeg -y -i image.yuv.jpg -s 1920x1080 -pix_fmt yuv420p image.yuv.jpg.yuv
    #YUV to JPEG
    ffmpeg -y -s 1920x1080 -pix_fmt yuv420p -i image.yuv.jpg.yuv image.yuv.jpg.yuv.jpg
    #JPEG TO YUV
    ffmpeg -y -i image.yuv.jpg.yuv.jpg -s 1920x1080 -pix_fmt yuv420p image.yuv.jpg.yuv.jpg.yuv
    #YUV to JPEG
    ffmpeg -y -s 1920x1080 -pix_fmt yuv420p -i image.yuv.jpg.yuv.jpg.yuv image.yuv.jpg.yuv.jpg.yuv.jpg

    Nvidia:

    #Create YUV image
    ./jpeg_decode num_files 1 image.jpg image.yuv
    #YUV to JPEG
    ./jpeg_encode image.yuv 1920 1080 image.yuv.jpg
    #JPEG TO YUV
    ./jpeg_decode num_files 1 image.yuv.jpg image.yuv.jpg.yuv
    #YUV to JPEG
    ./jpeg_encode image.yuv.jpg.yuv 1920 1080 image.yuv.jpg.yuv.jpg
    #JPEG TO YUV
    ./jpeg_decode num_files 1 image.yuv.jpg.yuv.jpg image.yuv.jpg.yuv.jpg.yuv
    #YUV to JPEG
    ./jpeg_encode image.yuv.jpg.yuv.jpg.yuv 1920 1080 image.yuv.jpg.yuv.jpg.yuv.jpg

    But a comparison of the images

    • image.yuv.jpg.yuv and image.yuv.jpg.yuv.jpg.yuv
    • image.yuv.jpg.yuv.jpg and image.yuv.jpg.yuv.jpg.yuv.jpg

    shows differences in the files. That brings me to my question of why and where the differences arise, since from my understanding the files should be equal.
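
    One way to quantify how far apart the supposedly identical files are (a small sketch; it assumes complete yuv420p frames at 1920x1080 as produced by the pipeline above):

    import numpy as np

    # Byte-wise comparison of two raw yuv420p frames from the pipeline above.
    # At 1920x1080 each file should hold 1920 * 1080 * 3 // 2 = 3110400 bytes.
    a = np.fromfile("image.yuv.jpg.yuv", dtype=np.uint8).astype(int)
    b = np.fromfile("image.yuv.jpg.yuv.jpg.yuv", dtype=np.uint8).astype(int)
    print("differing bytes:", np.count_nonzero(a - b), "of", a.size)
    print("max abs difference:", np.abs(a - b).max())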
