
Other articles (63)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • MediaSPIP Core: Configuration

    9 November 2010

    By default, MediaSPIP Core provides three different configuration pages (these pages rely on the CFG configuration plugin to work): a page for the general configuration of the skeleton; a page for the configuration of the site's home page; a page for the configuration of the sections.
    It also provides an additional page, which only appears when certain plugins are enabled, to control their display and specific features (...)

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

On other sites (9654)

  • ffmpeg - seamless crossfade loop for part of a video

    14 January 2021, by Flamin GO

    I need to crossfade the last X frames of a video with the first X frames to obtain a seamless loop, but only for the required part of the video.

    Here's the answer for looping the entire video.

    What I currently have:
    (Whole video duration = 25 sec. Cut (result) part = 15 sec (from the 5 sec to the 20 sec position). Transition = 1 sec.)

    ffmpeg -i input.mp4 -ss 5 -to 20 -filter_complex
    "[0]split[body][pre];
     [pre]trim=duration=1,format=yuva420p,fade=d=1:alpha=1,setpts=PTS+( (15+(5-1)) /TB)[jt];
     [body]trim=1,setpts=PTS-STARTPTS[main];
     [main][jt]overlay"  -c:v libx264 -preset veryslow -b:v 2500K output.mp4
 
    In this case everything works, but the piece superimposed at the end of the resulting video comes from 0 to 1 second of the original video, not from 4 to 5 seconds as it should.

    I have read the official ffmpeg documentation and tried adjusting the "start/end" parameters of "trim/fade" together with "setpts", but I always just got another batch of bugs.
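
    A possible direction (not from the original thread, and untested): give the
    [pre] branch an explicit trim=start=4:end=5 and zero-base its timestamps
    with setpts=PTS-STARTPTS before fading and shifting, so the overlaid second
    really comes from the 4-5 sec range of the source. Sketched below in Python
    via subprocess; the file names and second marks simply mirror the numbers
    in the question.

    import subprocess

    # Hypothetical repair of the question's filtergraph: the [pre] piece is
    # sampled from 4-5 sec of the input instead of the implicit 0-1 sec,
    # then zero-based so fade and the PTS shift behave as intended.
    filter_complex = (
        "[0]split[body][pre];"
        "[pre]trim=start=4:end=5,setpts=PTS-STARTPTS,"
        "format=yuva420p,fade=d=1:alpha=1,setpts=PTS+((15+(5-1))/TB)[jt];"
        "[body]trim=1,setpts=PTS-STARTPTS[main];"
        "[main][jt]overlay"
    )

    subprocess.run([
        "ffmpeg", "-i", "input.mp4",
        "-ss", "5", "-to", "20",
        "-filter_complex", filter_complex,
        "-c:v", "libx264", "-preset", "veryslow", "-b:v", "2500K",
        "output.mp4",
    ], check=True)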

  • Why doesn't the output of ffmpeg-python match the image shape?

    9 November 2019, by Swi Jason

    I used the ffmpeg-python module to convert a video to images. Specifically, I used the code provided by the official git repo of ffmpeg-python, as below:

    import ffmpeg
    import numpy as np

    in_filename = 'input.mp4'  # hypothetical path (not given in the question)
    frame_num = 0              # hypothetical frame index

    # Select one frame and emit it as a single JPEG image on stdout.
    out, _ = (
       ffmpeg
       .input(in_filename)
       .filter('select', 'gte(n,{})'.format(frame_num))
       .output('pipe:', vframes=1, format='image2', vcodec='mjpeg')
       .run(capture_stdout=True)
    )
    im = np.frombuffer(out, 'uint8')
    print(im.shape[0]/3/1080)
    # 924.907098765432

    The original video has size (1920, 1080) and pix_fmt 'yuv420p', but the output of the above code is not 1920.

    I have figured out by myself that the output of ffmpeg.run() is not a decoded image array but a byte string encoded in JPEG format. To restore the image to a numpy array, simply use the cv2.imdecode() function. For example,

    im = cv2.imdecode(im, cv2.IMREAD_COLOR)

    However, I can't use OpenCV on my embedded Linux system. So my question is: can I get numpy output from ffmpeg-python directly, without converting it with OpenCV?
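
    One possible answer (not part of the excerpt above): ask ffmpeg for
    rawvideo with an explicit pixel format instead of MJPEG, so the piped
    bytes are a plain pixel buffer that numpy can reshape directly. The
    1920x1080 dimensions come from the question; the file name and frame
    index are placeholders.

    import ffmpeg
    import numpy as np

    in_filename = 'input.mp4'  # placeholder path
    frame_num = 0              # placeholder frame index

    # rawvideo/rgb24 delivers exactly width*height*3 bytes per frame,
    # so no JPEG decoding is needed on the receiving side.
    out, _ = (
        ffmpeg
        .input(in_filename)
        .filter('select', 'gte(n,{})'.format(frame_num))
        .output('pipe:', vframes=1, format='rawvideo', pix_fmt='rgb24')
        .run(capture_stdout=True)
    )
    im = np.frombuffer(out, np.uint8).reshape([1080, 1920, 3])
    print(im.shape)  # (1080, 1920, 3)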

  • libavfi/dnn: add LibTorch as one of DNN backend

    15 March 2024, by Wenbin Chen

    PyTorch is an open source machine learning framework that accelerates
    the path from research prototyping to production deployment. Official
    website: https://pytorch.org/. We refer to the C++ library of PyTorch
    as LibTorch below.

    To build FFmpeg with LibTorch, please take the following steps as a
    reference:
    1. Download the LibTorch C++ library from
    https://pytorch.org/get-started/locally/;
    select C++/Java as the language and the other options as needed.
    Please download the cxx11 ABI version
    (libtorch-cxx11-abi-shared-with-deps-*.zip).
    2. Unzip the file to a directory of your own, with the command
    unzip libtorch-shared-with-deps-latest.zip -d your_dir
    3. Export libtorch_root/libtorch/include and
    libtorch_root/libtorch/include/torch/csrc/api/include to $PATH, and
    export libtorch_root/libtorch/lib/ to $LD_LIBRARY_PATH.
    4. Configure FFmpeg with ../configure --enable-libtorch \
    --extra-cflags=-I/libtorch_root/libtorch/include \
    --extra-cflags=-I/libtorch_root/libtorch/include/torch/csrc/api/include \
    --extra-ldflags=-L/libtorch_root/libtorch/lib/
    5. make

    To run FFmpeg DNN inference with the LibTorch backend:
    ./ffmpeg -i input.jpg -vf \
    dnn_processing=dnn_backend=torch:model=LibTorch_model.pt -y output.jpg

    The LibTorch_model.pt can be generated in Python with the
    torch.jit.script() API; see
    https://pytorch.org/tutorials/advanced/cpp_export.html, the official
    PyTorch guide on how to convert and load a TorchScript model.
    Please note that torch.jit.trace() is not recommended, since it does
    not support inputs of varying size.
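
    As an illustration (not part of the commit), a minimal Python sketch of
    producing such a file; the resnet18 choice is an arbitrary assumption
    (any scriptable nn.Module works, and whether a given model is usable by
    dnn_processing depends on its input/output layout):

    import torch
    import torchvision

    # Hypothetical example: script a model and save it in TorchScript form
    # so the LibTorch backend can load it as LibTorch_model.pt.
    model = torchvision.models.resnet18(weights=None).eval()
    scripted = torch.jit.script(model)   # preferred over torch.jit.trace()
    scripted.save("LibTorch_model.pt")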

    Signed-off-by: Ting Fu <ting.fu@intel.com>
    Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
    Reviewed-by: Guo Yejun <yejun.guo@intel.com>

    • [DH] configure
    • [DH] libavfilter/dnn/Makefile
    • [DH] libavfilter/dnn/dnn_backend_torch.cpp
    • [DH] libavfilter/dnn/dnn_interface.c
    • [DH] libavfilter/dnn_filter_common.c
    • [DH] libavfilter/dnn_interface.h
    • [DH] libavfilter/vf_dnn_processing.c