Other articles (67)

  • MediaSPIP version 0.1 Beta

    16 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you need to manually install all of the software dependencies on the server.
    If you want to use this archive for an installation in "farm" mode, you will also need to make other modifications (...)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • Improvements to the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the usability of multiple-select fields. See the two images below to compare.
    To use it, simply activate the Chosen plugin (Configuration générale du site > Gestion des plugins), then configure it (Les squelettes > Chosen) by enabling Chosen on the public site and specifying the form elements to enhance, for example select[multiple] for multiple-select lists (...)

On other sites (12435)

  • vf_dnn_processing: remove parameter 'fmt'

    27 December 2019, by Guo, Yejun
    vf_dnn_processing: remove parameter 'fmt'
    

    Do not request the AVFrame's format in vf_dnn_processing with 'fmt';
    instead, add a separate format filter in front of it to set the format.

    Command examples (a Python sketch of the same invocation follows this entry):
    ./ffmpeg -i input.jpg -vf format=bgr24,dnn_processing=model=halve_first_channel.model:input=dnn_in:output=dnn_out:dnn_backend=native -y out.native.png
    ./ffmpeg -i input.jpg -vf format=rgb24,dnn_processing=model=halve_first_channel.model:input=dnn_in:output=dnn_out:dnn_backend=native -y out.native.png

    Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
    Signed-off-by: Pedro Arthur <bygrandao@gmail.com>

    • [DH] doc/filters.texi
    • [DH] libavfilter/vf_dnn_processing.c
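    A minimal sketch (not part of the commit): the example invocation driven from Python via subprocess, with the pixel format now supplied by the format filter instead of the removed 'fmt' option. It assumes an ffmpeg build with the dnn_processing filter is on PATH; the file and model names come from the commands above, and run_dnn_processing is a hypothetical helper.

    import subprocess

    def run_dnn_processing(src, dst, pix_fmt="rgb24"):
        # format=<pix_fmt> converts the frame first; dnn_processing itself
        # no longer takes a 'fmt' option.
        vf = (f"format={pix_fmt},"
              "dnn_processing=model=halve_first_channel.model:"
              "input=dnn_in:output=dnn_out:dnn_backend=native")
        subprocess.run(["ffmpeg", "-i", src, "-vf", vf, "-y", dst], check=True)

    run_dnn_processing("input.jpg", "out.native.png", pix_fmt="bgr24")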
  • vf_dnn_processing: add support for more formats gray8 and grayf32

    27 December 2019, by Guo, Yejun
    vf_dnn_processing: add support for more formats gray8 and grayf32
    

    The following is a Python script that halves the pixel values of a gray
    image. It demonstrates how to set up and execute a DNN model with Python and
    TensorFlow, and it also generates the .pb file that will be used by ffmpeg.

    import tensorflow as tf
    import numpy as np
    from skimage import color
    from skimage import io

    # read the input image and convert it to a single-channel gray image
    in_img = io.imread('input.jpg')
    in_img = color.rgb2gray(in_img)
    io.imsave('ori_gray.jpg', np.squeeze(in_img))

    # shape the data as NHWC: 1 x H x W x 1
    in_data = np.expand_dims(in_img, axis=0)
    in_data = np.expand_dims(in_data, axis=3)

    # a 1x1 convolution with weight 0.5 halves every pixel value
    filter_data = np.array([0.5]).reshape(1,1,1,1).astype(np.float32)
    filter = tf.Variable(filter_data)
    x = tf.placeholder(tf.float32, shape=[1, None, None, 1], name='dnn_in')
    y = tf.nn.conv2d(x, filter, strides=[1, 1, 1, 1], padding='VALID', name='dnn_out')

    # freeze the graph and write the .pb file used by ffmpeg's TensorFlow backend
    sess = tf.Session()
    sess.run(tf.global_variables_initializer())
    graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
    tf.train.write_graph(graph_def, '.', 'halve_gray_float.pb', as_text=False)
    print("halve_gray_float.pb generated, please use "
          "path_to_ffmpeg/tools/python/convert.py to generate halve_gray_float.model\n")

    # run the model once in Python and save the halved reference image
    output = sess.run(y, feed_dict={x: in_data})
    output = output * 255.0
    output = output.astype(np.uint8)
    io.imsave("out.jpg", np.squeeze(output))

    To do the same thing with ffmpeg:
    - generate halve_gray_float.pb with the above script
    - generate halve_gray_float.model with tools/python/convert.py
    - try the following commands (a small verification sketch follows this entry)
    ./ffmpeg -i input.jpg -vf format=grayf32,dnn_processing=model=halve_gray_float.model:input=dnn_in:output=dnn_out:dnn_backend=native out.native.png
    ./ffmpeg -i input.jpg -vf format=grayf32,dnn_processing=model=halve_gray_float.pb:input=dnn_in:output=dnn_out:dnn_backend=tensorflow out.tf.png

    Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
    Signed-off-by: Pedro Arthur <bygrandao@gmail.com>

    • [DH] doc/filters.texi
    • [DH] libavfilter/vf_dnn_processing.c
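    A rough verification sketch (not part of the commit): it assumes the TensorFlow script above has written out.jpg, the first ffmpeg command has written out.native.png in the same directory, and both images have the same dimensions; load_gray is a hypothetical helper. It compares the two results by mean absolute pixel difference.

    import numpy as np
    from skimage import color, io

    def load_gray(path):
        img = io.imread(path)
        if img.ndim == 3:
            # collapse RGB(A) to one gray channel, back to the 0-255 range
            img = color.rgb2gray(img[..., :3]) * 255.0
        return img.astype(np.float32)

    a = load_gray("out.jpg")         # written by the TensorFlow script
    b = load_gray("out.native.png")  # written by the ffmpeg command
    print("mean absolute difference:", np.abs(a - b).mean())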
  • ffmpeg stuck in generating thumbnail of video [closed]

    10 December 2019, by user6121419

    I’m trying to create a thumbnail of a .mov video with ffmpeg, but it gets stuck.
    I tried the same command on 2 different machines and with different types of arguments, but nothing changed the result. The video itself can be viewed without a problem, so it shouldn’t be corrupted.
    The video was taken on an iPhone at 4K, 60 fps.

    What I’ve tried:

    ffmpeg -i IMG_1001.MOV -ss 00:00:02 -vframes 1 thumbnail.jpg

    It gets stuck at the third-to-last line, frame=    0 fps=0.0 q=0.0 Lsize=N/A time=00:00:00.00 bitrate=N/A speed=   0x, and nothing further happens, so I stopped the process with ctrl+c.

    Output:

    ffmpeg version 3.4.6-0ubuntu0.18.04.1 Copyright (c) 2000-2019 the FFmpeg developers
     built with gcc 7 (Ubuntu 7.3.0-16ubuntu3)
     configuration: --prefix=/usr --extra-version=0ubuntu0.18.04.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --disable-stripping --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-librsvg --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared
     libavutil      55. 78.100 / 55. 78.100
     libavcodec     57.107.100 / 57.107.100
     libavformat    57. 83.100 / 57. 83.100
     libavdevice    57. 10.100 / 57. 10.100
     libavfilter     6.107.100 /  6.107.100
     libavresample   3.  7.  0 /  3.  7.  0
     libswscale      4.  8.100 /  4.  8.100
     libswresample   2.  9.100 /  2.  9.100
     libpostproc    54.  7.100 / 54.  7.100
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'IMG_1001.MOV':
     Metadata:
       major_brand     : qt
       minor_version   : 0
       compatible_brands: qt
       creation_time   : 2019-11-xx
       com.apple.quicktime.make: Apple
       com.apple.quicktime.model: iPhone 8
       com.apple.quicktime.software: 13.2.2
       com.apple.quicktime.creationdate: 2019-11-xx
     Duration: 00:00:05.18, start: 0.000000, bitrate: 54961 kb/s
       Stream #0:0(und): Video: hevc (Main) (hvc1 / 0x31637668), yuv420p(tv, bt709), 3840x2160, 54851 kb/s, 60 fps, 60 tbr, 600 tbn, 600 tbc (default)
       Metadata:
         creation_time   : 2019-11-xx
         handler_name    : Core Media Data Handler
         encoder         : HEVC
       Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 94 kb/s (default)
       Metadata:
         creation_time   : 2019-11-xx
         handler_name    : Core Media Data Handler
       Stream #0:2(und): Data: none (mebx / 0x7862656D), 0 kb/s (default)
       Metadata:
         creation_time   : 2019-11-xx
         handler_name    : Core Media Data Handler
       Stream #0:3(und): Data: none (mebx / 0x7862656D), 0 kb/s (default)
       Metadata:
         creation_time   : 2019-11-xx
         handler_name    : Core Media Data Handler
    Stream mapping:
     Stream #0:0 -> #0:0 (hevc (native) -> mjpeg (native))
    Press [q] to stop, [?] for help
    [swscaler @ 0x55d83a288940] deprecated pixel format used, make sure you did set range correctly
    Output #0, image2, to 'thumbnail.jpg':
     Metadata:
       major_brand     : qt
       minor_version   : 0
       compatible_brands: qt
       com.apple.quicktime.creationdate: 2019-11-xx
       com.apple.quicktime.make: Apple
       com.apple.quicktime.model: iPhone 8
       com.apple.quicktime.software: 13.2.2
       encoder         : Lavf57.83.100
       Stream #0:0(und): Video: mjpeg, yuvj420p(pc), 3840x2160, q=2-31, 200 kb/s, 60 fps, 60 tbn, 60 tbc (default)
       Metadata:
         creation_time   : 2019-11-xx
         handler_name    : Core Media Data Handler
         encoder         : Lavc57.107.100 mjpeg
       Side data:
         cpb: bitrate max/min/avg: 0/0/200000 buffer size: 0 vbv_delay: -1
    frame=    0 fps=0.0 q=0.0 Lsize=N/A time=00:00:00.00 bitrate=N/A speed=   0x
    video:0kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
    Output file is empty, nothing was encoded (check -ss / -t / -frames parameters if used)

    Any idea what it could be? Am I missing something, or could it be that the encoding can’t be read properly by ffmpeg? Besides that, I haven’t found any alternative for generating thumbnails from videos on Linux.
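    A minimal sketch of one variation that is often worth trying (not an answer from the thread): moving -ss in front of -i so ffmpeg seeks in the input instead of decoding everything up to the timestamp. The file names mirror the question, make_thumbnail is a hypothetical helper, and nothing here is guaranteed to resolve the hang described above.

    import subprocess

    def make_thumbnail(video, out_jpg, timestamp="00:00:02"):
        cmd = [
            "ffmpeg",
            "-ss", timestamp,    # input seeking: jump before decoding starts
            "-i", video,
            "-frames:v", "1",    # write a single video frame
            "-y", out_jpg,
        ]
        subprocess.run(cmd, check=True)

    make_thumbnail("IMG_1001.MOV", "thumbnail.jpg")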