Advanced search

Media (0)

Keyword: - Tags - /xmlrpc

No media matching your criteria is available on the site.

Other articles (70)

  • User profiles

    12 April 2011, by

    Each user has a profile page that lets them edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised; it is visible only when the visitor is logged in to the site.
    Users can edit their profile from their author page; a "Modifier votre profil" (edit your profile) link in the navigation is (...)

  • Configuring language support

    15 November 2010, by

    Accessing the configuration and adding supported languages
    To configure support for new languages, go to the "Administrer" (administration) area of the site.
    From there, in the navigation menu, you can reach a "Gestion des langues" (language management) section where support for new languages can be enabled.
    Each newly added language can still be disabled as long as no object has been created in that language. Once one has been, it becomes greyed out in the configuration and (...)

  • The farm's recurring cron tasks

    1 December 2010, by

    Managing the farm involves running several repetitive tasks, known as cron tasks, at regular intervals.
    The super cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the cron of every instance of the shared-hosting farm on a regular basis. Combined with a system cron on the central site of the farm, this is an easy way to generate regular visits to the various sites and to keep the tasks of rarely visited sites from being too (...)
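
    As a very rough sketch of the "system cron on the central site" mentioned above (the URL and the exact trigger are assumptions, not part of the article), such an entry could simply request a page every minute so that SPIP's task scheduler gets a chance to run:

    # hypothetical crontab entry on the central server of the farm
    * * * * * wget -q -O /dev/null http://farm-central-site.example.org/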

On other sites (6320)

  • vf_dnn_processing : add support for more formats gray8 and grayf32

    27 December 2019, by Guo, Yejun
    vf_dnn_processing : add support for more formats gray8 and grayf32
    

    The following is a python script to halve the value of the gray
    image. It demonstrates how to set up and execute a DNN model with
    python+tensorflow. It also generates the .pb file which will be used by ffmpeg.

    import tensorflow as tf
    import numpy as np
    from skimage import color
    from skimage import io
    # read the input image and convert it to a single-channel gray image
    in_img = io.imread('input.jpg')
    in_img = color.rgb2gray(in_img)
    io.imsave('ori_gray.jpg', np.squeeze(in_img))
    # reshape to NHWC: [1, height, width, 1]
    in_data = np.expand_dims(in_img, axis=0)
    in_data = np.expand_dims(in_data, axis=3)
    # a 1x1 convolution with weight 0.5 halves every pixel value
    filter_data = np.array([0.5]).reshape(1,1,1,1).astype(np.float32)
    filter = tf.Variable(filter_data)
    x = tf.placeholder(tf.float32, shape=[1, None, None, 1], name='dnn_in')
    y = tf.nn.conv2d(x, filter, strides=[1, 1, 1, 1], padding='VALID', name='dnn_out')
    sess = tf.Session()
    sess.run(tf.global_variables_initializer())
    # freeze the variables and write the graph as halve_gray_float.pb
    graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
    tf.train.write_graph(graph_def, '.', 'halve_gray_float.pb', as_text=False)
    print("halve_gray_float.pb generated, please use \
    path_to_ffmpeg/tools/python/convert.py to generate halve_gray_float.model\n")
    output = sess.run(y, feed_dict={x: in_data})
    output = output * 255.0
    output = output.astype(np.uint8)
    io.imsave("out.jpg", np.squeeze(output))

    To do the same thing with ffmpeg:
    - generate halve_gray_float.pb with the above script
    - generate halve_gray_float.model with tools/python/convert.py
    - try the following commands
    ./ffmpeg -i input.jpg -vf format=grayf32,dnn_processing=model=halve_gray_float.model:input=dnn_in:output=dnn_out:dnn_backend=native out.native.png
    ./ffmpeg -i input.jpg -vf format=grayf32,dnn_processing=model=halve_gray_float.pb:input=dnn_in:output=dnn_out:dnn_backend=tensorflow out.tf.png

    Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
    Signed-off-by: Pedro Arthur <bygrandao@gmail.com>

    • [DH] doc/filters.texi
    • [DH] libavfilter/vf_dnn_processing.c
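
    As a hedged, end-to-end recap of the three steps above (the convert.py invocation and the script filename are assumptions drawn from the hint printed by the script, not spelled out in the commit):

    # 1. run the python script above (saved here as halve_gray_float.py - a hypothetical name) to produce halve_gray_float.pb
    python halve_gray_float.py
    # 2. convert the frozen graph to the native-backend format; exact options of convert.py may differ
    python path_to_ffmpeg/tools/python/convert.py halve_gray_float.pb
    # 3. run either backend, using the commands quoted in the commit message, e.g.
    ./ffmpeg -i input.jpg -vf format=grayf32,dnn_processing=model=halve_gray_float.model:input=dnn_in:output=dnn_out:dnn_backend=native out.native.png
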
  • ffmpeg stuck in generating thumbnail of video [closed]

    10 December 2019, by user6121419

    I'm trying to create a thumbnail of a .mov video with ffmpeg, but it gets stuck.
    I tried the same command on 2 different machines and with different kinds of arguments, but nothing changed the result. The video itself plays without a problem, so it shouldn't be corrupted.
    The video was taken on an iPhone at 4K 60 fps.

    What I've tried:

    ffmpeg -i IMG_1001.MOV -ss 00:00:02 -vframes 1 thumbnail.jpg

    It gets stuck at the third-to-last line, frame=    0 fps=0.0 q=0.0 Lsize=N/A time=00:00:00.00 bitrate=N/A speed=   0x, and at that point I stopped the process with Ctrl+C.

    Output:

    ffmpeg version 3.4.6-0ubuntu0.18.04.1 Copyright (c) 2000-2019 the FFmpeg developers
     built with gcc 7 (Ubuntu 7.3.0-16ubuntu3)
     configuration: --prefix=/usr --extra-version=0ubuntu0.18.04.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --disable-stripping --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-librsvg --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared
     libavutil      55. 78.100 / 55. 78.100
     libavcodec     57.107.100 / 57.107.100
     libavformat    57. 83.100 / 57. 83.100
     libavdevice    57. 10.100 / 57. 10.100
     libavfilter     6.107.100 /  6.107.100
     libavresample   3.  7.  0 /  3.  7.  0
     libswscale      4.  8.100 /  4.  8.100
     libswresample   2.  9.100 /  2.  9.100
     libpostproc    54.  7.100 / 54.  7.100
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'IMG_1001.MOV':
     Metadata:
       major_brand     : qt
       minor_version   : 0
       compatible_brands: qt
       creation_time   : 2019-11-xx
       com.apple.quicktime.make: Apple
       com.apple.quicktime.model: iPhone 8
       com.apple.quicktime.software: 13.2.2
       com.apple.quicktime.creationdate: 2019-11-xx
     Duration: 00:00:05.18, start: 0.000000, bitrate: 54961 kb/s
       Stream #0:0(und): Video: hevc (Main) (hvc1 / 0x31637668), yuv420p(tv, bt709), 3840x2160, 54851 kb/s, 60 fps, 60 tbr, 600 tbn, 600 tbc (default)
       Metadata:
         creation_time   : 2019-11-xx
         handler_name    : Core Media Data Handler
         encoder         : HEVC
       Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 94 kb/s (default)
       Metadata:
         creation_time   : 2019-11-xx
         handler_name    : Core Media Data Handler
       Stream #0:2(und): Data: none (mebx / 0x7862656D), 0 kb/s (default)
       Metadata:
         creation_time   : 2019-11-xx
         handler_name    : Core Media Data Handler
       Stream #0:3(und): Data: none (mebx / 0x7862656D), 0 kb/s (default)
       Metadata:
         creation_time   : 2019-11-xx
         handler_name    : Core Media Data Handler
    Stream mapping:
     Stream #0:0 -> #0:0 (hevc (native) -> mjpeg (native))
    Press [q] to stop, [?] for help
    [swscaler @ 0x55d83a288940] deprecated pixel format used, make sure you did set range correctly
    Output #0, image2, to 'thumbnail.jpg':
     Metadata:
       major_brand     : qt
       minor_version   : 0
       compatible_brands: qt
       com.apple.quicktime.creationdate: 2019-11-xx
       com.apple.quicktime.make: Apple
       com.apple.quicktime.model: iPhone 8
       com.apple.quicktime.software: 13.2.2
       encoder         : Lavf57.83.100
       Stream #0:0(und): Video: mjpeg, yuvj420p(pc), 3840x2160, q=2-31, 200 kb/s, 60 fps, 60 tbn, 60 tbc (default)
       Metadata:
         creation_time   : 2019-11-xx
         handler_name    : Core Media Data Handler
         encoder         : Lavc57.107.100 mjpeg
       Side data:
         cpb: bitrate max/min/avg: 0/0/200000 buffer size: 0 vbv_delay: -1
    frame=    0 fps=0.0 q=0.0 Lsize=N/A time=00:00:00.00 bitrate=N/A speed=   0x
    video:0kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
    Output file is empty, nothing was encoded (check -ss / -t / -frames parameters if used)

    Any idea what it could be? Am I missing something, or could it be that the encoding can't be read properly by ffmpeg? Apart from that, I haven't found any alternative for generating thumbnails from videos on Linux.
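
    One variant that is sometimes worth testing in this situation (not part of the original question; the file name and timestamp are simply the ones used above) is to put -ss before the input, so ffmpeg seeks in the container instead of decoding up to the timestamp, and to request a single frame:

    ffmpeg -ss 00:00:02 -i IMG_1001.MOV -frames:v 1 -q:v 2 thumbnail.jpg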

  • Rails 5 with Carrierwave and S3 - creating multiple video formats for DASH streaming works but mpd file breaks

    22 November 2019, by Milind

    What I am doing -
    I have a Rails 5 app for video streaming (MPEG-DASH) that uses FFmpeg to produce the encoded streams: a single uploaded video is converted into several videos at different bit rates/sizes, plus the MPD file that can easily be played in an HTML video player. I have already verified this by manually running the ffmpeg scripts on the console, which generates all the files (a rough sketch of such commands is shown below). However, I want to automate this process, which is where CarrierWave comes in.
    Here, I use CarrierWave to generate the different versions (size/bitrate) of the videos (mp4/webm) and upload them to S3. While running the versions, all of them are created successfully in the tmp folder; only for the last version (mpd), which is supposed to create the .mpd file, CarrierWave creates an mp4 video file and just swaps the extension instead of actually creating the mpd file.
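
    As a rough sketch only of the manual console step mentioned above (the question does not show the exact commands; bitrates, sizes and the input filename are assumptions), the workflow can look roughly like this, reusing the MP4Box invocation quoted in the uploader below:

    # encode video-only renditions (values are placeholders, not from the question)
    ffmpeg -y -i input.mov -an -c:v libx264 -b:v 4000k -vf scale=-2:1080 video_1080.mp4
    ffmpeg -y -i input.mov -an -c:v libx264 -b:v 1500k -vf scale=-2:720 video_720.mp4
    # extract the audio track, as the uploader does
    ffmpeg -y -i input.mov -vn -c:a aac -ac 2 -b:a 128k video_audio.mp4
    # build the DASH segments plus the video.mpd manifest (same command as in the uploader)
    MP4Box -dash 1000 -rap -frag-rap -profile onDemand -out video.mpd video_1080.mp4 video_720.mp4 video_audio.mp4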

    So in AWS S3 (screenshot referenced below), I can see all my versions and the mpd file, but that mpd file, which should be an XML file, is actually an mp4 video file - the uploaded version file itself.
    I have also tried to create a new file during the process, but it never works.
    Has someone encountered this problem?

    Any help will be greatly appreciated.

    My code snippets are below - model, uploader, output of the script on the console during upload, S3 screenshot.

      ##### models/video.rb ##########

       mount_uploader :video, VideoUploader  

      ####### uploaders/video_uploader.rb #########

       class VideoUploader < CarrierWave::Uploader::Base


       include CarrierWave::MiniMagick
       include CarrierWave::Video
       include CarrierWave::Video::Thumbnailer
       include ::CarrierWave::Backgrounder::Delay

       ####### for streaming ..first get the audio and then convert the input video into multiple bitrates/scale #######

       ###first get audio and then get all different versions of same video
       version :video_audio do
         process :get_audio

         def get_audio
           `ffmpeg -y -i "#{file.path}" -c:a aac -ac 2 -ab 128k -vn video_audio.mp4`
         end

           def full_filename(for_file)
             "video_audio.mp4"
           end

           def filename
             "video_audio.mp4"
           end        
       end

       ####### similar to the above I have various versions like... #########

       version :video_1080 do...end
       version :video_720 do...end
       version :video_480 do...end
       # ...and so on. All of these versions are created and uploaded to S3 successfully; however, the next version below also creates a video file, whereas I need a plain mpd file ONLY.

            ### this is where, even though everything runs, S3 ends up with a video file for the :mpd version and not an actual mpd file
            version :mpd do
              process :get_manifest

              def get_manifest
                ### the command below successfully produces video.mpd, but what gets uploaded is the added/uploaded video file renamed to .mpd, not the new mpd file
                ### tried with ffmpeg -f webm_dash_manifest -i too, but S3 still shows an mp4 file
                `MP4Box -dash 1000 -rap -frag-rap -profile onDemand -out video.mpd video_1080.mp4 video_720.mp4 video_480.mp4 video_360.mp4 video_240.mp4 video_audio.mp4 `
              end
            end

            ######### sidekiq console output - successful mpd is generated ################
                 DASH-ing files - single segment
                 Subsegment duration 1.000 - Fragment duration: 1.000 secs
                 Splitting segments and fragments at GOP boundaries
                 DASHing file video_1080.mp4
                 DASHing file video_720.mp4                                  
                 DASHing file video_480.mp4                                  
                 DASHing file video_360.mp4                                  
                 DASHing file video_240.mp4                                  
                 DASHing file video_audio.mp4                                
                 [DASH] Generating MPD at time 2019-11-22T00:01:59.872Z      
                 mpd_1mb.mp4
                 mpd_video.mpd

    This is what the uploaded files look like on S3. Notice video.mpd: it is an mp4 video file just like the others, when it should have been a plain mpd file of no more than a couple of KB.
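
    A quick local check (a hedged aside, not something from the post) is to look at the first bytes of the uploaded file: a real MPD manifest is XML and starts with an <?xml declaration, whereas an MP4 file exposes an ftyp box near the beginning:

    head -c 100 video.mpd   # XML text for a manifest vs. binary mp4 data
    file video.mpd          # typically reports an XML document vs. ISO Media/MP4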

    Is there something that I am missing?
    Can CarrierWave do this, or is it not made for this?
    Do I have to write a callback and then upload the files to S3 programmatically, if CarrierWave cannot help here?

    Kindly provide any suggestions or useful advice so that I can move ahead.

    [Screenshot: AWS S3 file listing]