
Media (9)

Keyword: - Tags - /soundtrack

Other articles (37)

  • MediaSPIP initialisation (preconfiguration)

    20 February 2010

    When MediaSPIP is installed, it is preconfigured for the most common uses.
    This preconfiguration is carried out by a plugin that is enabled by default and cannot be disabled, called MediaSPIP Init.
    This plugin properly preconfigures every MediaSPIP instance. It must therefore be placed in the plugins-dist/ folder of the site or farm so that it is installed by default before the site can be used.
    First of all, it enables or disables the SPIP options that do not (...)

  • Submit bugs and patches

    13 April 2011

    Unfortunately, no software is ever perfect.
    If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information: the browser you are using, including the exact version; as precise a description of the problem as possible; if possible, the steps that lead to the problem; a link to the site/page in question.
    If you think you have solved the bug, open a ticket and attach a corrective patch to it.
    You may also (...)

  • Keeping control of your media in your hands

    13 April 2011

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and to the companies that profit from media sharing.
    While using MediaSPIP, you are invited to avoid words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

On other sites (7657)

  • Rails 5 with Carrierwave and S3 - creating multiple video formats for DASH streaming works but mpd file breaks

    22 November 2019, by Milind

    What I am doing:
    I have a Rails 5 app for video streaming (MPEG-DASH) that uses FFmpeg to encode a single uploaded video into several renditions at different bitrates/sizes, plus the MPD manifest that can be played easily in an HTML video player. I have already verified this by running the FFmpeg/MP4Box commands manually on the console, which generates all the files. However, I want to automate this process, and this is where CarrierWave comes into the picture.
    Here I use CarrierWave to generate the different versions (size/bitrate) of the video (mp4/webm) and upload them to S3. While running the versions, all of them are created successfully in the tmp folder, but for the last version (:mpd), which is supposed to produce the .mpd file, CarrierWave creates an mp4 video file and just replaces the extension instead of actually creating the mpd file.

    So in AWS S3 (screenshot referenced below) I can see all my versions and the mpd file, but that mpd file, which should be an XML manifest, is actually an mp4 video file (the uploaded version file itself).
    I have also tried to create a new file during processing, but it never works.
    Has someone encountered this problem?

    Any help will be greatly appreciated.

    My code snippets are below: model, uploader, console output during upload, and the S3 screenshot (a workaround sketch follows the uploader code).

      ##### models/video.rb ##########

      mount_uploader :video, VideoUploader

      ####### uploaders/video_uploader.rb #########

      class VideoUploader < CarrierWave::Uploader::Base
        include CarrierWave::MiniMagick
        include CarrierWave::Video
        include CarrierWave::Video::Thumbnailer
        include ::CarrierWave::Backgrounder::Delay

        ####### For streaming: first extract the audio, then convert the input
        ####### video into multiple bitrates/scales. #######

        ### Audio-only rendition, later fed to MP4Box together with the video renditions.
        version :video_audio do
          process :get_audio

          def get_audio
            `ffmpeg -y -i "#{file.path}" -c:a aac -ac 2 -ab 128k -vn video_audio.mp4`
          end

          def full_filename(for_file)
            "video_audio.mp4"
          end

          def filename
            "video_audio.mp4"
          end
        end

        ####### Similar to the above, I have various versions like ... #########

        version :video_1080 do ... end
        version :video_720 do ... end
        version :video_480 do ... end

        # ...and so on. All of these versions are created successfully and uploaded
        # to S3. The next version, however, also produces a video file, whereas I
        # need a plain .mpd manifest ONLY.

        ### This is where, even though everything runs, S3 ends up with a video
        ### file for the :mpd version instead of the actual mpd file.
        version :mpd do
          process :get_manifest

          def get_manifest
            ### The command below does generate video.mpd successfully, but what gets
            ### uploaded is the added/uploaded video file renamed to .mpd, not the new manifest.
            ### Tried with `ffmpeg -f webm_dash_manifest -i ...` too, but S3 still shows an mp4 file.
            `MP4Box -dash 1000 -rap -frag-rap -profile onDemand -out video.mpd video_1080.mp4 video_720.mp4 video_480.mp4 video_360.mp4 video_240.mp4 video_audio.mp4`
          end
        end
      end
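
    Not part of the original post: below is a minimal, untested sketch of one way the :mpd version is sometimes reworked so that CarrierWave stores the generated manifest rather than a renamed copy of the source video. It assumes that all versions of one upload share the same CarrierWave cache directory and that the renditions end up there under the names used in the full_filename overrides; the paths, the system call and the FileUtils.mv step are illustrative guesses, not a confirmed fix.

      require "fileutils"   # Ruby stdlib; Rails normally loads it already

      ### Hypothetical rework of the :mpd version (untested sketch)
      version :mpd do
        process :get_manifest

        def get_manifest
          cache_dir  = File.dirname(current_path)   # assumed location of the sibling rendition files
          renditions = %w[video_1080.mp4 video_720.mp4 video_480.mp4
                          video_360.mp4 video_240.mp4 video_audio.mp4]
          sources    = renditions.map { |name| File.join(cache_dir, name) }
          manifest   = File.join(cache_dir, "video.mpd")

          system("MP4Box", "-dash", "1000", "-rap", "-frag-rap",
                 "-profile", "onDemand", "-out", manifest, *sources)

          # Overwrite the cached version file (currently a copy of the uploaded
          # video) with the manifest, so the XML is what gets stored on S3.
          FileUtils.mv(manifest, current_path)
        end

        # Name the stored object video.mpd so the S3 key carries the right extension.
        def full_filename(for_file)
          "video.mpd"
        end
      end

    Depending on the MP4Box profile, the DASH-ed renditions referenced by the manifest may themselves be new files; the sketch only covers getting the manifest itself stored.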

             ######### Sidekiq console output - the mpd is generated successfully ################
                 DASH-ing files - single segment
                 Subsegment duration 1.000 - Fragment duration: 1.000 secs
                 Splitting segments and fragments at GOP boundaries
                 DASHing file video_1080.mp4
                 DASHing file video_720.mp4                                  
                 DASHing file video_480.mp4                                  
                 DASHing file video_360.mp4                                  
                 DASHing file video_240.mp4                                  
                 DASHing file video_audio.mp4                                
                 [DASH] Generating MPD at time 2019-11-22T00:01:59.872Z
                 mpd_1mb.mp4
                 mpd_video.mpd

    This is what the uploaded files look like on S3. Notice video.mpd: it is an mp4 video file just like the others, when it should have been a plain mpd file of no more than 2 KB.

    Is there something I am missing?
    Can CarrierWave do this, or is it not made for this?
    Do I have to write a callback and upload the files to S3 programmatically, if CarrierWave cannot help here?

    Any suggestion or useful advice that helps me move ahead is welcome.
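
    On the callback question above, one alternative that is sometimes used when an uploader version will not cooperate is to bypass CarrierWave for the manifest entirely and push it to S3 from the model. A rough sketch follows, assuming the aws-sdk-s3 gem and a hypothetical generate_manifest! helper that runs MP4Box and returns the local path of the freshly built video.mpd; the bucket and key names are illustrative, and none of this comes from the original post.

      require "aws-sdk-s3"   # provided by the aws-sdk-s3 gem

      class Video < ApplicationRecord
        mount_uploader :video, VideoUploader

        # Push the manifest once the record (and its CarrierWave versions) is saved.
        after_commit :upload_manifest, on: :create

        private

        def upload_manifest
          manifest_path = generate_manifest!   # hypothetical: shell out to MP4Box, return the .mpd path

          # Region/credentials come from the standard AWS SDK configuration;
          # the bucket lookup and key below are purely illustrative.
          bucket = Aws::S3::Resource.new.bucket(ENV.fetch("S3_BUCKET"))
          key    = "uploads/video/#{id}/video.mpd"   # in a real app, match the uploader's store_dir

          bucket.object(key).upload_file(manifest_path, content_type: "application/dash+xml")
        end
      end

    Whether this runs inline or inside the Sidekiq job, the point is that the manifest is written under its own key with an explicit application/dash+xml content type, independently of whatever the :mpd version stores.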

    (Screenshot: AWS S3 file listing)

  • avcodec/v410dec: add support for frame and slice threading

    25 November 2019, by Limin Wang
    avcodec/v410dec: add support for frame and slice threading
    

    1. Test server configuration:
    [root@localhost ]# cat /proc/cpuinfo |grep "model name"
    model name  : Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz
    model name  : Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz
    ...

    [root@localhost ]# free -h
              total   used   free  shared  buff/cache  available
    Mem:       102G   1.1G   100G     16M        657M       100G
    Swap:      4.0G     0B   4.0G

    2. Test results:
    Encode the v410 input data for testing:
    ./ffmpeg -y -i 4k_422.ts -c:v v410 -vframes 10 test.avi

    master:
    ./ffmpeg -y -stream_loop 1000 -i ./test.avi -benchmark -f null -
    frame=10010 fps= 37 q=-0.0 Lsize=N/A time=00:38:26.30 bitrate=N/A speed= 8.6x
    video:5240kB audio:432432kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
    bench: utime=166.016s stime=102.192s rtime=268.120s
    bench: maxrss=273400kB

    patch applied:
    ./ffmpeg -y -threads 2 -thread_type slice -stream_loop 1000 -i ./test.avi -benchmark -f null -
    frame=10010 fps= 53 q=-0.0 Lsize=N/A time=00:38:26.30 bitrate=N/A speed=12.3x
    video:5240kB audio:432432kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
    bench: utime=165.135s stime=100.456s rtime=187.994s
    bench: maxrss=275476kB

    ./ffmpeg -y -threads 2 -thread_type frame -stream_loop 1000 -i ./test.avi -benchmark -f null -
    frame=10010 fps= 61 q=-0.0 Lsize=N/A time=00:38:26.30 bitrate=N/A speed=14.1x
    video:5240kB audio:432432kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
    bench: utime=171.386s stime=122.102s rtime=163.637s
    bench: maxrss=340308kB

    Signed-off-by: Limin Wang <lance.lmwang@gmail.com>
    Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>

    • [DH] libavcodec/v410dec.c
  • avfilter/vf_dnn_processing: add a generic filter for image processing with dnn networks

    31 October 2019, by Guo, Yejun
    avfilter/vf_dnn_processing: add a generic filter for image processing with dnn networks
    

    This filter accepts all the dnn networks which do image processing.
    Currently, frames in rgb24 and bgr24 formats are supported. Other
    formats such as gray and YUV will be supported next. The dnn network
    can accept data in float32 or uint8 format. And the dnn network can
    change frame size.

    The following is a python script to halve the value of the first
    channel of the pixel. It demos how to setup and execute dnn model
    with python+tensorflow. It also generates .pb file which will be
    used by ffmpeg.

    import tensorflow as tf
    import numpy as np
    import imageio
    in_img = imageio.imread('in.bmp')
    in_img = in_img.astype(np.float32)/255.0
    in_data = in_img[np.newaxis, :]
    filter_data = np.array([0.5, 0, 0, 0, 1., 0, 0, 0, 1.]).reshape(1,1,3,3).astype(np.float32)
    filter = tf.Variable(filter_data)
    x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
    y = tf.nn.conv2d(x, filter, strides=[1, 1, 1, 1], padding='VALID', name='dnn_out')
    sess=tf.Session()
    sess.run(tf.global_variables_initializer())
    output = sess.run(y, feed_dict={x: in_data})
    graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
    tf.train.write_graph(graph_def, '.', 'halve_first_channel.pb', as_text=False)
    output = output * 255.0
    output = output.astype(np.uint8)
    imageio.imsave("out.bmp", np.squeeze(output))

    To do the same thing with ffmpeg:
    - generate halve_first_channel.pb with the above script
    - generate halve_first_channel.model with tools/python/convert.py
    - try with the following commands
    ./ffmpeg -i input.jpg -vf dnn_processing=model=halve_first_channel.model:input=dnn_in:output=dnn_out:fmt=rgb24:dnn_backend=native -y out.native.png
    ./ffmpeg -i input.jpg -vf dnn_processing=model=halve_first_channel.pb:input=dnn_in:output=dnn_out:fmt=rgb24:dnn_backend=tensorflow -y out.tf.png

    Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
    Signed-off-by: Pedro Arthur <bygrandao@gmail.com>

    • [DH] configure
    • [DH] doc/filters.texi
    • [DH] libavfilter/Makefile
    • [DH] libavfilter/allfilters.c
    • [DH] libavfilter/vf_dnn_processing.c