Advanced search

Media (2)

Keyword: - Tags -/documentation

Other articles (82)

  • MediaSPIP v0.2

    21 June 2013

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013, and it is announced here.
    The zip file provided here contains only the MediaSPIP sources, in the standalone version.
    As with the previous version, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make further modifications (...)

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • Making files available

    14 April 2011

    By default, when initialised, MediaSPIP does not allow visitors to download files, whether the originals or the result of their transformation or encoding. It only allows them to be viewed.
    However, it is possible and easy to give visitors access to these documents, in several different forms.
    All of this happens on the template configuration page. You need to go to the channel's administration area and choose, in the navigation (...)

On other sites (11393)

  • How to create a circular video effect (transparent area on top of the video) without applying an image mask to the video

    26 September 2024, by Arina Lubimova

    Basically, I googled a lot, and the solutions either suggest applying some PNG mask or don't provide what's needed.

    What I've found:

    ffmpeg -i main.mkv -i facecloseup.mkv \
     -filter_complex "[1]trim=end_frame=1,
      geq='st(3,pow(X-(W/2),2)+pow(Y-(H/2),2));if(lte(ld(3),pow(min(W/2,H/2),2)),255,0)':128:128,
      loop=-1:1,setpts=N/FRAME_RATE/TB[mask];
      [1][mask]alphamerge[cutout];
      [0][cutout]overlay=x=W-w:y=0[v];
      [0][1]amix=2[a]" \
     -map "[v]" -map "[a]" out.mp4

    command = "-i " + this.video1Path.getPath() + " -i " + this.video2Path.getPath()
            + " -filter_complex [1]trim=end_frame=1,"
            + "geq=lum_expr='st(3,pow(X-(W/2),2)+pow(Y-(H/2),2));if(lte(ld(3),"
            + (this.mZoomLayout.getZoomedWidth()/2) + "*" + (this.mZoomLayout.getZoomedWidth()/2)
            + "),255,0)':128:128,format=gray,loop=-1:1,setpts=N/FRAME_RATE/TB[mask];"
            + "[1][mask]alphamerge,format=rgba,lutrgb=a=if(gte(val\\,16)\\,val)[cutout];"
            + "[0][cutout]overlay=" + this.mZoomLayout.getCircleX() + ":" + this.mZoomLayout.getCircleY()
            + ":enable='between(t,0," + this.videoTwoDuration
            + ") -c:v libx264 -crf 24 -preset ultrafast " + videoPath.getPath();

    So I tried to extract what I needed from those examples, but I don't understand exactly how to combine them. I tried this:

    ffmpeg -i video.mp4 -filter_complex "[0]geq='st(3,pow(X-(W/2),2)+pow(Y-(H/2),2));if(lte(ld(3),pow(min(W/2,H/2),2)),255,0)':H:W; [0:v][mask]alphamerge" out.mp4

    [mov,mp4,m4a,3gp,3g2,mj2 @ 000001f761dd8e40] Invalid stream specifier: mask.
        Last message repeated 1 times
    Stream specifier 'mask' in filtergraph description [0]geq='st(3,pow(X-(W/2),2)+pow(Y-(H/2),2));if(lte(ld(3),pow(min(W/2,H/2),2)),255,0)':H:W; [0:v][mask]alphamerge matches no streams.

    ffmpeg -i video.mp4 -filter_complex "[0]geq=lum_expr='st(3,pow(X-(W/2),2)+pow(Y-(H/2),2));if(lte(ld(3),pow(min(W/2,H/2),2)),255,0)':H:W; [0:v][mask]alphamerge" out.mp4

    [mov,mp4,m4a,3gp,3g2,mj2 @ 000001bfd9218e80] Invalid stream specifier: mask.
        Last message repeated 1 times
    Stream specifier 'mask' in filtergraph description [0]geq=lum_expr='st(3,pow(X-(W/2),2)+pow(Y-(H/2),2));if(lte(ld(3),pow(min(W/2,H/2),2)),255,0)':H:W; [0:v][mask]alphamerge matches no streams.
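
    A likely reason both attempts fail: the geq chain's output is never given the [mask] label, so the later [0:v][mask]alphamerge reference matches no stream (and ':H:W' passes the frame height/width where geq expects chroma expressions). A minimal sketch of a corrected graph, assuming a VP9/WebM output is acceptable since H.264 in MP4 cannot carry an alpha channel; video.mp4 and out.webm are placeholder names:

    ffmpeg -i video.mp4 -filter_complex \
    "[0:v]split[base][m];
     [m]geq='st(3,pow(X-(W/2),2)+pow(Y-(H/2),2));if(lte(ld(3),pow(min(W/2,H/2),2)),255,0)':128:128,format=gray[mask];
     [base][mask]alphamerge[out]" \
    -map "[out]" -c:v libvpx-vp9 -pix_fmt yuva420p out.webm

    The split is needed because the same decoded stream feeds both the mask generator and alphamerge.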

    And once more: if you are going to post a "prepared image mask" solution, please don't. The question is about creating the mask on the fly.

    So, let's say we have a red square (yes, the ratio is static, always 1:1). I can't post the images inline because I don't have 10 rep. (...)

    https://i.sstatic.net/MsL71.png - red square.

    https://i.sstatic.net/aIFEV.png - circle

    https://i.sstatic.net/R8EAx.png - result

    https://i.sstatic.net/WtqQg.png - final result

    I would really like an answer from @Gyan or @llogan, because I have searched a lot and these two are the ones who really understand how to do this programmatically.

    More technical details: the aspect ratio is constant (1:1); the width and height should be taken from the input video automatically; we need to create a white square with a transparent circle inside it; and the end result must contain the "rounded" video on a white background.
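
    Putting those requirements together, one possible approach (a sketch, not a tested solution; video.mp4 and out.mp4 are placeholders) is to build the circular mask with geq, cut the video out with alphamerge, and flatten the cutout onto a white color source sized to the input via scale2ref:

    ffmpeg -i video.mp4 -filter_complex \
    "[0:v]split[base][m];
     [m]geq='st(3,pow(X-(W/2),2)+pow(Y-(H/2),2));if(lte(ld(3),pow(min(W/2,H/2),2)),255,0)':128:128,format=gray[mask];
     [base][mask]alphamerge[cutout];
     color=white[bg];
     [bg][cutout]scale2ref[bg2][fg];
     [bg2][fg]overlay=shortest=1,format=yuv420p[v]" \
    -map "[v]" -map 0:a? -c:v libx264 -c:a copy out.mp4

    Since the input is 1:1, min(W/2,H/2) equals W/2, so the circle exactly touches the square's edges; scale2ref sizes the white background to the input automatically, and shortest=1 stops the infinite color source when the video ends.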

  • Using FFmpeg, how do we add subtitles in the black bar area or under the video?

    26 September 2020, by DunceDancer

    I followed these steps:

    1. Added the black bars

      -vf "scale=1920:1080:force_original_aspect_ratio=decrease,pad=1920:1080:(ow-iw)/2:(oh-ih)/2,setsar=1"
      Source: How to add black borders to video

    2. Added the subtitles ("burned" them into the video)

      ffmpeg -i "input.mp4" -lavfi "subtitles=subtitles.srt:force_style='Alignment=0,OutlineColour=&H100000000,BorderStyle=3,Outline=1,Shadow=0,Fontsize=18,MarginL=5,MarginV=25'" -crf 1 -c:a copy "output.mp4"
      Source: ffmpeg subtitles alignment and position

    Now I am stuck on how to place the subtitles under the video, i.e. inside the black bar.
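
    One way to do this (a sketch, assuming the source is wider than 16:9 so the padding ends up below the picture; file names are placeholders) is to merge the two steps into one pass, anchor the video at the top of the padded frame, and give the subtitles a bottom alignment with a small vertical margin so they render inside the black bar:

    ffmpeg -i input.mp4 -vf "scale=1920:1080:force_original_aspect_ratio=decrease,pad=1920:1080:(ow-iw)/2:0,setsar=1,subtitles=subtitles.srt:force_style='Alignment=2,Fontsize=18,MarginV=30'" -crf 18 -c:a copy output.mp4

    Because the subtitles filter runs after pad, its margins are measured against the full 1080-pixel padded frame, so MarginV=30 places the text 30 pixels above the bottom edge, in the bar rather than over the picture; doing both steps in one command also avoids encoding the video twice.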

    Edit: screenshot added to clarify.

    Screenshot of the Problem

  • dnn_backend_native_layer_mathunary: add floor support

    6 August 2020, by Mingyu Yin
    dnn_backend_native_layer_mathunary: add floor support

    It can be tested with the model generated by the Python script below:

    import tensorflow as tf
    import os
    import numpy as np
    import imageio
    from tensorflow.python.framework import graph_util

    name = 'floor'

    pb_file_path = os.getcwd()
    if not os.path.exists(pb_file_path + '/{}_savemodel/'.format(name)):
        os.mkdir(pb_file_path + '/{}_savemodel/'.format(name))

    with tf.Session(graph=tf.Graph()) as sess:
        in_img = imageio.imread('detection.jpg')
        in_img = in_img.astype(np.float32)
        in_data = in_img[np.newaxis, :]
        input_x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
        y_ = tf.math.floor(input_x * 255) / 255
        y = tf.identity(y_, name='dnn_out')
        sess.run(tf.global_variables_initializer())
        constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])

        with tf.gfile.FastGFile(pb_file_path + '/{}_savemodel/model.pb'.format(name), mode='wb') as f:
            f.write(constant_graph.SerializeToString())

        print("model.pb generated, please in ffmpeg path use\n \n \
    python tools/python/convert.py {}_savemodel/model.pb --outdir={}_savemodel/ \n \nto generate model.model\n".format(name, name))

        output = sess.run(y, feed_dict={input_x: in_data})
        imageio.imsave("out.jpg", np.squeeze(output))

        print("To verify, please ffmpeg path use\n \n \
    ./ffmpeg -i detection.jpg -vf format=rgb24,dnn_processing=model={}_savemodel/model.pb:input=dnn_in:output=dnn_out:dnn_backend=tensorflow -f framemd5 {}_savemodel/tensorflow_out.md5\n \
    or\n \
    ./ffmpeg -i detection.jpg -vf format=rgb24,dnn_processing=model={}_savemodel/model.pb:input=dnn_in:output=dnn_out:dnn_backend=tensorflow {}_savemodel/out_tensorflow.jpg\n \nto generate output result of tensorflow model\n".format(name, name, name, name))

        print("To verify, please ffmpeg path use\n \n \
    ./ffmpeg -i detection.jpg -vf format=rgb24,dnn_processing=model={}_savemodel/model.model:input=dnn_in:output=dnn_out:dnn_backend=native -f framemd5 {}_savemodel/native_out.md5\n \
    or \n \
    ./ffmpeg -i detection.jpg -vf format=rgb24,dnn_processing=model={}_savemodel/model.model:input=dnn_in:output=dnn_out:dnn_backend=native {}_savemodel/out_native.jpg\n \nto generate output result of native model\n".format(name, name, name, name))

    Signed-off-by: Mingyu Yin <mingyu.yin@intel.com>

    • [DH] libavfilter/dnn/dnn_backend_native_layer_mathunary.c
    • [DH] libavfilter/dnn/dnn_backend_native_layer_mathunary.h
    • [DH] tests/dnn/dnn-layer-mathunary-test.c
    • [DH] tools/python/convert_from_tensorflow.py
    • [DH] tools/python/convert_header.py