
Other articles (65)

  • Customising by adding your own logo, banner or background image

    5 September 2013, by

    Some themes support three personalisation elements: adding a logo; adding a banner; adding a background image.

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or later. If needed, contact the administrator of your MediaSPIP to find out.

  • Contributing to its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
    To do so, we use the SPIP translation interface, where all of MediaSPIP's language modules are available. Just subscribe to the translators' mailing list to ask for more information.
    MediaSPIP is currently only available in French and (...)

On other sites (10441)

  • How can I re-encode a video to match another's codec exactly?

    24 January 2020, by Stephen Schrauger

    When I’m on vacation, I usually use our camcorder to record videos. Since they’re all the same format, I can use ffmpeg to concat them into one large, smooth video without re-encoding.

    However, sometimes I will use a phone or other camera to record a video (if the camcorder ran out of space/battery or was left at a hotel).

    I'd like to determine the codec, framerate, etc. used by my camcorder and use those parameters to convert the phone videos into the same format. That way, I will be able to concatenate all the videos without re-encoding the camcorder videos.

    Using ffprobe, I found my camcorder has this encoding:

     Input #0, mpegts, from 'camcorderfile.MTS':
     Duration: 00:00:09.54, start: 1.936367, bitrate: 24761 kb/s
     Program 1
       Stream #0:0[0x1011]: Video: h264 (High) (HDPR / 0x52504448), yuv420p(progressive), 1920x1080 [SAR 1:1 DAR 16:9], 59.94 fps, 59.94 tbr, 90k tbn, 119.88 tbc
       Stream #0:1[0x1100]: Audio: ac3 (AC-3 / 0x332D4341), 48000 Hz, stereo, fltp, 256 kb/s
       Stream #0:2[0x1200]: Subtitle: hdmv_pgs_subtitle ([144][0][0][0] / 0x0090), 1920x1080

    The phone (iPhone 5s) encoding is:

     Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'mov.MOV':
     Metadata:
       major_brand     : qt  
       minor_version   : 0
       compatible_brands: qt  
       creation_time   : 2017-01-02T03:04:05.000000Z
       com.apple.quicktime.location.ISO6709: +12.3456-789.0123+456.789/
       com.apple.quicktime.make: Apple
       com.apple.quicktime.model: iPhone 5s
       com.apple.quicktime.software: 10.2.1
       com.apple.quicktime.creationdate: 2017-01-02T03:04:05-0700
     Duration: 00:00:14.38, start: 0.000000, bitrate: 11940 kb/s
       Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709), 1920x1080, 11865 kb/s, 29.98 fps, 29.97 tbr, 600 tbn, 1200 tbc (default)
       Metadata:
         creation_time   : 2017-01-02T03:04:05.000000Z
         handler_name    : Core Media Data Handler
         encoder         : H.264
       Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 63 kb/s (default)
       Metadata:
         creation_time   : 2017-01-02T03:04:05.000000Z
         handler_name    : Core Media Data Handler
       Stream #0:2(und): Data: none (mebx / 0x7862656D), 0 kb/s (default)
       Metadata:
         creation_time   : 2017-01-02T03:04:05.000000Z
         handler_name    : Core Media Data Handler
       Stream #0:3(und): Data: none (mebx / 0x7862656D), 0 kb/s (default)
       Metadata:
         creation_time   : 2017-01-02T03:04:05.000000Z
         handler_name    : Core Media Data Handler

    I'm presuming that ffmpeg will automatically take any acceptable video format, and that I only need to figure out the output settings. I think I need to use -s 1920x1080 and -pix_fmt yuv420p for the output, but what other flags do I need in order to make the phone video into the same encoding as the camcorder video?

    Can I get some pointers as to how I can translate the ffprobe output into the flags I need to give to ffmpeg?

    Edit: Added the entire Input #0 for both media files.
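
    One way to approach this (a sketch, not a verified recipe) is to transcribe the stream parameters from the ffprobe output above into ffmpeg output flags. The values below come from the camcorder probe quoted in the question; the choice of libx264 as the encoder for the h264 stream, and the exact flag set, are assumptions.

    ```python
    # Parameters transcribed from the camcorder's ffprobe output above.
    # Mapping them to ffmpeg flags this way is an assumption, not a verified recipe.
    camcorder = {
        "pix_fmt": "yuv420p",    # Video: h264 (High), yuv420p(progressive)
        "size": "1920x1080",
        "fps": "60000/1001",     # 59.94 fps expressed as an exact rational
        "acodec": "ac3",         # Audio: ac3, 48000 Hz, stereo, 256 kb/s
        "sample_rate": "48000",
        "channels": "2",
        "audio_bitrate": "256k",
        "container": "mpegts",   # Input #0, mpegts
    }

    def build_ffmpeg_cmd(p, infile, outfile):
        """Assemble an ffmpeg command line matching the probed parameters."""
        return [
            "ffmpeg", "-i", infile,
            "-c:v", "libx264", "-profile:v", "high",  # h264 (High)
            "-pix_fmt", p["pix_fmt"],
            "-s", p["size"],
            "-r", p["fps"],
            "-c:a", p["acodec"],
            "-ar", p["sample_rate"],
            "-ac", p["channels"],
            "-b:a", p["audio_bitrate"],
            "-f", p["container"],
            outfile,
        ]

    print(" ".join(build_ffmpeg_cmd(camcorder, "phone.MOV", "phone_converted.ts")))
    ```

    The .ts output keeps the MPEG-TS container the camcorder uses, which is what a later concat would expect; timebases and H.264 level may still differ between the files, so a short test concatenation is worth doing before trusting the result.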

  • Recording view to videofile

    4 February 2016, by Paul Freez

    I have some kind of a TextureView/SurfaceView where the user can draw (lines, figures and so on) with gestures, and I need to record everything he performs and save it to a video file on storage.

    At first, I was trying to use ffmpeg:

    1. Capture bitmaps (frames) of the specific view (using a canvas)
    2. Save every bitmap to storage in some folder
    3. Using ffmpeg, compress all the saved images (frames) from the folder into an mp4 video file.

    That approach didn't work for me, because a single operation of getting a bitmap from the view (not even saving it) takes about 60 ms, so the video ends up below 20 fps. Plus I needed to save those frames/bitmaps on the device; with each image compressed to about 1 MB, more than 15 MB of images would have to be written every second!
    Maybe I've missed some other approach to using ffmpeg at runtime while recording, but I haven't found a way to do that.
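
    For reference, step 3 of that pipeline is usually a single ffmpeg invocation over the numbered frame files. A minimal sketch that only builds the command line (the frame filename pattern and output name are hypothetical):

    ```python
    def frames_to_mp4_cmd(pattern="frame_%04d.png", fps=25, out="drawing.mp4"):
        # ffmpeg reads the numbered PNG frames at the given framerate and
        # encodes them with H.264; yuv420p keeps the file widely playable.
        return ["ffmpeg", "-framerate", str(fps), "-i", pattern,
                "-c:v", "libx264", "-pix_fmt", "yuv420p", out]

    print(" ".join(frames_to_mp4_cmd()))
    ```

    This only addresses the encoding step, not the capture bottleneck described above: at 60 ms per bitmap grab, the frames themselves arrive too slowly regardless of how they are encoded.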

    The second way I found was to record the screen via Android's own API tools, which were introduced in Android 5.0. But, first of all, I'm targeting Android 4.1; and second, this approach would only let me record the whole screen with the whole UI on it (and I need only one specific view). Also, screen capturing is somehow possible on Android 4.4, but it requires root.

    So, after all the research I did, I found a very interesting app, Coach's Eye. It allows drawing figures on existing videos and recording all of that (plus recording sound). As far as I can see, it captures not the whole screen with UI and stuff, but only the video and the top layer with the drawing canvas. So I'm curious: is there any approach to do what that app does?

    My basic requirements are:

    • Android 4.1
    • No root
    • Capture specific view
    • 25fps or more

    If you have any ideas of how this can be done, please feel free to share !

  • vf_dnn_processing : add support for more formats gray8 and grayf32

    27 December 2019, by Guo, Yejun
    vf_dnn_processing : add support for more formats gray8 and grayf32
    

    The following is a Python script to halve the values of a gray
    image. It demos how to set up and execute a DNN model with python+tensorflow.
    It also generates the .pb file which will be used by ffmpeg.

    import tensorflow as tf
    import numpy as np
    from skimage import color
    from skimage import io
    in_img = io.imread('input.jpg')
    in_img = color.rgb2gray(in_img)
    io.imsave('ori_gray.jpg', np.squeeze(in_img))
    in_data = np.expand_dims(in_img, axis=0)
    in_data = np.expand_dims(in_data, axis=3)
    filter_data = np.array([0.5]).reshape(1,1,1,1).astype(np.float32)
    filter = tf.Variable(filter_data)
    x = tf.placeholder(tf.float32, shape=[1, None, None, 1], name='dnn_in')
    y = tf.nn.conv2d(x, filter, strides=[1, 1, 1, 1], padding='VALID', name='dnn_out')
    sess=tf.Session()
    sess.run(tf.global_variables_initializer())
    graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
    tf.train.write_graph(graph_def, '.', 'halve_gray_float.pb', as_text=False)
    print("halve_gray_float.pb generated, please use \
    path_to_ffmpeg/tools/python/convert.py to generate halve_gray_float.model\n")
    output = sess.run(y, feed_dict={x: in_data})
    output = output * 255.0
    output = output.astype(np.uint8)
    io.imsave("out.jpg", np.squeeze(output))

    To do the same thing with ffmpeg:
    - generate halve_gray_float.pb with the above script
    - generate halve_gray_float.model with tools/python/convert.py
    - try with following commands
    ./ffmpeg -i input.jpg -vf format=grayf32,dnn_processing=model=halve_gray_float.model:input=dnn_in:output=dnn_out:dnn_backend=native out.native.png
    ./ffmpeg -i input.jpg -vf format=grayf32,dnn_processing=model=halve_gray_float.pb:input=dnn_in:output=dnn_out:dnn_backend=tensorflow out.tf.png

    Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
    Signed-off-by: Pedro Arthur <bygrandao@gmail.com>

    • [DH] doc/filters.texi
    • [DH] libavfilter/vf_dnn_processing.c