Other articles (68)

  • Customizing by adding your logo, banner, or background image

    5 September 2013, by

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013, by

    Present changes to your MediaSPIP, or news about your projects on your MediaSPIP, using the news section.
    In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customize the news-item creation form.
    News-item creation form: for a document of the news type, the fields offered by default are: Publication date (customize the publication date) (...)

  • User profiles

    12 April 2011, by

    Each user has a profile page that lets them edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized; it is visible only when the visitor is logged in to the site.
    The user can access profile editing from their author page; a "Modify your profile" link in the navigation is (...)

On other sites (6415)

  • No such file or directory when running an ffmpeg command from script

    23 November 2017, by A_Matar

    I have been trying to run this ffmpeg command from a Python script to generate a video of a certain length from a static image, but I keep getting a No such file or directory error!
    Here is my code:

    import subprocess

    def generate_white_vid (duration):
       ffmpeg_create_vid_from_static_img = 'ffmpeg -loop 1 -i /same_path/WhiteBackground.jpg -c:v libx264 -t %f -pix_fmt yuv420p -vf scale=1920:1080 /same_path/white_vid2.mp4' % duration
       print ffmpeg_create_vid_from_static_img
       pp = subprocess.Popen(ffmpeg_create_vid_from_static_img)
       pp.communicate()

    generate_white_vid(0.5)

    However, when I run the exact same command:

    ffmpeg -loop 1 -i /same_path/WhiteBackground.jpg -t 0.500000 -pix_fmt yuv420p -vf scale=1920:1080 /same_path/white_vid2.mp4

    from the CLI, it works just fine. Where am I messing up?
    Here is the full trace:

    Traceback (most recent call last):
     File "gen.py", line 10, in <module>
       generate_white_vid(0.5)
     File "gen.py", line 7, in generate_white_vid
       pp = subprocess.Popen(ffmpeg_create_vid_from_static_img)
     File "/home/ubuntu/anaconda2/lib/python2.7/subprocess.py", line 390, in __init__
       errread, errwrite)
     File "/home/ubuntu/anaconda2/lib/python2.7/subprocess.py", line 1024, in _execute_child
       raise child_exception
    OSError: [Errno 2] No such file or directory

    When I use a list to pass the ffmpeg command's parameters, as follows: ffmpeg_create_vid_from_static_img = ['ffmpeg', '-loop', '1', '-i', '/same_path/WhiteBackground.jpg', '-c:v', 'libx264', '-t', duration, '-pix_fmt', 'yuv420p', '-vf', 'scale=1920:1080', '/same_path/white_vid.mp4'], I get a type error (a fix is sketched after the traceback below):

    TypeError                                 Traceback (most recent call last)
    in <module>()
    ----> 1 generate_white_vid(0.5)

    in generate_white_vid(duration)
         3     ffmpeg_create_vid_from_static_img = ['ffmpeg', '-loop', '1', '-i', '/home/ubuntu/matar/multispectral/WhiteBackground.jpg', '-c:v', 'libx264', '-t', duration, '-pix_fmt yuv420p', '-vf', 'scale=1920:1080', '/home/ubuntu/matar/multispectral/white_vid.mp4']
         4     print ffmpeg_create_vid_from_static_img
    ----> 5     pp = subprocess.Popen(ffmpeg_create_vid_from_static_img)
         6     pp.communicate()

    /home/ubuntu/anaconda2/lib/python2.7/subprocess.pyc in __init__(self, args, bufsize, executable, stdin, stdout, stderr, preexec_fn, close_fds, shell, cwd, env, universal_newlines, startupinfo, creationflags)
       388                                 p2cread, p2cwrite,
       389                                 c2pread, c2pwrite,
    --> 390                                 errread, errwrite)
       391         except Exception:
       392             # Preserve original exception in case os.close raises.

    /home/ubuntu/anaconda2/lib/python2.7/subprocess.pyc in _execute_child(self, args, executable, preexec_fn, close_fds, cwd, env, universal_newlines, startupinfo, creationflags, shell, to_close, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite)
      1022                         raise
      1023                 child_exception = pickle.loads(data)
    -> 1024                 raise child_exception
      1025
      1026

    TypeError: execv() arg 2 must contain only strings
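    A minimal sketch of the usual fix, reusing the paths from the question: Popen treats a single string as the name of one executable unless shell=True is passed, and when the arguments are given as a list, every element must be a string, so the numeric duration has to be converted explicitly.

    import subprocess

    def generate_white_vid(duration):
        # Pass the command as a list of strings; execv() rejects
        # non-string elements such as the float duration.
        cmd = ['ffmpeg', '-loop', '1',
               '-i', '/same_path/WhiteBackground.jpg',
               '-c:v', 'libx264', '-t', '%f' % duration,
               '-pix_fmt', 'yuv420p', '-vf', 'scale=1920:1080',
               '/same_path/white_vid2.mp4']
        pp = subprocess.Popen(cmd)
        pp.communicate()

    generate_white_vid(0.5)

    Alternatively, shlex.split() can turn the original command string into such a list, or shell=True can be passed to Popen, though the list form is generally preferred.
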
  • Getting video properties with Python without calling external software

    24 July 2019, by ullix

    [Update:] Yes, it is possible, now some 20 months later. See Update3 below! [/update]

    Is that really impossible? All I could find were variants of calling FFmpeg (or other software). My current solution is shown below, but what I would really like, for portability, is a Python-only solution that doesn't require users to install additional software.

    After all, I can easily play videos using PyQt's Phonon, yet I can't get simple things like the dimensions or duration of the video?

    My solution uses ffmpy (http://ffmpy.readthedocs.io/en/latest/ffmpy.html), which is a wrapper for FFmpeg and FFprobe (http://trac.ffmpeg.org/wiki/FFprobeTips). It is smoother than other offerings, yet it still requires an additional FFmpeg installation.

       import ffmpy, subprocess, json
       ffprobe = ffmpy.FFprobe(global_options="-loglevel quiet -sexagesimal -of json -show_entries stream=width,height,duration -show_entries format=duration -select_streams v:0", inputs={"myvideo.mp4": None})
       print("ffprobe.cmd:", ffprobe.cmd)  # printout the resulting ffprobe shell command
       stdout, stderr = ffprobe.run(stderr=subprocess.PIPE, stdout=subprocess.PIPE)
       # std* is byte sequence, but json in Python 3.5.2 requires str
       ff0string = str(stdout,'utf-8')

       ffinfo = json.loads(ff0string)
       print(json.dumps(ffinfo, indent=4)) # pretty print

       print("Video Dimensions: {}x{}".format(ffinfo["streams"][0]["width"], ffinfo["streams"][0]["height"]))
       print("Streams Duration:", ffinfo["streams"][0]["duration"])
       print("Format Duration: ", ffinfo["format"]["duration"])

    This results in the output:

       ffprobe.cmd: ffprobe -loglevel quiet -sexagesimal -of json -show_entries stream=width,height,duration -show_entries format=duration -select_streams v:0 -i myvideo.mp4
       {
           "streams": [
               {
                   "duration": "0:00:32.033333",
                   "width": 1920,
                   "height": 1080
               }
           ],
           "programs": [],
           "format": {
               "duration": "0:00:32.064000"
           }
       }
       Video Dimensions: 1920x1080
       Streams Duration: 0:00:32.033333
       Format Duration:  0:00:32.064000

    UPDATE after several days of experimentation: The hachoir solution, as proposed by Nick below, does work, but will give you a lot of headaches, as the hachoir responses are too unpredictable. Not my choice.
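
    For reference, a rough sketch of what the hachoir route looks like, assuming the hachoir package from PyPI and its createParser / extractMetadata helpers; which metadata keys come back depends on the file, which is part of the unpredictability mentioned above:

    from hachoir.parser import createParser
    from hachoir.metadata import extractMetadata

    parser = createParser("myvideo.mp4")
    metadata = extractMetadata(parser) if parser else None
    if metadata:
        # "width", "height" and "duration" are only present when
        # hachoir managed to recover them from this particular file
        for key in ("width", "height", "duration"):
            if metadata.has(key):
                print(key, ":", metadata.get(key))
    else:
        print("hachoir could not extract any metadata")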

    With OpenCV, coding couldn't be any easier:

    import cv2
    vid = cv2.VideoCapture( picfilename)
    height = vid.get(cv2.CAP_PROP_FRAME_HEIGHT) # always 0 in Linux python3
    width  = vid.get(cv2.CAP_PROP_FRAME_WIDTH)  # always 0 in Linux python3
    print ("opencv: height:{} width:{}".format( height, width))

    The problem is that it works well on Python 2 but not on Python 3. Quote: "IMPORTANT NOTE: MacOS and Linux packages do not support video related functionality (not compiled with FFmpeg)" (https://pypi.python.org/pypi/opencv-python).

    On top of this, it seems that OpenCV needs the FFmpeg binary packages to be present at runtime (https://docs.opencv.org/3.3.1/d0/da7/videoio_overview.html).

    Well, if I need an installation of FFmpeg anyway, I can stick to my original ffmpy example shown above :-/

    Thanks for the help.

    UPDATE2: master_q (see below) proposed MediaInfo. While this failed to work on my Linux system (see my comments), the alternative of using pymediainfo, a Python wrapper for MediaInfo, did work. It is simple to use, but it takes four times longer than my initial ffprobe approach to obtain the duration, width, and height, and it still needs external software, i.e. MediaInfo:

    from pymediainfo import MediaInfo
    media_info = MediaInfo.parse("myvideofile")
    for track in media_info.tracks:
       if track.track_type == 'Video':
           print("duration (millisec):", track.duration)
           print("width, height:", track.width, track.height)

    UPDATE3: OpenCV is finally available for Python 3 and is claimed to run on Linux, Windows, and Mac! It makes things really easy, and I verified that external software - in particular FFmpeg - is NOT needed!

    First install OpenCV via pip:

    pip install opencv-python

    Run in Python:

    import cv2
    cv2video = cv2.VideoCapture( videofilename)
    height = cv2video.get(cv2.CAP_PROP_FRAME_HEIGHT)
    width  = cv2video.get(cv2.CAP_PROP_FRAME_WIDTH)
    print ("Video Dimension: height:{} width:{}".format( height, width))

    framecount = cv2video.get(cv2.CAP_PROP_FRAME_COUNT )
    frames_per_sec = cv2video.get(cv2.CAP_PROP_FPS)
    print("Video duration (sec):", framecount / frames_per_sec)

    # equally easy to get this info from images
    cv2image = cv2.imread(imagefilename, flags=cv2.IMREAD_COLOR  )
    height, width, channel  = cv2image.shape
    print ("Image Dimension: height:{} width:{}".format( height, width))

    I also needed the first frame of a video as an image, and I had used ffmpeg for this to save the image to the file system. This, too, is easier with OpenCV:

    hasFrames, cv2image = cv2video.read()   # reads 1st frame
    cv2.imwrite("myfilename.png", cv2image) # extension defines image type

    Even better, since I need the image only in memory for use with the PyQt5 toolkit, I can read the cv2 image directly into a Qt image:

    bytesPerLine = 3 * width
    # my_qt_image = QImage(cv2image, width, height, bytesPerLine, QImage.Format_RGB888) # may give false colors!
    my_qt_image = QImage(cv2image.data, width, height, bytesPerLine, QImage.Format_RGB888).rgbSwapped() # correct colors on my systems

    As OpenCV is a huge program, I was concerned about timing. It turned out that OpenCV was never behind the alternatives. It takes some 100 ms to read a slide; everything else combined never takes more than 10 ms.

    I tested this successfully on Ubuntu Mate 16.04, 18.04, and 19.04, and on two different installations of Windows 10 Pro. (I did not have a Mac available.) I am really delighted with OpenCV!

    You can see it in action in my SlideSorter program, which lets you sort images and videos, preserve the sort order, and present them as a slideshow. Available here: https://sourceforge.net/projects/slidesorter/

  • examples: Add a VA-API encode example.

    6 November 2017, by Jun Zhao

    Supports only raw NV12 input.

    Example use:
    ./vaapi_encode 1920 1080 test.yuv test.h264

    Signed-off-by: Jun Zhao <jun.zhao@intel.com>
    Signed-off-by: Liu, Kaixuan <kaixuan.liu@intel.com>
    Signed-off-by: Mark Thompson <sw@jkqxz.net>

    • [DH] configure
    • [DH] doc/examples/Makefile
    • [DH] doc/examples/vaapi_encode.c
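
    The commit itself does not show how to produce the raw NV12 input file; one way to generate a short test clip, assuming an ffmpeg build with the lavfi test sources, would be:

    ffmpeg -f lavfi -i testsrc2=size=1920x1080:rate=30 -t 5 -pix_fmt nv12 -f rawvideo test.yuv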