Media (0)

Keyword: - Tags - / latitude

No media matching your criteria is available on this site.

Other articles (69)

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The player used by MediaSPIP was created specifically for it and can easily be adapted to fit a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • From upload to the final video [standalone version]

    31 January 2010, by

    The path of an audio or video document through SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First of all, a SPIP article must be created and the "source" video document attached to it.
    When this document is attached to the article, two actions are executed in addition to the normal behaviour: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)

  • Libraries and binaries specific to video and audio processing

    31 January 2010, by

    The following software and libraries are used by SPIPmotion in one way or another.
    Required binaries: FFmpeg: the main encoder, used to transcode almost all types of video and audio files into formats playable on the Internet. See this tutorial for its installation; Oggz-tools: tools for inspecting Ogg files; MediaInfo: retrieves information from most video and audio formats;
    Additional, optional binaries: flvtool2: (...)

On other sites (11862)

  • Android ffmpeg adb shell Unknown encoder 'libx264'

    25 November 2017, by IChp

    When I run ffmpeg on Android via adb shell, it shows this error:

    Duration: 00:00:12.00, start: 0.000000, bitrate: 30412 kb/s
      Stream #0:0: Video: rawvideo (I420 / 0x30323449), yuv420p, 352x288, 30412 kb/s, 25 tbr, 25 tbn, 25 tbc
    Unknown encoder 'libx264'

    I don’t understand what went wrong; it has bothered me for days.
    Can you help me out? Thanks in advance!

    (I pushed the compiled libffmpeg.so to /system/lib and pushed ffmpeg to /system/bin)

    Target: compile ffmpeg with x264 and run libffmpeg.so on an Android device via adb shell.

    Build environment: Ubuntu 16.0 32-bit, NDK r10b 32-bit, platform 15, FFmpeg 3.0, x264 latest.

    My configure:

    cd ffmpeg-3.0.9
    export NDK=/home/ichp/project/android-ndk-r10b
    export PREBUILT=$NDK/toolchains/arm-linux-androideabi-4.8/prebuilt
    export PLATFORM=$NDK/platforms/android-15/arch-arm
    export PREFIX=../simplefflib
    export CURRENT_PATH=/home/ichp/project/FREYA-LIVE-LIBRARY-OPTIMIZER-FOR-ANDROID

    ./configure --target-os=linux --prefix=$PREFIX \
      --enable-cross-compile --enable-runtime-cpudetect --enable-asm \
      --arch=arm --cpu=armv7-a \
      --enable-libx264 --enable-encoder=libx264 --disable-encoders \
      --disable-protocols --enable-protocol=file --enable-version3 \
      --cc=$PREBUILT/linux-x86/bin/arm-linux-androideabi-gcc \
      --cross-prefix=$PREBUILT/linux-x86/bin/arm-linux-androideabi- \
      --disable-stripping --nm=$PREBUILT/linux-x86/bin/arm-linux-androideabi-nm \
      --sysroot=$PLATFORM --enable-gpl --disable-shared --enable-static \
      --enable-small --disable-ffprobe --disable-ffplay --enable-ffmpeg \
      --disable-ffserver --disable-debug --enable-pthreads --enable-neon \
      --extra-cflags="-I$CURRENT_PATH/temp/armeabi-v7a/include -fPIC -marm -DANDROID -DNDEBUG -static -O3 -march=armv7-a -mfpu=neon -mtune=generic-armv7-a -mfloat-abi=softfp -ftree-vectorize -mvectorize-with-neon-quad -ffast-math" \
      --extra-ldflags="-L$CURRENT_PATH/temp/armeabi-v7a/lib"

    make clean
    make
    make install
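
    (Not part of the original question: one quick way to narrow this down, assuming adb is on the PATH and the pushed binary is the one at /system/bin/ffmpeg, is to ask the on-device binary which encoders it was actually built with; if libx264 does not appear in that list, the configure step dropped it silently.)

    import subprocess

    # Hedged diagnostic sketch: run the on-device ffmpeg via adb and check
    # whether "libx264" shows up among its compiled-in encoders.
    result = subprocess.run(
        ["adb", "shell", "/system/bin/ffmpeg", "-hide_banner", "-encoders"],
        capture_output=True, text=True)

    if "libx264" in result.stdout:
        print("libx264 encoder is present in the device build")
    else:
        print("libx264 encoder is missing; re-check the configure output and config.h")
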
  • Getting video properties with Python without calling external software

    24 July 2019, by ullix

    [Update:] Yes, it is possible, now some 20 months later. See Update 3 below! [/update]

    Is that really impossible? All I could find were variants of calling FFmpeg (or other software). My current solution is shown below, but what I would really like, for portability, is a Python-only solution that doesn’t require users to install additional software.

    After all, I can easily play videos using PyQt’s Phonon, yet I can’t get simple things like the dimensions or duration of the video?

    My solution uses ffmpy (http://ffmpy.readthedocs.io/en/latest/ffmpy.html), which is a wrapper for FFmpeg and FFprobe (http://trac.ffmpeg.org/wiki/FFprobeTips). It is smoother than other offerings, yet it still requires an additional FFmpeg installation.

       import ffmpy, subprocess, json
       ffprobe = ffmpy.FFprobe(global_options="-loglevel quiet -sexagesimal -of json -show_entries stream=width,height,duration -show_entries format=duration -select_streams v:0", inputs={"myvideo.mp4": None})
       print("ffprobe.cmd:", ffprobe.cmd)  # printout the resulting ffprobe shell command
       stdout, stderr = ffprobe.run(stderr=subprocess.PIPE, stdout=subprocess.PIPE)
       # std* is byte sequence, but json in Python 3.5.2 requires str
       ff0string = str(stdout,'utf-8')

       ffinfo = json.loads(ff0string)
       print(json.dumps(ffinfo, indent=4)) # pretty print

       print("Video Dimensions: {}x{}".format(ffinfo["streams"][0]["width"], ffinfo["streams"][0]["height"]))
       print("Streams Duration:", ffinfo["streams"][0]["duration"])
       print("Format Duration: ", ffinfo["format"]["duration"])

    This results in the following output:

       ffprobe.cmd: ffprobe -loglevel quiet -sexagesimal -of json -show_entries stream=width,height,duration -show_entries format=duration -select_streams v:0 -i myvideo.mp4
       {
           "streams": [
               {
                   "duration": "0:00:32.033333",
                   "width": 1920,
                   "height": 1080
               }
           ],
           "programs": [],
           "format": {
               "duration": "0:00:32.064000"
           }
       }
       Video Dimensions: 1920x1080
       Streams Duration: 0:00:32.033333
       Format Duration:  0:00:32.064000

    UPDATE after several days of experimentation: The hachoir solution as proposed by Nick below does work, but will give you a lot of headaches, as the hachoir responses are too unpredictable. Not my choice.
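
    (For reference, a minimal sketch of that hachoir approach, not from the original post: it assumes the hachoir package from PyPI, and the metadata keys shown are the ones hachoir typically reports for video files; depending on the container they may simply be missing, which is exactly the unpredictability mentioned above.)

    from hachoir.parser import createParser
    from hachoir.metadata import extractMetadata

    parser = createParser("myvideo.mp4")          # placeholder file name
    if parser is None:
        raise RuntimeError("hachoir could not identify the file")
    with parser:
        metadata = extractMetadata(parser)

    if metadata is not None:
        print("width:   ", metadata.get("width", None))
        print("height:  ", metadata.get("height", None))
        print("duration:", metadata.get("duration", None))  # a datetime.timedelta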

    With OpenCV, the coding couldn’t be any easier:

    import cv2
    vid = cv2.VideoCapture( picfilename)
    height = vid.get(cv2.CAP_PROP_FRAME_HEIGHT) # always 0 in Linux python3
    width  = vid.get(cv2.CAP_PROP_FRAME_WIDTH)  # always 0 in Linux python3
    print ("opencv: height:{} width:{}".format( height, width))

    The problem is that it works well on Python 2 but not on Python 3. Quote: "IMPORTANT NOTE: MacOS and Linux packages do not support video related functionality (not compiled with FFmpeg)" (https://pypi.python.org/pypi/opencv-python).

    On top of this, it seems that OpenCV needs the FFmpeg binary packages to be present at runtime (https://docs.opencv.org/3.3.1/d0/da7/videoio_overview.html).

    Well, if I need an installation of FFmpeg anyway, I can stick to my original ffmpy example shown above :-/

    Thanks for the help.

    UPDATE 2: master_q (see below) proposed MediaInfo. While this failed to work on my Linux system (see my comments), the alternative of using pymediainfo, a Python wrapper around MediaInfo, did work. It is simple to use, but it takes 4 times longer than my initial ffprobe approach to obtain duration, width and height, and it still needs external software, i.e. MediaInfo:

    from pymediainfo import MediaInfo
    media_info = MediaInfo.parse("myvideofile")
    for track in media_info.tracks:
       if track.track_type == 'Video':
           print("duration (millisec):", track.duration)
           print("width, height:", track.width, track.height)

    UPDATE 3: OpenCV is finally available for Python 3, and is claimed to run on Linux, Windows, and Mac! It makes it really easy, and I verified that external software, in particular FFmpeg, is NOT needed!

    First install OpenCV via pip:

    pip install opencv-python

    Run in Python:

    import cv2
    cv2video = cv2.VideoCapture( videofilename)
    height = cv2video.get(cv2.CAP_PROP_FRAME_HEIGHT)
    width  = cv2video.get(cv2.CAP_PROP_FRAME_WIDTH)
    print ("Video Dimension: height:{} width:{}".format( height, width))

    framecount = cv2video.get(cv2.CAP_PROP_FRAME_COUNT )
    frames_per_sec = cv2video.get(cv2.CAP_PROP_FPS)
    print("Video duration (sec):", framecount / frames_per_sec)

    # equally easy to get this info from images
    cv2image = cv2.imread(imagefilename, flags=cv2.IMREAD_COLOR  )
    height, width, channel  = cv2image.shape
    print ("Image Dimension: height:{} width:{}".format( height, width))

    I also needed the first frame of a video as an image, and had used ffmpeg for this to save the image to the file system. This is also easier with OpenCV:

    hasFrames, cv2image = cv2video.read()   # reads 1st frame
    cv2.imwrite("myfilename.png", cv2image) # extension defines image type

    But even better: as I need the image only in memory for use in the PyQt5 toolkit, I can read the cv2 image directly into a Qt image:

    from PyQt5.QtGui import QImage               # import needed for the conversion
    width, height = int(width), int(height)      # cv2 .get() returns floats; QImage expects ints
    bytesPerLine = 3 * width
    # my_qt_image = QImage(cv2image, width, height, bytesPerLine, QImage.Format_RGB888) # may give false colors!
    my_qt_image = QImage(cv2image.data, width, height, bytesPerLine, QImage.Format_RGB888).rgbSwapped() # correct colors on my systems

    As OpenCV is a huge program, I was concerned about timing. It turned out that OpenCV was never behind the alternatives. It takes some 100 ms to read a slide; all the rest combined never takes more than 10 ms.

    I tested this successfully on Ubuntu Mate 16.04, 18.04, and 19.04, and on two different installations of Windows 10 Pro. (I did not have a Mac available.) I am really delighted with OpenCV!

    You can see it in action in my SlideSorter program, which lets you sort images and videos, preserve the sort order, and present them as a slideshow. Available here: https://sourceforge.net/projects/slidesorter/

  • MPEG-DASH not working. MPD validation fails

    16 November 2017, by Marko36

    I am trying to serve video using MPEG-DASH. No success. I have tried the following:

    Following the instructions on webproject.org, using FFmpeg, I have created several variants of the original video and the DASH MPD manifest containing the metadata. However, the manifest does not validate using http://dashif.org/conformance.html. This validator itself is quite useless, as it provides unusable info about the error. I found in a post from 2014 that one of the errors generated by FFmpeg is capital letters in some metadata (not a critical one, but it could have been fixed years ago!). Other errors are detected, but not described. No tangible info from any of these other validators either: http://www-itec.uni-klu.ac.at/dash/?page_id=605 (produces rubbish info), https://github.com/Eyevinn/dash-validator-js (throws an exception).

    Following the instructions on mozilla.org produces the same non-working result, as the instructions are nearly identical (including the same resolution/bitrate sets), except that Mozilla omits the use of dash.js, which the rest of the internet deems necessary.

    This guide on Bitmovin, using x264 and MP4Box, does not work either. Going by the instructions, I have to re-encode the original x264 video twice. The final versions of the videos are in some cases twice the size of their intermediate versions, and the 720p video is actually larger than its 1080p, higher-bitrate counterpart. No need to go further. (Yet, this is the only way that actually produced segments.)

    I have spent 3 days on the above, read just about all there is on the web from other frustrated adopters, and have run out of options. I would really appreciate some pro tips! Thanks!