
Media (1)

Keyword: - Tags -/net art

Other articles (67)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, provided your MediaSPIP installation is at version 0.2 or later. If needed, contact your MediaSPIP administrator to find out.

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used instead.
    The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
    These technologies make it possible to deliver video and sound both to conventional computers (...)

  • From upload to the final video [standalone version]

    31 January 2010, by

    The path of an audio or video document through SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First, you need to create a SPIP article and attach the "source" video document to it.
    At the moment this document is attached to the article, two actions are executed on top of the normal behavior: retrieval of the technical information of the file’s audio and video streams; generation of a thumbnail: extraction of a (...)
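    The two extra actions map onto standard FFmpeg tooling; as a minimal sketch (not SPIPMotion’s actual implementation, and with placeholder file names), they can be reproduced with ffprobe and ffmpeg from Python:

        import json
        import subprocess

        src = "source.mp4"  # placeholder for the attached "source" document

        # Action 1: read the technical information of the audio/video streams
        probe = subprocess.run(
            ["ffprobe", "-v", "quiet", "-print_format", "json", "-show_streams", src],
            capture_output=True, text=True, check=True)
        streams = json.loads(probe.stdout)["streams"]

        # Action 2: generate a thumbnail by extracting a single frame
        subprocess.run(
            ["ffmpeg", "-y", "-ss", "1", "-i", src, "-vframes", "1", "thumbnail.jpg"],
            check=True)

    ffprobe’s JSON output carries each stream’s codec, resolution, duration, and bitrate, which is the kind of technical information mentioned above.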

On other sites (8924)

  • FFmpeg: why we need to create temp files during conversion

    29 March 2019, by Awais fiaz

    I have been studying ffmpeg, its properties, and its usage. I have started using a pre-made PHP-based script which creates temp files during conversion. I am trying to figure out what it actually does in the given command. Why does this command create temp files during conversion, and what is their purpose?

    /usr/bin/ffmpeg -y -i /var/www/html/conversion_server/files/conversion_queue/15270581986ece98.mp4 -f mp4  -vcodec libx264 -preset superfast -r 23.97598565277  -maxrate 320000 -g 60 -crf 29 -profile:v baseline  -s 426x240 -aspect 1.77  -acodec libfdk_aac -ab 320k -ar 44100  /var/www/html/conversion_server/files/videos/2018/05/23/15270581986ece98/15270581986ece98-240.mp4 2> /var/www/html/conversion_server/files/temp/1527058199834e4.tmp

    Any help regarding this would be really appreciated.
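    For what it’s worth, the trailing "2> /var/www/html/conversion_server/files/temp/1527058199834e4.tmp" is shell redirection rather than an FFmpeg option: the .tmp file simply collects ffmpeg’s stderr (its banner, progress lines, warnings, and errors), i.e. it is a log of the conversion, while the actual output is the -240.mp4 path just before it. A minimal sketch of the same pattern from Python, with placeholder paths and the stock aac encoder standing in for libfdk_aac (which requires a specially built ffmpeg):

        import subprocess

        src = "input.mp4"            # placeholder paths
        dst = "output-240.mp4"
        log_path = "conversion.log"  # plays the role of the .tmp file

        cmd = [
            "ffmpeg", "-y", "-i", src,
            "-f", "mp4", "-vcodec", "libx264", "-preset", "superfast",
            "-maxrate", "320000", "-g", "60", "-crf", "29",
            "-profile:v", "baseline", "-s", "426x240", "-aspect", "1.77",
            "-acodec", "aac", "-ab", "320k", "-ar", "44100",
            dst,
        ]
        with open(log_path, "w") as log_file:
            # stderr=log_file is the Python equivalent of the shell's `2> file.tmp`
            subprocess.run(cmd, stderr=log_file, check=True)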

  • Command for putting a watermark on a video

    28 June 2018, by Yupi

    I tried to put a watermark on a video but the FFmpeg command won’t execute, and the error code is 3037. I ran the same code for trimming a video and the video was trimmed successfully, so there is no issue with the input path or the output path; I also have ic_watermark.png in the assets folder. I tried with an image from Drawable but got the same error code.

    So here is the command I tried to run to put the watermark in the bottom-right corner:

    String[] cmd = new String[]{"-i", videoInputPath, "-i", imagePath, "-filter_complex", "overlay=main_w-overlay_w-5:main_h-overlay_h-5", videoOutPath };

    and this is the whole method :

    private void executeFFmepg(String inputPath, String outputPath, String customCommand) {
        final Command command = videoKit.createCommand()
                .overwriteOutput()
                .inputPath(inputPath)
                .outputPath(outputPath)
                .customCommand(customCommand)
                .experimentalFlag()
                .build();
        new AsyncCommandExecutor(command, this).execute();
    }

    I used a library based on FFmpeg: https://github.com/inFullMobile/videokit-ffmpeg-android
    and its description says that it basically invokes FFmpeg main() with CLI arguments.

    This is what I get from the log:

    ffmpeg version n3.0.1 Copyright (c) 2000-2016 the FFmpeg developers
     built with gcc 4.8 (GCC)
     configuration: --target-os=linux --cross-prefix=/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/bin/arm-linux-androideabi- --arch=arm --cpu=cortex-a8 --enable-runtime-cpudetect --sysroot=/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/sysroot --enable-pic --enable-libx264 --enable-libass --enable-libfreetype --enable-libfribidi --enable-libmp3lame --enable-fontconfig --enable-pthreads --disable-debug --disable-ffserver --enable-version3 --enable-hardcoded-tables --disable-ffplay --disable-ffprobe --enable-gpl --enable-yasm --disable-doc --disable-shared --enable-static --pkg-config=/home/vagrant/SourceCode/ffmpeg-android/ffmpeg-pkg-config --prefix=/home/vagrant/SourceCode/ffmpeg-android/build/armeabi-v7a --extra-cflags='-I/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/include -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -fno-strict-overflow -fstack-protector-all' --extra-ldflags='-L/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/lib -Wl,-z,relro -Wl,-z,now -pie' --extra-libs='-lpng -lexpat -lm' --extra-cxxflags=
     libavutil      55. 17.103 / 55. 17.103
     libavcodec     57. 24.102 / 57. 24.102
     libavformat    57. 25.100 / 57. 25.100
     libavdevice    57.  0.101 / 57.  0.101
     libavfilter     6. 31.100 /  6. 31.100
     libswscale      4.  0.100 /  4.  0.100
     libswresample   2.  0.101 /  2.  0.101
     libpostproc    54.  0.100 / 54.  0.100
    05-29 15:35:08.591 24037-24037/com.cleatchaser D/FFmpeg: Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/storage/emulated/0/DCIM/Camera/20180406_140202.mp4':
     Metadata:
       major_brand     : isom
       minor_version   : 0
    05-29 15:35:08.596 24037-24037/com.cleatchaser D/FFmpeg:     compatible_brands: isom3gp4
       creation_time   : 2018-04-06 12:02:25
     Duration: 00:00:15.06, start: 0.000000, bitrate: 17185 kb/s
       Stream #0:0(eng): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1920x1080, 17029 kb/s, 29.95 fps, 30 tbr, 90k tbn, 180k tbc (default)
    05-29 15:35:08.601 24037-24037/com.cleatchaser D/FFmpeg:     Metadata:
         rotate          : 90
         creation_time   : 2018-04-06 12:02:25
         handler_name    : VideoHandle
       Side data:
    05-29 15:35:08.606 24037-24037/com.cleatchaser D/FFmpeg:       displaymatrix: rotation of -90.00 degrees
       Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 123 kb/s (default)
       Metadata:
         creation_time   : 2018-04-06 12:02:25
    05-29 15:35:08.611 24037-24037/com.cleatchaser D/FFmpeg:       handler_name    : SoundHandle
    05-29 15:35:08.756 24037-24037/com.cleatchaser D/FFmpeg: Input #1, png_pipe, from '/storage/emulated/0/watermark.png':
     Duration: N/A, bitrate: N/A
       Stream #1:0: Video: png, rgba(pc), 856x1324, 25 tbr, 25 tbn, 25 tbc

    I tried many answers from similar questions but none of them worked.

    Is it possible that the error is in the quotes?

    I don’t have experience with FFmpeg, so any help would be much appreciated. Thanks
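    Regarding the question about quotes: since the library passes the array straight to FFmpeg’s main() as argv entries, no shell is involved, so the whole filter must be a single array element with no extra embedded quotes, which the command array above already satisfies. For comparison, a minimal sketch of the same invocation from Python (placeholder paths, an ffmpeg binary assumed on PATH):

        import subprocess

        video_in = "input.mp4"       # placeholder paths
        watermark = "watermark.png"
        video_out = "output.mp4"

        subprocess.run([
            "ffmpeg", "-y",
            "-i", video_in,
            "-i", watermark,
            # One argv element: no quoting needed, because no shell parses it
            "-filter_complex", "overlay=main_w-overlay_w-5:main_h-overlay_h-5",
            video_out,
        ], check=True)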

  • OpenCV codec FFMPEG error: fallback to use tag 0x7634706d/'mp4v'

    22 May 2019, by Cohen

    I am doing a filter recording and everything seems fine. The code runs, but at the end the video is not saved as MP4. I get this error:

    OpenCV: FFMPEG: tag 0x44495658/'XVID' is not supported with codec id 12 and format 'mp4 / MP4 (MPEG-4 Part 14)'
    OpenCV: FFMPEG: fallback to use tag 0x7634706d/'mp4v'

    I am using a Mac and the code runs correctly, but it does not save. I tried to find more details about this error but wasn’t so fortunate. I use Sublime as my editor. The code does run from Atom, though, and gives this error:

    OpenCV: FFMPEG: tag 0x44495658/'XVID' is not supported with codec id 12 and format 'mp4 / MP4 (MPEG-4 Part 14)'
    OpenCV: FFMPEG: fallback to use tag 0x7634706d/'mp4v'
    2018-05-28 15:04:25.274 Python[17483:2224774] AVF: AVAssetWriter status: Cannot create file

    ....

    import numpy as np
    import cv2
    import random
    from utils import CFEVideoConf, image_resize
    import glob
    import math


    cap = cv2.VideoCapture(0)

    frames_per_seconds = 24
    save_path='saved-media/filter.mp4'
    config = CFEVideoConf(cap, filepath=save_path, res='360p')
    out = cv2.VideoWriter(save_path, config.video_type, frames_per_seconds, config.dims)


    def verify_alpha_channel(frame):
       # Add an alpha channel if the frame does not already have one;
       # frame.shape is (height, width, channels), so test the channel count
       if frame.ndim < 3 or frame.shape[2] != 4:
           frame = cv2.cvtColor(frame, cv2.COLOR_BGR2BGRA)
       return frame


    def apply_hue_saturation(frame, alpha, beta):
       hsv_image = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
       h, s, v = cv2.split(hsv_image)
       s.fill(199)
       v.fill(255)
       hsv_image = cv2.merge([h, s, v])

       out = cv2.cvtColor(hsv_image, cv2.COLOR_HSV2BGR)
       frame = verify_alpha_channel(frame)
       out = verify_alpha_channel(out)
       cv2.addWeighted(out, 0.25, frame, 1.0, .23, frame)
       return frame


    def apply_color_overlay(frame, intensity=0.5, blue=0, green=0, red=0):
       frame = verify_alpha_channel(frame)
       frame_h, frame_w, frame_c = frame.shape
       sepia_bgra = (blue, green, red, 1)
       overlay = np.full((frame_h, frame_w, 4), sepia_bgra, dtype='uint8')
       cv2.addWeighted(overlay, intensity, frame, 1.0, 0, frame)
       return frame


    def apply_sepia(frame, intensity=0.5):
       frame = verify_alpha_channel(frame)
       frame_h, frame_w, frame_c = frame.shape
       sepia_bgra = (20, 66, 112, 1)
       overlay = np.full((frame_h, frame_w, 4), sepia_bgra, dtype='uint8')
       cv2.addWeighted(overlay, intensity, frame, 1.0, 0, frame)
       return frame


    def alpha_blend(frame_1, frame_2, mask):
       alpha = mask/255.0
       blended = cv2.convertScaleAbs(frame_1*(1-alpha) + frame_2*alpha)
       return blended


    def apply_circle_focus_blur(frame, intensity=0.2):
       frame = verify_alpha_channel(frame)
       frame_h, frame_w, frame_c = frame.shape
       y = int(frame_h/2)
       x = int(frame_w/2)

       mask = np.zeros((frame_h, frame_w, 4), dtype='uint8')
       cv2.circle(mask, (x, y), int(y/2), (255,255,255), -1, cv2.LINE_AA)
       mask = cv2.GaussianBlur(mask, (21,21),11 )

       blured = cv2.GaussianBlur(frame, (21,21), 11)
       blended = alpha_blend(frame, blured, 255-mask)
       frame = cv2.cvtColor(blended, cv2.COLOR_BGRA2BGR)
       return frame


    def portrait_mode(frame):
       cv2.imshow('frame', frame)
       gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
       _, mask = cv2.threshold(gray, 120,255,cv2.THRESH_BINARY)

       mask = cv2.cvtColor(mask, cv2.COLOR_GRAY2BGRA)
       blured = cv2.GaussianBlur(frame, (21,21), 11)
       blended = alpha_blend(frame, blured, mask)
       frame = cv2.cvtColor(blended, cv2.COLOR_BGRA2BGR)
       return frame


    def apply_invert(frame):
       return cv2.bitwise_not(frame)

    while(True):
       # Capture frame-by-frame
       ret, frame = cap.read()
       frame = cv2.cvtColor(frame, cv2.COLOR_BGR2BGRA)
       #cv2.imshow('frame',frame)


       hue_sat = apply_hue_saturation(frame.copy(), alpha=3, beta=3)
       cv2.imshow('hue_sat', hue_sat)

       sepia = apply_sepia(frame.copy(), intensity=.8)
       cv2.imshow('sepia',sepia)

       color_overlay = apply_color_overlay(frame.copy(), intensity=.8, red=123, green=231)
       cv2.imshow('color_overlay',color_overlay)

       invert = apply_invert(frame.copy())
       cv2.imshow('invert', invert)

       blur_mask = apply_circle_focus_blur(frame.copy())
       cv2.imshow('blur_mask', blur_mask)

       portrait = portrait_mode(frame.copy())
       cv2.imshow('portrait',portrait)

       # Hand the frame to the writer; without out.write() the VideoWriter
       # never receives any frames and the .mp4 stays empty. It expects BGR
       # frames at config.dims; swap in a filtered frame to record a filter.
       out.write(cv2.resize(cv2.cvtColor(frame, cv2.COLOR_BGRA2BGR), config.dims))

       if cv2.waitKey(20) & 0xFF == ord('q'):
           break

    # When everything is done, release the capture and finalize the output file
    cap.release()
    out.release()  # without release() the MP4 container is never finalized
    cv2.destroyAllWindows()
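    For context on the warning quoted above: OpenCV asked FFmpeg to store an XVID-tagged stream inside an .mp4 container, which the MP4 format does not accept, so FFmpeg fell back to the 'mp4v' tag; the substitution itself is only a warning. Picking a fourcc that matches the container avoids it entirely; a minimal sketch (device index, frame rate, and frame size are placeholders):

        import cv2

        cap = cv2.VideoCapture(0)
        # 'mp4v' (MPEG-4 Part 2) is accepted by the MP4 container, so no
        # fallback warning is printed; 'XVID' pairs with .avi instead
        fourcc = cv2.VideoWriter_fourcc(*'mp4v')
        out = cv2.VideoWriter('saved-media/filter.mp4', fourcc, 24, (640, 480))

    The "AVAssetWriter status: Cannot create file" line from the Atom run points at a different problem: it usually means the relative saved-media/ directory does not exist under the process’s working directory, so no file can be created there.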