
Other articles (105)

  • The plugin: Podcasts.

    14 July 2010

    The problem of podcasting is, once again, a problem that reveals the state of standardisation of data transport on the Internet.
    Two interesting formats exist: the one developed by Apple, heavily geared towards the use of iTunes, whose SPEC is here; and the "Media RSS Module" format, which is more "free" and is notably backed by Yahoo and the Miro software.
    File types supported in the feeds
    Apple's format only allows the following formats in its feeds: .mp3 audio/mpeg .m4a audio/x-m4a .mp4 (...)

  • Authorizations overridden by plugins

    27 April 2010

    Mediaspip core
    autoriser_auteur_modifier() so that visitors can edit their own information on the authors page

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed Médiaspip is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

On other sites (7813)

  • FFMPEG images to video with reverse sequence with other filters

    4 July 2019, by Archimedes Trajano

    Similar to this question: ffmpeg - convert image sequence to video with reversed order.

    But I was wondering if I can create a video loop by specifying the image range and have the reverse order appended in one command.

    Ideally I’d like to combine it with this: Make an Alpha Mask video from PNG files.

    What I am doing now is generating the reverse using https://stackoverflow.com/a/43301451/242042 and combining the video files together.

    However, I am thinking it would be similar to Concat a video with itself, but in reverse, using ffmpeg

    My current attempt assumes 60 images, which makes -vframes twice that (120):

    ffmpeg -y -framerate 20 -f image2 -i \
     running_gear/%04d.png -start_number 0 -vframes 120 \
     -filter_complex "[0:v]reverse,fifo[r];[0:v][r] concat=n=2:v=1 [v]" \
     -filter_complex alphaextract[a] \
     -map 0:v -b:v 5M -crf 20 running_gear.webm \
     -map [a] -b:v 5M -crf 20 running_gear-alpha.web

    Without the alpha masking I can get it working using

    ffmpeg -y -framerate 20 -f image2 -i running_gear/%04d.png \
     -start_number 0 -vframes 120 \
     -filter_complex "[0:v]reverse,fifo[r];[0:v][r] concat=n=2:v=1 [v]" \
     -map "[v]" -b:v 5M -crf 20 running_gear.webm

    With just the alpha masking I can do

    ffmpeg -y -framerate 20 -f image2 -i running_gear/%04d.png \
     -start_number 0 -vframes 120 \
     -filter_complex "[0:v]reverse,fifo[r];[0:v][r] concat=n=2:v=1 [vc];[vc]alphaextract[a]"
     -map [a] -b:v 5M -crf 20 alpha.webm

    So I am trying to do it so that the alpha mask is extracted at the same time (a combined sketch follows below).

    Ultimately, my ideal would be to take the images, reverse them, get an alpha mask, and put the two side by side so the result can be used in Ren’Py.
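
    A minimal, untested sketch of what a combined command might look like, reusing the input pattern, bitrate and output names from the question: the reversed concat result is split so that one branch keeps the colour video and the other feeds alphaextract, and each branch is mapped to its own output file.

    # reverse + concat once, then split into a colour branch and an alpha branch
    ffmpeg -y -framerate 20 -f image2 -start_number 0 -i running_gear/%04d.png \
     -filter_complex "[0:v]reverse,fifo[r];[0:v][r] concat=n=2:v=1 [vc];[vc]split[v][vm];[vm]alphaextract[a]" \
     -map "[v]" -vframes 120 -b:v 5M -crf 20 running_gear.webm \
     -map "[a]" -vframes 120 -b:v 5M -crf 20 running_gear-alpha.webm

    The split is needed because a filtergraph output label can only be consumed once; exact option placement (for example -vframes per output) may need adjusting.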

  • Open CV Codec FFMPEG Error fallback to use tag 0x7634706d/'mp4v'

    22 May 2019, by Cohen

    I am doing a filter recording and all is fine. The code runs, but at the end the video is not saved as MP4. I get this error:

    OpenCV: FFMPEG: tag 0x44495658/'XVID' is not supported with codec id 12 and format 'mp4 / MP4 (MPEG-4 Part 14)'
    OpenCV: FFMPEG: fallback to use tag 0x7634706d/'mp4v'

    I am using a Mac and the code runs correctly, but it does not save. I tried to find more details about this error, but wasn’t so fortunate. I use Sublime as my editor. The code runs in Atom too, though, but gives this error:

    OpenCV: FFMPEG: tag 0x44495658/'XVID' is not supported with codec id 12 and format 'mp4 / MP4 (MPEG-4 Part 14)'
    OpenCV: FFMPEG: fallback to use tag 0x7634706d/'mp4v'
    2018-05-28 15:04:25.274 Python[17483:2224774] AVF: AVAssetWriter status: Cannot create file

    ....

    import numpy as np
    import cv2
    import random
    from utils import CFEVideoConf, image_resize
    import glob
    import math


    cap = cv2.VideoCapture(0)

    frames_per_seconds = 24
    save_path='saved-media/filter.mp4'
    config = CFEVideoConf(cap, filepath=save_path, res='360p')
    out = cv2.VideoWriter(save_path, config.video_type, frames_per_seconds, config.dims)


    def verify_alpha_channel(frame):
       try:
           frame.shape[3] # looking for the alpha channel
       except IndexError:
           frame = cv2.cvtColor(frame, cv2.COLOR_BGR2BGRA)
       return frame


    def apply_hue_saturation(frame, alpha, beta):
       hsv_image = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
       h, s, v = cv2.split(hsv_image)
       s.fill(199)
       v.fill(255)
       hsv_image = cv2.merge([h, s, v])

       out = cv2.cvtColor(hsv_image, cv2.COLOR_HSV2BGR)
       frame = verify_alpha_channel(frame)
       out = verify_alpha_channel(out)
       cv2.addWeighted(out, 0.25, frame, 1.0, .23, frame)
       return frame


    def apply_color_overlay(frame, intensity=0.5, blue=0, green=0, red=0):
       frame = verify_alpha_channel(frame)
       frame_h, frame_w, frame_c = frame.shape
       sepia_bgra = (blue, green, red, 1)
       overlay = np.full((frame_h, frame_w, 4), sepia_bgra, dtype='uint8')
       cv2.addWeighted(overlay, intensity, frame, 1.0, 0, frame)
       return frame


    def apply_sepia(frame, intensity=0.5):
       frame = verify_alpha_channel(frame)
       frame_h, frame_w, frame_c = frame.shape
       sepia_bgra = (20, 66, 112, 1)
       overlay = np.full((frame_h, frame_w, 4), sepia_bgra, dtype='uint8')
       cv2.addWeighted(overlay, intensity, frame, 1.0, 0, frame)
       return frame


    def alpha_blend(frame_1, frame_2, mask):
       alpha = mask/255.0
       blended = cv2.convertScaleAbs(frame_1*(1-alpha) + frame_2*alpha)
       return blended


    def apply_circle_focus_blur(frame, intensity=0.2):
       frame = verify_alpha_channel(frame)
       frame_h, frame_w, frame_c = frame.shape
       y = int(frame_h/2)
       x = int(frame_w/2)

       mask = np.zeros((frame_h, frame_w, 4), dtype='uint8')
       cv2.circle(mask, (x, y), int(y/2), (255,255,255), -1, cv2.LINE_AA)
       mask = cv2.GaussianBlur(mask, (21,21),11 )

       blured = cv2.GaussianBlur(frame, (21,21), 11)
       blended = alpha_blend(frame, blured, 255-mask)
       frame = cv2.cvtColor(blended, cv2.COLOR_BGRA2BGR)
       return frame


    def portrait_mode(frame):
       cv2.imshow('frame', frame)
       gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
       _, mask = cv2.threshold(gray, 120,255,cv2.THRESH_BINARY)

       mask = cv2.cvtColor(mask, cv2.COLOR_GRAY2BGRA)
       blured = cv2.GaussianBlur(frame, (21,21), 11)
       blended = alpha_blend(frame, blured, mask)
       frame = cv2.cvtColor(blended, cv2.COLOR_BGRA2BGR)
       return frame


    def apply_invert(frame):
       return cv2.bitwise_not(frame)

    while(True):
       # Capture frame-by-frame
       ret, frame = cap.read()
       frame = cv2.cvtColor(frame, cv2.COLOR_BGR2BGRA)
       #cv2.imshow('frame',frame)


       hue_sat = apply_hue_saturation(frame.copy(), alpha=3, beta=3)
       cv2.imshow('hue_sat', hue_sat)

       sepia = apply_sepia(frame.copy(), intensity=.8)
       cv2.imshow('sepia',sepia)

       color_overlay = apply_color_overlay(frame.copy(), intensity=.8, red=123, green=231)
       cv2.imshow('color_overlay',color_overlay)

       invert = apply_invert(frame.copy())
       cv2.imshow('invert', invert)

       blur_mask = apply_circle_focus_blur(frame.copy())
       cv2.imshow('blur_mask', blur_mask)

       portrait = portrait_mode(frame.copy())
       cv2.imshow('portrait',portrait)

       if cv2.waitKey(20) & 0xFF == ord('q'):
           break

    # When everything done, release the capture
    cap.release()
    cv2.destroyAllWindows()
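
    In the code as posted, the VideoWriter out is created but never written to or released, and the error above shows that the fourcc being used ('XVID', presumably coming from the CFEVideoConf helper) does not match the .mp4 container. A minimal, untested sketch of a writer that avoids both issues, independent of the helper classes above (the output path and frame rate come from the question; everything else is illustrative):

    import cv2

    cap = cv2.VideoCapture(0)
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

    # 'mp4v' is a fourcc the MP4 container accepts (unlike 'XVID');
    # the saved-media directory must already exist, VideoWriter will not create it
    fourcc = cv2.VideoWriter_fourcc(*'mp4v')
    out = cv2.VideoWriter('saved-media/filter.mp4', fourcc, 24, (width, height))

    while True:
        ret, frame = cap.read()
        if not ret:
            break
        out.write(frame)  # frames must actually be written, as BGR at the declared size
        cv2.imshow('frame', frame)
        if cv2.waitKey(20) & 0xFF == ord('q'):
            break

    out.release()  # finalizes the MP4; without this the file stays empty or unplayable
    cap.release()
    cv2.destroyAllWindows()

    The AVAssetWriter "Cannot create file" line in the log often indicates that the target directory is missing or not writable, which is worth checking as well.
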
  • Extract alpha from video ffmpeg

    23 October 2018, by Yarik Denisyk

    I want to overlay a transparent video on a background image. I have a video where the top half is the RGB object and the bottom half is an alpha mask.

    At the moment I do the following steps:

    1) Extract all frames from the video and save them to a folder

    2) Split each frame into a top and a bottom half bitmap

    3) Composite the top bitmap with the bottom mask to extract the alpha and get a frame with a transparent background

    4) Draw each frame onto the background and save it to a folder

    5) Create a video using FFmpeg

    The problem is that steps 2 to 5 are very slow. Is there another way to overlay a transparent video on a background image?
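
    One way to avoid the per-frame round trip through image folders is to do everything in a single ffmpeg filtergraph: crop the two halves, merge the bottom half in as the alpha plane with alphamerge, and overlay the result on the looped background image. A minimal, untested sketch, where stacked.mp4 and background.png are illustrative names rather than files from the question:

    ffmpeg -i stacked.mp4 -loop 1 -i background.png \
     -filter_complex "[0:v]split[top][bot];[top]crop=iw:ih/2:0:0[rgb];[bot]crop=iw:ih/2:0:ih/2,format=gray[mask];[rgb][mask]alphamerge[fg];[1:v][fg]overlay=shortest=1[out]" \
     -map "[out]" -c:v libx264 -pix_fmt yuv420p output.mp4

    This keeps the whole pipeline in one process, so no intermediate frames have to be written to disk.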