Advanced search

Media (91)

Other articles (112)

  • Customise by adding your logo, banner or background image

    5 September 2013, by

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Sites built with MediaSPIP

    2 May 2011, by

    This page showcases some of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

  • Emballe médias: what is it for?

    4 February 2011, by

    This plugin is designed to manage sites that publish documents of all types.
    It creates "médias": a "média" is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only a single document can be linked to a so-called "média" article.

On other sites (13424)

  • Open CV Codec FFMPEG Error fallback to use tag 0x7634706d/'mp4v'

    22 May 2019, by Cohen

    I am doing a filter recording and everything seems fine. The code runs, but at the end the video is not saved as an MP4. I get this error:

    OpenCV: FFMPEG: tag 0x44495658/'XVID' is not supported with codec id 12 and format 'mp4 / MP4 (MPEG-4 Part 14)'
    OpenCV: FFMPEG: fallback to use tag 0x7634706d/'mp4v'

    I am using a Mac and the code runs correctly, but nothing is saved. I tried to find more details about this error, but without luck. I use Sublime as my editor. The code also runs in Atom, though it gives this error:

    OpenCV: FFMPEG: tag 0x44495658/'XVID' is not supported with codec id 12 and format 'mp4 / MP4 (MPEG-4 Part 14)'
    OpenCV: FFMPEG: fallback to use tag 0x7634706d/'mp4v'
    2018-05-28 15:04:25.274 Python[17483:2224774] AVF: AVAssetWriter status: Cannot create file

    ....

    import numpy as np
    import cv2
    import random
    from utils import CFEVideoConf, image_resize
    import glob
    import math


    cap = cv2.VideoCapture(0)

    frames_per_seconds = 24
    save_path='saved-media/filter.mp4'
    config = CFEVideoConf(cap, filepath=save_path, res='360p')
    out = cv2.VideoWriter(save_path, config.video_type, frames_per_seconds, config.dims)


    def verify_alpha_channel(frame):
       try:
           frame.shape[3] # looking for the alpha channel
       except IndexError:
           frame = cv2.cvtColor(frame, cv2.COLOR_BGR2BGRA)
       return frame


    def apply_hue_saturation(frame, alpha, beta):
       hsv_image = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
       h, s, v = cv2.split(hsv_image)
       s.fill(199)
       v.fill(255)
       hsv_image = cv2.merge([h, s, v])

       out = cv2.cvtColor(hsv_image, cv2.COLOR_HSV2BGR)
       frame = verify_alpha_channel(frame)
       out = verify_alpha_channel(out)
       cv2.addWeighted(out, 0.25, frame, 1.0, .23, frame)
       return frame


    def apply_color_overlay(frame, intensity=0.5, blue=0, green=0, red=0):
       frame = verify_alpha_channel(frame)
       frame_h, frame_w, frame_c = frame.shape
       sepia_bgra = (blue, green, red, 1)
       overlay = np.full((frame_h, frame_w, 4), sepia_bgra, dtype='uint8')
       cv2.addWeighted(overlay, intensity, frame, 1.0, 0, frame)
       return frame


    def apply_sepia(frame, intensity=0.5):
       frame = verify_alpha_channel(frame)
       frame_h, frame_w, frame_c = frame.shape
       sepia_bgra = (20, 66, 112, 1)
       overlay = np.full((frame_h, frame_w, 4), sepia_bgra, dtype='uint8')
       cv2.addWeighted(overlay, intensity, frame, 1.0, 0, frame)
       return frame


    def alpha_blend(frame_1, frame_2, mask):
       alpha = mask/255.0
       blended = cv2.convertScaleAbs(frame_1*(1-alpha) + frame_2*alpha)
       return blended


    def apply_circle_focus_blur(frame, intensity=0.2):
       frame = verify_alpha_channel(frame)
       frame_h, frame_w, frame_c = frame.shape
       y = int(frame_h/2)
       x = int(frame_w/2)

       mask = np.zeros((frame_h, frame_w, 4), dtype='uint8')
       cv2.circle(mask, (x, y), int(y/2), (255,255,255), -1, cv2.LINE_AA)
       mask = cv2.GaussianBlur(mask, (21,21),11 )

       blured = cv2.GaussianBlur(frame, (21,21), 11)
       blended = alpha_blend(frame, blured, 255-mask)
       frame = cv2.cvtColor(blended, cv2.COLOR_BGRA2BGR)
       return frame


    def portrait_mode(frame):
       cv2.imshow('frame', frame)
       gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
       _, mask = cv2.threshold(gray, 120,255,cv2.THRESH_BINARY)

       mask = cv2.cvtColor(mask, cv2.COLOR_GRAY2BGRA)
       blured = cv2.GaussianBlur(frame, (21,21), 11)
       blended = alpha_blend(frame, blured, mask)
       frame = cv2.cvtColor(blended, cv2.COLOR_BGRA2BGR)
       return frame


    def apply_invert(frame):
       return cv2.bitwise_not(frame)

    while(True):
       # Capture frame-by-frame
       ret, frame = cap.read()
       frame = cv2.cvtColor(frame, cv2.COLOR_BGR2BGRA)
       #cv2.imshow('frame',frame)


       hue_sat = apply_hue_saturation(frame.copy(), alpha=3, beta=3)
       cv2.imshow('hue_sat', hue_sat)

       sepia = apply_sepia(frame.copy(), intensity=.8)
       cv2.imshow('sepia',sepia)

       color_overlay = apply_color_overlay(frame.copy(), intensity=.8, red=123, green=231)
       cv2.imshow('color_overlay',color_overlay)

       invert = apply_invert(frame.copy())
       cv2.imshow('invert', invert)

       blur_mask = apply_circle_focus_blur(frame.copy())
       cv2.imshow('blur_mask', blur_mask)

       portrait = portrait_mode(frame.copy())
       cv2.imshow('portrait',portrait)

       if cv2.waitKey(20) & 0xFF == ord('q'):
           break

    # When everything done, release the capture
    cap.release()
    cv2.destroyAllWindows()
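
    The last log line, "AVF: AVAssetWriter status: Cannot create file", usually means macOS could not create the output file at the requested path (for example because the saved-media/ directory does not exist), while the XVID warning means the FourCC chosen by the CFEVideoConf helper does not match the .mp4 container. Note also that the loop above never calls out.write(frame) or out.release(), so even a correctly opened writer would stay empty. Below is a minimal sketch of just the capture-and-save path, bypassing the tutorial's CFEVideoConf helper and passing an explicit 'mp4v' FourCC; it is an untested suggestion, not a confirmed fix.

    import os
    import cv2

    # Untested sketch: create the output directory first and hand VideoWriter an
    # explicit 'mp4v' FourCC, which the .mp4 container accepts without fallback.
    save_path = 'saved-media/filter.mp4'   # same target path as in the question
    os.makedirs(os.path.dirname(save_path), exist_ok=True)

    cap = cv2.VideoCapture(0)
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    fourcc = cv2.VideoWriter_fourcc(*'mp4v')
    out = cv2.VideoWriter(save_path, fourcc, 24, (width, height))

    while True:
        ret, frame = cap.read()
        if not ret:
            break
        out.write(frame)                   # frames must be written explicitly
        cv2.imshow('frame', frame)
        if cv2.waitKey(20) & 0xFF == ord('q'):
            break

    cap.release()
    out.release()                          # releasing the writer finalises the file
    cv2.destroyAllWindows()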

  • FFMPEG images to video with reverse sequence with other filters

    4 July 2019, by Archimedes Trajano

    Similar to this question: ffmpeg - convert image sequence to video with reversed order

    But I was wondering whether I can create a video loop by specifying the image range and having the reversed order appended, all in one command.

    Ideally I'd like to combine it with this one: Make an Alpha Mask video from PNG files

    What I am doing now is generating the reverse using https://stackoverflow.com/a/43301451/242042 and combining the video files together.

    However, I am thinking it would be similar to Concat a video with itself, but in reverse, using ffmpeg

    My current attempt assumes 60 images, which is why -vframes is doubled to 120:

    ffmpeg -y -framerate 20 -f image2 -i \
     running_gear/%04d.png -start_number 0 -vframes 120 \
     -filter_complex "[0:v]reverse,fifo[r];[0:v][r] concat=n=2:v=1 [v]" \
     -filter_complex alphaextract[a]
     -map 0:v -b:v 5M -crf 20 running_gear.webm
     -map [a] -b:v 5M -crf 20 running_gear-alpha.web

    Without the alpha masking, I can get it working using:

    ffmpeg -y -framerate 20 -f image2 -i running_gear/%04d.png \
     -start_number 0 -vframes 120 \
     -filter_complex "[0:v]reverse,fifo[r];[0:v][r] concat=n=2:v=1 [v]" \
     -map "[v]" -b:v 5M -crf 20 running_gear.webm

    With just the alpha masking, I can do:

    ffmpeg -y -framerate 20 -f image2 -i running_gear/%04d.png \
     -start_number 0 -vframes 120 \
     -filter_complex "[0:v]reverse,fifo[r];[0:v][r] concat=n=2:v=1 [vc];[vc]alphaextract[a]"
     -map [a] -b:v 5M -crf 20 alpha.webm

    So I am trying to arrange it so that the alpha mask is produced at the same time.

    Ultimately, though, my ideal would be to take the images, reverse them, get an alpha mask, and put the two side by side so the result can be used in Ren'py.
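
    One way to get both outputs from a single command (a sketch based on the working commands above, not verified here) is to build the forward-plus-reversed stream once, split it, and send one branch straight to the main video file while the other goes through alphaextract for the mask, mapping each labelled output to its own file:

    ffmpeg -y -framerate 20 -f image2 -start_number 0 -i running_gear/%04d.png \
     -filter_complex "[0:v]reverse,fifo[r];[0:v][r] concat=n=2:v=1,split[v][vc];[vc]alphaextract[a]" \
     -map "[v]" -b:v 5M -crf 20 running_gear.webm \
     -map "[a]" -b:v 5M -crf 20 running_gear-alpha.webm

    As in the attempts above, -vframes can still be added in front of each output if the doubled frame count needs to be capped explicitly.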

  • Python ffmpeg subprocess makes unplayable file, but is right size, and just hangs

    8 June 2021, by nadermx

    I currently have this subprocess call to ffmpeg:

    print("Starting alphamerge")
cmd = "ffmpeg -y -nostats -loglevel 0 -i %s -i %s -filter_complex '[1][0]scale2ref[mask][main];[main][mask]alphamerge' -c:v qtrle %s" % (
            file_path, temp_file, output)
process = sp.Popen(cmd, shell=True, stdout=sp.PIPE, stderr=sp.PIPE)
stdout, stderr = process.communicate()
print('after call')

if stderr:
   return "ERROR: %s" % stderr.decode("utf-8")
print("Process finished")




    


    But the process ends up producing a file of over 2 GB that is unplayable, and then it just hangs. It never prints "after call", "Process finished", or an error; it simply hangs.

    Am I calling subprocess with ffmpeg incorrectly?
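
    This does not explain why ffmpeg itself never finishes, but a somewhat more defensive way to drive it from Python is to pass an argument list instead of a shell string (so the filtergraph needs no extra quoting), run it with a timeout so a hang surfaces as an exception, and treat a non-zero exit code, rather than any stderr output, as the error condition. A minimal sketch under those assumptions, keeping the question's file_path, temp_file and output placeholders:

    import subprocess as sp

    def run_alphamerge(file_path, temp_file, output, timeout=600):
        # Argument list instead of shell=True: no quoting issues around the
        # filtergraph, and '-loglevel error' still lets real failures reach stderr.
        cmd = [
            "ffmpeg", "-y", "-nostats", "-loglevel", "error",
            "-i", file_path, "-i", temp_file,
            "-filter_complex", "[1][0]scale2ref[mask][main];[main][mask]alphamerge",
            "-c:v", "qtrle", output,
        ]
        try:
            # capture_output drains stdout/stderr; the timeout turns a silent hang
            # into a visible TimeoutExpired instead of blocking forever.
            result = sp.run(cmd, capture_output=True, text=True, timeout=timeout)
        except sp.TimeoutExpired:
            return "ERROR: ffmpeg did not finish within %s seconds" % timeout
        if result.returncode != 0:
            # ffmpeg signals failure through its exit code; stderr alone is not an error.
            return "ERROR: %s" % result.stderr
        print("Process finished")
        return output

    As an aside, qtrle (QuickTime Animation) is a lossless codec and produces very large files by design, so a multi-gigabyte output is not by itself a sign that something went wrong.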