
Other articles (21)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used instead.
    The HTML5 player used was created specifically for MediaSPIP: it is fully customizable graphically to match a chosen theme.
    These technologies make it possible to deliver video and sound both on conventional computers (...)

  • Managing the farm

    2 March 2010

    The farm as a whole is managed by "super admins".
    Certain settings can be adjusted to accommodate the needs of the different channels.
    To begin with, it uses the "Gestion de mutualisation" plugin.

  • Supporting all media types

    13 April 2011

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

On other sites (3984)

  • ffmpeg is dropping frames of second video in concat

    17 June 2018, by bugwheels94

    I did a screen share, and while sharing I minimized the sharing window, so the recorded video has something weird going on: seeking does not work in mplayer but works in VLC. However, when I run the concat:

    ffmpeg -i test.webm -f lavfi -i color=c=blue:s=1366x768:d=71 \
    -filter_complex "[0]scale=w='min(1366\, iw)':h='min(768\, ih)':force_original_aspect_ratio=decrease,\
    pad=width=1366:height=768:x=(ow-iw)/2:y=(oh-ih)/2,setsar=1/1[v1];\
    [1][v1]concat=n=2[v]" -map [v] -qscale 7 result.mp4

    the 155-second webm video is not shown in full. Ideally the result should be 71 + 156 = 227 seconds, but I am getting 155 seconds, and the webm part of the concat is cropped.

    If I reverse the concat order to [v1][1]concat=n=2[v], then everything is perfect at 227 seconds.

    Download test.webm

    Could you please tell me what might be wrong here? Thanks!
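
    For reference, here is the same command with the working concat order, wrapped in a minimal Python subprocess call; this is only a convenience sketch using the exact file names, sizes, and filter string from above, with the two concat inputs swapped as described:

    import subprocess

    # Filter graph from above with the segments in playback order:
    # the scaled webm ([v1]) first, then the 71-second blue clip ([1]).
    filter_graph = (
        "[0]scale=w='min(1366\\, iw)':h='min(768\\, ih)':force_original_aspect_ratio=decrease,"
        "pad=width=1366:height=768:x=(ow-iw)/2:y=(oh-ih)/2,setsar=1/1[v1];"
        "[v1][1]concat=n=2[v]"
    )
    subprocess.run([
        "ffmpeg", "-i", "test.webm",
        "-f", "lavfi", "-i", "color=c=blue:s=1366x768:d=71",
        "-filter_complex", filter_graph,
        "-map", "[v]", "-qscale", "7", "result.mp4",
    ], check=True)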

  • OpenCV Codec FFMPEG Error: fallback to use tag 0x7634706d/'mp4v'

    22 May 2019, by Cohen

    I am doing a filtered recording and all is fine: the code runs, but at the end the video is not saved as MP4. I get this error:

    OpenCV: FFMPEG: tag 0x44495658/'XVID' is not supported with codec id 12 and format 'mp4 / MP4 (MPEG-4 Part 14)'
    OpenCV: FFMPEG: fallback to use tag 0x7634706d/'mp4v'

    I am on a Mac and the code runs correctly, but it is not saving. I tried to find more details about this error, but wasn't so fortunate. I use Sublime as my editor. The code does run from Atom, though, but gives this error:

    OpenCV: FFMPEG: tag 0x44495658/'XVID' is not supported with codec id 12 and format 'mp4 / MP4 (MPEG-4 Part 14)'
    OpenCV: FFMPEG: fallback to use tag 0x7634706d/'mp4v'
    2018-05-28 15:04:25.274 Python[17483:2224774] AVF: AVAssetWriter status: Cannot create file

    ....

    import numpy as np
    import cv2
    import random
    from utils import CFEVideoConf, image_resize
    import glob
    import math


    cap = cv2.VideoCapture(0)

    frames_per_seconds = 24
    save_path='saved-media/filter.mp4'
    config = CFEVideoConf(cap, filepath=save_path, res='360p')
    out = cv2.VideoWriter(save_path, config.video_type, frames_per_seconds, config.dims)


    def verify_alpha_channel(frame):
       # Convert to BGRA only if the frame is still 3-channel BGR
       # (frame.shape[3] always raises IndexError, since shape has
       # at most three entries; the channel count lives in shape[2]).
       if frame.shape[2] == 3:
           frame = cv2.cvtColor(frame, cv2.COLOR_BGR2BGRA)
       return frame


    def apply_hue_saturation(frame, alpha, beta):
       hsv_image = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
       h, s, v = cv2.split(hsv_image)
       s.fill(199)
       v.fill(255)
       hsv_image = cv2.merge([h, s, v])

       out = cv2.cvtColor(hsv_image, cv2.COLOR_HSV2BGR)
       frame = verify_alpha_channel(frame)
       out = verify_alpha_channel(out)
       cv2.addWeighted(out, 0.25, frame, 1.0, .23, frame)
       return frame


    def apply_color_overlay(frame, intensity=0.5, blue=0, green=0, red=0):
       frame = verify_alpha_channel(frame)
       frame_h, frame_w, frame_c = frame.shape
       sepia_bgra = (blue, green, red, 1)
       overlay = np.full((frame_h, frame_w, 4), sepia_bgra, dtype='uint8')
       cv2.addWeighted(overlay, intensity, frame, 1.0, 0, frame)
       return frame


    def apply_sepia(frame, intensity=0.5):
       frame = verify_alpha_channel(frame)
       frame_h, frame_w, frame_c = frame.shape
       sepia_bgra = (20, 66, 112, 1)
       overlay = np.full((frame_h, frame_w, 4), sepia_bgra, dtype='uint8')
       cv2.addWeighted(overlay, intensity, frame, 1.0, 0, frame)
       return frame


    def alpha_blend(frame_1, frame_2, mask):
       alpha = mask/255.0
       blended = cv2.convertScaleAbs(frame_1*(1-alpha) + frame_2*alpha)
       return blended


    def apply_circle_focus_blur(frame, intensity=0.2):
       frame = verify_alpha_channel(frame)
       frame_h, frame_w, frame_c = frame.shape
       y = int(frame_h/2)
       x = int(frame_w/2)

       mask = np.zeros((frame_h, frame_w, 4), dtype='uint8')
       cv2.circle(mask, (x, y), int(y/2), (255,255,255), -1, cv2.LINE_AA)
       mask = cv2.GaussianBlur(mask, (21,21),11 )

       blured = cv2.GaussianBlur(frame, (21,21), 11)
       blended = alpha_blend(frame, blured, 255-mask)
       frame = cv2.cvtColor(blended, cv2.COLOR_BGRA2BGR)
       return frame


    def portrait_mode(frame):
       cv2.imshow('frame', frame)
       gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
       _, mask = cv2.threshold(gray, 120,255,cv2.THRESH_BINARY)

       mask = cv2.cvtColor(mask, cv2.COLOR_GRAY2BGRA)
       blured = cv2.GaussianBlur(frame, (21,21), 11)
       blended = alpha_blend(frame, blured, mask)
       frame = cv2.cvtColor(blended, cv2.COLOR_BGRA2BGR)
       return frame


    def apply_invert(frame):
       return cv2.bitwise_not(frame)

    while(True):
       # Capture frame-by-frame
       ret, frame = cap.read()
       frame = cv2.cvtColor(frame, cv2.COLOR_BGR2BGRA)
       #cv2.imshow('frame',frame)


       hue_sat = apply_hue_saturation(frame.copy(), alpha=3, beta=3)
       cv2.imshow('hue_sat', hue_sat)

       sepia = apply_sepia(frame.copy(), intensity=.8)
       cv2.imshow('sepia',sepia)

       color_overlay = apply_color_overlay(frame.copy(), intensity=.8, red=123, green=231)
       cv2.imshow('color_overlay',color_overlay)

       invert = apply_invert(frame.copy())
       cv2.imshow('invert', invert)

       blur_mask = apply_circle_focus_blur(frame.copy())
       cv2.imshow('blur_mask', blur_mask)

       portrait = portrait_mode(frame.copy())
       cv2.imshow('portrait',portrait)

       if cv2.waitKey(20) & 0xFF == ord('q'):
           break

    # When everything done, release the capture
    cap.release()
    cv2.destroyAllWindows()
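
    For what it's worth, here is a minimal sketch of a writer setup that avoids both reported errors. The 'mp4v' FourCC and the pre-created output folder are assumptions about the cause rather than confirmed fixes, and out.write is included because the loop above never writes to out:

    import os
    import cv2

    save_path = 'saved-media/filter.mp4'
    # AVAssetWriter "Cannot create file" is often just a missing folder.
    os.makedirs(os.path.dirname(save_path), exist_ok=True)

    cap = cv2.VideoCapture(0)
    # Ask for an MP4-native FourCC instead of XVID.
    fourcc = cv2.VideoWriter_fourcc(*'mp4v')
    out = cv2.VideoWriter(save_path, fourcc, 24, (640, 480))

    while True:
        ret, frame = cap.read()
        if not ret:
            break
        frame = cv2.resize(frame, (640, 480))  # must match the writer dims
        out.write(frame)  # frames must actually be written to the file
        if cv2.waitKey(20) & 0xFF == ord('q'):
            break

    cap.release()
    out.release()  # finalizes the container
    cv2.destroyAllWindows()
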
  • How do I write audio and video to the same file using FFMPEG and C?

    29 juin 2018, par benwiz

    I am consuming an audio file and a video file using ffmpeg in a C program, and I am modifying both the audio and the video data. In the working code below, I write each of these streams to its own file. How can I write both streams to the same file?

    #include <stdio.h>   // FILE, popen, fread, fwrite
    #include <stdint.h>  // int16_t
    #include <math.h>    // sin, M_PI

    // Video resolution
    #define W 1280
    #define H 720

    // Allocate a buffer to store one video frame
    unsigned char video_frame[H][W][3] = {0};

    int main()
    {
       // Audio pipes
       FILE *audio_pipein = popen("ffmpeg -i data/daft-punk.mp3 -f s16le -ac 1 -", "r");
       FILE *audio_pipeout = popen("ffmpeg -y -f s16le -ar 44100 -ac 1 -i - out/daft-punk.mp3", "w");

       // Video pipes
       FILE *video_pipein = popen("ffmpeg -i data/daft-punk.mp4 -f image2pipe -vcodec rawvideo -pix_fmt rgb24 -", "r");
       FILE *video_pipeout = popen("ffmpeg -y -f rawvideo -vcodec rawvideo -pix_fmt rgb24 -s 1280x720 -r 25 -i - -f mp4 -q:v 5 -an -vcodec mpeg4 out/daft-punk.mp4", "w");

       // Audio vars
       int16_t audio_sample;
       int audio_count;
       int audio_n = 0;

       // Video vars
       int x = 0;
       int y = 0;
       int video_count = 0;

       // Read, modify, and write one audio_sample and video_frame at a time
       while (1)
       {
           // Audio
           audio_count = fread(&audio_sample, 2, 1, audio_pipein); // read one 2-byte audio_sample
           if (audio_count == 1)
           {
               ++audio_n;
               audio_sample = audio_sample * sin(audio_n * 5.0 * 2 * M_PI / 44100.0);
               fwrite(&audio_sample, 2, 1, audio_pipeout);
           }

           // Video
           video_count = fread(video_frame, 1, H * W * 3, video_pipein); // Read a frame from the input pipe into the buffer
           if (video_count == H * W * 3)                                 // Only modify and write if frame exists
           {
               for (y = 0; y < H; ++y)     // Process this frame
                   for (x = 0; x < W; ++x) // Invert each colour component in every pixel
                   {
                       video_frame[y][x][0] = 255 - video_frame[y][x][0]; // red
                       video_frame[y][x][1] = 255 - video_frame[y][x][1]; // green
                       video_frame[y][x][2] = 255 - video_frame[y][x][2]; // blue
                   }
               fwrite(video_frame, 1, H * W * 3, video_pipeout); // Write this frame to the output pipe
           }

           // Break if both complete
           if (audio_count != 1 && video_count != H * W * 3)
               break;
       }

       // Close audio pipes
       pclose(audio_pipein);
       pclose(audio_pipeout);

       // Close video pipes
       fflush(video_pipein);
       fflush(video_pipeout);
       pclose(video_pipein);
       pclose(video_pipeout);

       return 0;
    }

    I took the base for this code from this article.

    Thanks!
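
    One common approach, sketched below in Python for brevity (a model under assumptions, not code from the question or the article): keep the two decoding pipes, but replace the two output pipes with a single ffmpeg process that reads the modified audio and video from two named pipes (FIFOs) and muxes them into one file. The FIFO paths and the dummy silence/black-frame payloads are hypothetical; the stream parameters mirror the question's pipes:

    import os
    import subprocess
    import tempfile
    import threading

    tmpdir = tempfile.mkdtemp()
    audio_fifo = os.path.join(tmpdir, 'audio.pcm')
    video_fifo = os.path.join(tmpdir, 'video.rgb')
    os.mkfifo(audio_fifo)  # POSIX named pipes
    os.mkfifo(video_fifo)
    os.makedirs('out', exist_ok=True)

    # A single ffmpeg reads both FIFOs and muxes them into one MP4.
    mux = subprocess.Popen([
        'ffmpeg', '-y',
        '-f', 's16le', '-ar', '44100', '-ac', '1', '-i', audio_fifo,
        '-f', 'rawvideo', '-pix_fmt', 'rgb24', '-s', '1280x720', '-r', '25',
        '-i', video_fifo,
        '-c:a', 'aac', '-c:v', 'mpeg4', '-q:v', '5',
        'out/combined.mp4',
    ])

    def feed(path, chunk, count):
        # Opening a FIFO for writing blocks until ffmpeg opens the read end.
        with open(path, 'wb') as f:
            for _ in range(count):
                f.write(chunk)

    # One second of silence and 25 black frames, just to exercise the
    # pipeline; the real program would write its modified samples and
    # frames here instead.
    a = threading.Thread(target=feed, args=(audio_fifo, b'\x00\x00' * 44100, 1))
    v = threading.Thread(target=feed, args=(video_fifo, b'\x00' * (1280 * 720 * 3), 25))
    a.start(); v.start()
    a.join(); v.join()
    mux.wait()

    In C, the same structure falls out naturally: create the FIFOs with mkfifo(), start one ffmpeg command with two -i arguments via popen() or fork/exec, and fopen() each FIFO for writing in place of audio_pipeout and video_pipeout.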