
Other articles (59)

  • MediaSPIP version 0.1 Beta

    16 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    To get a working installation, it is necessary to manually install all of the software dependencies on the server.
    If you want to use this archive for an installation in farm mode, you will also need to make other modifications (...)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all of the software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to make other manual (...)

  • Improvements to the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the usability of multiple-selection fields. See the two images below for a comparison.
    To do so, simply activate the Chosen plugin (Configuration générale du site > Gestion des plugins), then configure the plugin (Les squelettes > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)
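
    On the public site, this configuration essentially amounts to applying Chosen to the configured selector with jQuery. A minimal sketch (illustrative only, not the plugin's actual code; the option shown is just an example):

// Minimal sketch: enhance every multiple-selection list with Chosen.
// Assumes jQuery and chosen.jquery.js are already loaded on the public page;
// "select[multiple]" is the selector entered in the plugin configuration.
jQuery(function ($) {
  $("select[multiple]").chosen({
    width: "100%" // example option; see the Chosen documentation for others
  });
});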

On other sites (8996)

  • HTML Video Exporting Using MediaRecorder vs ffmpeg.js

    3 October 2020, by Owen Ovadoz

    TLDR

    Imagine I have one video and one image. I want to create another video that overlays the image (e.g. a watermark) at the center for 2 seconds at the beginning of the video and export it as the final video. I need to do this on the client side only. Is it possible with MediaRecorder + Canvas, or should I resort to using ffmpeg.js?

    Context

    I am making a browser-based video editor where the user can upload videos and images and combine them. So far, I implemented this by embedding the video and images inside a canvas element appropriately. The data representation looks somewhat like this:

video: {
  url: 'https://archive.com/video.mp4',
  duration: 34,
},
images: [{
  url: 'https://archive.com/img1.jpg',
  start_time: 0,
  end_time: 2,
  top: 30,
  left: 20,
  width: 50,
  height: 50,
}]
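
    Roughly, the per-frame compositing that follows from this representation looks like the sketch below (simplified; it assumes preloaded Image objects in img.el and that top/left/width/height are percentages of the canvas size):

// Simplified sketch of drawing one frame from the representation above.
// Assumptions: `video` is the playing <video> element, each entry has a
// preloaded Image object in `img.el`, and top/left/width/height are
// percentages of the canvas dimensions.
function drawFrame(ctx, video, images) {
  const { width: W, height: H } = ctx.canvas;
  ctx.drawImage(video, 0, 0, W, H);                  // base video layer
  const t = video.currentTime;
  for (const img of images) {
    if (t >= img.start_time && t <= img.end_time) {  // overlay only while active
      ctx.drawImage(
        img.el,
        (img.left / 100) * W,
        (img.top / 100) * H,
        (img.width / 100) * W,
        (img.height / 100) * H
      );
    }
  }
}
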
    Attempts

    1. I play the video and show/hide images in the canvas. Then, I can use the MediaRecorder to capture the canvas stream and export it as a data blob at the end (see the record function below). The final output is as expected, but the problem with this approach is that I need to play the video from beginning to end in order to capture the stream from the canvas. If the video is 60 seconds, exporting it also takes 60 seconds.

function record(canvas) {
  return new Promise(function (res, rej) {
    const stream = canvas.captureStream();
    const mediaRecorder = new MediaRecorder(stream);
    const recordedData = [];

    // Register recorder events
    mediaRecorder.ondataavailable = function (event) {
      recordedData.push(event.data);
    };
    mediaRecorder.onstop = function (event) {
      var blob = new Blob(recordedData, {
        type: "video/webm",
      });
      var url = URL.createObjectURL(blob);
      res(url);
    };

    // Start the video and start recording
    videoRef.current.currentTime = 0;
    videoRef.current.addEventListener(
      "play",
      (e) => {
        mediaRecorder.start();
      },
      {
        once: true,
      }
    );
    videoRef.current.addEventListener(
      "ended",
      (e) => {
        mediaRecorder.stop();
      },
      {
        once: true,
      }
    );
    videoRef.current.play();
  });
}

    2. I can use ffmpeg.js to encode the video. I haven't tried this approach yet, since I would have to convert my image representation into ffmpeg arguments (I wonder how much work that is); a rough sketch of what that might look like is below.
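
    A rough sketch of what the TLDR case could translate to (only the -filter_complex expression is standard ffmpeg syntax; the file names, the watermark size and the way arguments are handed to the ffmpeg.js / ffmpeg.wasm build are assumptions):

// Hypothetical argument list for the TLDR case: an image centred over the
// video for the first 2 seconds. How these args are passed (and how the
// input files get into the in-memory FS) depends on the ffmpeg.js build used.
const args = [
  "-i", "video.mp4",   // main video
  "-i", "img1.jpg",    // overlay image
  "-filter_complex",
  "[1:v]scale=50:50[wm];" +                  // resize the watermark (assumed size)
  "[0:v][wm]overlay=x=(W-w)/2:y=(H-h)/2" +   // centre it over the video
  ":enable='between(t,0,2)'",                // show it only from t=0s to t=2s
  "-c:a", "copy",      // keep the source audio untouched
  "output.mp4",
];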

  • avcodec: add external enc libvvenc for H266/VVC

    5 June 2024, by Thomas Siedel
    avcodec: add external enc libvvenc for H266/VVC
    

    Add external encoder VVenC for H266/VVC encoding.
    Register new encoder libvvenc.
    Add libvvenc to wrap the vvenc interface.
    libvvenc implements the encoder options: preset, qp, qpa, period,
    passlogfile, stats, vvenc-params, level, tier.
    Enable the encoder by adding --enable-libvvenc in the configure step.

    Co-authored-by: Christian Bartnik chris10317h5@gmail.com
    Signed-off-by: Thomas Siedel <thomas.ff@spin-digital.com>

    • [DH] Changelog
    • [DH] configure
    • [DH] doc/encoders.texi
    • [DH] fftools/ffmpeg_mux_init.c
    • [DH] libavcodec/Makefile
    • [DH] libavcodec/allcodecs.c
    • [DH] libavcodec/libvvenc.c
    • [DH] libavcodec/version.h
  • ffmpeg audio output in iOS

    19 September 2015, by user3249421

    Good day,

    I have my own project which uses iFrameExtractor (https://github.com/lajos/iFrameExtractor). I modified the initWithVideo method to:

    -(id)initWithVideo:(NSString *)moviePath imgView: (UIImageView *)imgView {
    if (!(self=[super init])) return nil;

    AVCodec         *pCodec;
    AVCodec         *aCodec;

    // Register all formats and codecs
    avcodec_register_all();
    av_register_all();

    imageView = imgView;

    // Open video file
    if(avformat_open_input(&pFormatCtx, [moviePath cStringUsingEncoding:NSASCIIStringEncoding], NULL, NULL) != 0) {
       av_log(NULL, AV_LOG_ERROR, "Couldn't open file\n");
       goto initError;
    }

    // Retrieve stream information
    if(avformat_find_stream_info(pFormatCtx, NULL) < 0) {
       av_log(NULL, AV_LOG_ERROR, "Couldn't find stream information\n");
       goto initError;
    }

    // Find the first video stream
    if ((videoStream = av_find_best_stream(pFormatCtx, AVMEDIA_TYPE_VIDEO, -1, -1, &pCodec, 0)) < 0) {
       av_log(NULL, AV_LOG_ERROR, "Cannot find a video stream in the input file\n");
       goto initError;
    }

    if((audioStream = av_find_best_stream(pFormatCtx, AVMEDIA_TYPE_AUDIO, -1, -1, &aCodec, 0)) < 0) {
       av_log(NULL, AV_LOG_ERROR, "Cannot find an audio stream in the input file\n");
       goto initError;
    }

    // Get a pointer to the codec context for the video stream
    pCodecCtx = pFormatCtx->streams[videoStream]->codec;
    aCodecCtx = pFormatCtx->streams[audioStream]->codec;

    // Find the decoder for the video stream
    pCodec = avcodec_find_decoder(pCodecCtx->codec_id);
    if(pCodec == NULL) {
       av_log(NULL, AV_LOG_ERROR, "Unsupported video codec!\n");
       goto initError;
    }

    aCodec = avcodec_find_decoder(aCodecCtx->codec_id);
    if(aCodec == NULL) {
       av_log(NULL, AV_LOG_ERROR, "Unsupported audio codec!\n");
       goto initError;
    }

    // Open codec
    if(avcodec_open2(pCodecCtx, pCodec, NULL) < 0) {
       av_log(NULL, AV_LOG_ERROR, "Cannot open video decoder\n");
       goto initError;
    }

    if(avcodec_open2(aCodecCtx, aCodec, NULL) < 0) {
       av_log(NULL, AV_LOG_ERROR, "Cannot open audio decoder\n");
       goto initError;
    }

    // Allocate video frame
    pFrame = av_frame_alloc();

    outputWidth = pCodecCtx->width;
    self.outputHeight = pCodecCtx->height;

    lastFrameTime = -1;
    [self seekTime:0.0];

    return self;

    initError:
       //[self release];
       return nil;
    }

    Video rendering works fine, but I don't know how to play the audio through the device output.

    Thanks for any tips.