Advanced search

Media (0)

Keyword: - Tags -/organisation

No media matching your criteria is available on the site.

Other articles (77)

  • MediaSPIP version 0.1 Beta

    16 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    To get a working installation, you must manually install all the software dependencies on the server.
    If you want to use this archive for a "farm mode" installation, you will also need to make other modifications (...)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all the software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to make other manual (...)

  • Improvements to the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the usability of multiple-selection fields. See the following two images to compare.
    To use it, simply enable the Chosen plugin (Configuration générale du site > Gestion des plugins), then configure it (Les squelettes > Chosen), enabling the use of Chosen on the public site and specifying the form elements to enhance, for example select[multiple] for multiple-selection lists (...)

On other sites (10600)

  • How do I independently fade in/out multiple (3+) overlay images over video using FFMPEG?

    3 March 2017, by blahblahber

    Using an arbitrary source video, I would like to INDEPENDENTLY fade in/fade out a minimum of three .png overlays/watermarks at various times throughout the video. I’m having trouble getting the syntax right for the filter chain.

    In these failed attempts below, I’m using four transparent .png images, all at 1920x1080, over a source input video of the same size. No scaling or positioning is needed, just the overlays fading in and out at the defined times.

    I have the functionality working without fade, using ’enable’, like so:

    ffmpeg -i vid1.mp4 -loop 1 -i img1.png -i img2.png -i img3.png -i img4.png -filter_complex
    "overlay=0:0:enable='between(t,8,11)' [tmp];
    [tmp]overlay=0:0:enable='between(t,10,15)'[tmp1];
    [tmp1]overlay=0:0:enable='between(t,15,138)'[tmp2];
    [tmp2]overlay=0:0:enable='between(t,140,150)'"
    -c:v libx264 -c:a copy
    -flags +global_header -shortest -s 1920x1080 -y out.mp4

    I just want the same control, using fade in/out.

    The following almost works as I’d like, but I obviously don’t want the entire output stream to fade out. I realize that adding the fade=out to lines 7, 8 & 9 fades the combined output (starting at line 7), but this is as close as I’ve come to seeing each overlay image actually fade. The fades defined in lines 3, 4 and 5 apparently don’t affect anything, and that’s where I defined them originally. When I copied them to the output stream, the fade worked on each overlay image, but again, I don’t want it to affect the entire output stream, just the individual overlays.

    ffmpeg -i vid1.mp4 -loop 1 -i img1.png -i img2.png -i img3.png -i img4.png -filter_complex
       "[1:v]fade=out:st=3:d=1[watermark0];
       [2:v]fade=out:st=4:d=1[watermark1];
       [3:v]fade=out:st=5:d=1[watermark2];
       [4:v]fade=out:st=6:d=1[watermark3];
       [0:v][watermark0] overlay=0:0 [tmp0];
       [tmp0][watermark1] overlay=0:0,fade=out:st=4:d=1 [tmp1];
       [tmp1][watermark2] overlay=0:0,fade=out:st=6:d=1 [tmp2];
       [tmp2][watermark3] overlay=0:0,fade=out:st=8:d=1 [out]" -map "[out]" -c:v libx264 -c:a copy
       -flags +global_header -shortest -s 1920x1080 -y out.mp4

    I’ve also tried ’split’, with similar results to the above, but the fade only seems to work on the first image (this one uses fade in as well):

    ffmpeg -i vid.mp4 -loop 1 -i img1.png -i img2.png -i img3.png -i img4.png -filter_complex
    "[1:v]split=4[wm1][wm2][wm3][wm4];
    [wm1]fade=in:st=1:d=1:alpha=1,fade=out:st=3:d=1:alpha=1[ovr1];
    [wm2]fade=in:st=2:d=1:alpha=1,fade=out:st=4:d=1:alpha=1[ovr2];
    [wm3]fade=in:st=3:d=1:alpha=1,fade=out:st=5:d=1:alpha=1[ovr3];
    [wm4]fade=in:st=4:d=1:alpha=1,fade=out:st=6:d=1:alpha=1[ovr4];
    [0:v][ovr1]overlay=0:0[base1];
    [base1][ovr2]overlay=0:0[base2];
    [base2][ovr3]overlay=0:0[base3];
    [base3][ovr4]overlay=0:0[out]" -map "[out]"
    -t 10 -c:v libx264 -c:a copy -flags +global_header -shortest -s 1920x1080 -y out.mp4

    Any help is greatly appreciated! :)
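
    One plausible fix (an untested sketch, not from the original post): -loop 1 is an input option that applies only to the input that follows it, so in the commands above only img1.png is actually looped and the other images never last beyond t=0. Looping each image individually and applying the alpha fades to each image stream before it is overlaid (never to the composited stream) should give independent fades:

    ffmpeg -i vid1.mp4 -loop 1 -i img1.png -loop 1 -i img2.png -loop 1 -i img3.png -loop 1 -i img4.png -filter_complex
    "[1:v]fade=in:st=1:d=1:alpha=1,fade=out:st=3:d=1:alpha=1[ovr1];
    [2:v]fade=in:st=2:d=1:alpha=1,fade=out:st=4:d=1:alpha=1[ovr2];
    [3:v]fade=in:st=3:d=1:alpha=1,fade=out:st=5:d=1:alpha=1[ovr3];
    [4:v]fade=in:st=4:d=1:alpha=1,fade=out:st=6:d=1:alpha=1[ovr4];
    [0:v][ovr1]overlay=0:0[base1];
    [base1][ovr2]overlay=0:0[base2];
    [base2][ovr3]overlay=0:0[base3];
    [base3][ovr4]overlay=0:0[out]" -map "[out]" -map 0:a?
    -c:v libx264 -c:a copy -flags +global_header -shortest -s 1920x1080 -y out.mp4

    Here alpha=1 makes fade act on the alpha plane only, so each overlay dissolves in and out without darkening the video underneath (this requires the PNGs to keep their transparency), and -shortest ends the output with the main video rather than with the endlessly looped images.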

  • Extract all audio from a mov file into an aac file [closed]

    5 February 2024, by Sushrut Kaul

    I have a video which contains several streams, as shown below:

    Stream #0:0[0x1](eng): Video: prores (XQ) (ap4x / 0x78347061), yuv444p12le(bt709/smpte432/smpte2084, progressive), 3840x2160, 1222881 kb/s, SAR 1:1 DAR 16:9, 23.98 fps, 23.98 tbr, 24k tbn (default)
      Metadata:
        creation_time   : 2022-08-26T22:20:26.000000Z
        handler_name    : Apple Video Media Handler
        vendor_id       : appl
        encoder         : Apple ProRes 4444 XQ
        timecode        : 00:59:59:00
      Side data:
        Content Light Level Metadata, MaxCLL=726, MaxFALL=93
    Stream #0:1[0x2](eng): Audio: pcm_s24le (in24 / 0x34326E69), 48000 Hz, 1 channels (FL), s32 (24 bit), 1152 kb/s (default)
      Metadata:
        creation_time   : 2022-08-26T22:20:26.000000Z
        handler_name    : Apple Sound Media Handler
        vendor_id       : [0][0][0][0]
    Stream #0:2[0x3](eng): Audio: pcm_s24le (in24 / 0x34326E69), 48000 Hz, 1 channels (FR), s32 (24 bit), 1152 kb/s (default)
      Metadata:
        creation_time   : 2022-08-26T22:20:26.000000Z
        handler_name    : Apple Sound Media Handler
        vendor_id       : [0][0][0][0]
    Stream #0:3[0x4](eng): Audio: pcm_s24le (in24 / 0x34326E69), 48000 Hz, mono, s32 (24 bit), 1152 kb/s (default)
      Metadata:
        creation_time   : 2022-08-26T22:20:26.000000Z
        handler_name    : Apple Sound Media Handler
        vendor_id       : [0][0][0][0]
    Stream #0:4[0x5](eng): Audio: pcm_s24le (in24 / 0x34326E69), 48000 Hz, 1 channels (LFE), s32 (24 bit), 1152 kb/s (default)
      Metadata:
        creation_time   : 2022-08-26T22:20:26.000000Z
        handler_name    : Apple Sound Media Handler
        vendor_id       : [0][0][0][0]
    Stream #0:5[0x6](eng): Audio: pcm_s24le (in24 / 0x34326E69), 48000 Hz, 1 channels (BL), s32 (24 bit), 1152 kb/s (default)
      Metadata:
        creation_time   : 2022-08-26T22:20:26.000000Z
        handler_name    : Apple Sound Media Handler
        vendor_id       : [0][0][0][0]
    Stream #0:6[0x7](eng): Audio: pcm_s24le (in24 / 0x34326E69), 48000 Hz, 1 channels (BR), s32 (24 bit), 1152 kb/s (default)
      Metadata:
        creation_time   : 2022-08-26T22:20:26.000000Z
        handler_name    : Apple Sound Media Handler
        vendor_id       : [0][0][0][0]
    Stream #0:7[0x8](eng): Audio: pcm_s24le (in24 / 0x34326E69), 48000 Hz, 1 channels (DL), s32 (24 bit), 1152 kb/s (default)
      Metadata:
        creation_time   : 2022-08-26T22:20:26.000000Z
        handler_name    : Apple Sound Media Handler
        vendor_id       : [0][0][0][0]
    Stream #0:8[0x9](eng): Audio: pcm_s24le (in24 / 0x34326E69), 48000 Hz, 1 channels (DR), s32 (24 bit), 1152 kb/s (default)
      Metadata:
        creation_time   : 2022-08-26T22:20:26.000000Z
        handler_name    : Apple Sound Media Handler
        vendor_id       : [0][0][0][0]

    As you can see, these audio streams are single-channel: each stream maps to a unique channel.

    I want to combine these audio streams into a single AAC file in which the channel information is retained.

    I use the following command to do this:

    ffmpeg -i -filter_complex "[0:a:0][0:a:1][0:a:2][0:a:3][0:a:4][0:a:5][0:a:6][0:a:7]join=inputs=8:channel_layout=7.1[aout]" -map "[aout]" -c:a aac output_audio_join.aac

    The individual audio streams each report duration=103.728625 seconds.

    The output, however, is 862.207660 seconds long. Why?
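
    A plausible explanation (an assumption, not from the original post): a bare .aac file is a raw ADTS bitstream with no container-level duration field, so ffprobe and most players estimate its length from the file size and an assumed bitrate, which can be far off for 8-channel audio. Muxing the same encode into an MP4/M4A container stores the real duration. A sketch of the same command, with input.mov standing in for the source file (its name is missing from the command above):

    ffmpeg -i input.mov -filter_complex "[0:a:0][0:a:1][0:a:2][0:a:3][0:a:4][0:a:5][0:a:6][0:a:7]join=inputs=8:channel_layout=7.1[aout]" -map "[aout]" -c:a aac output_audio_join.m4a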

  • Encoding raw YUV420P to h264 with AVCodec on iOS

    4 January 2013, by Wade

    I am trying to encode a single YUV420P image gathered from a CMSampleBuffer to an AVPacket so that I can send h264 video over the network with RTMP.

    The posted code example seems to work, as avcodec_encode_video2 returns 0 (success); however, got_output is also 0 (the AVPacket is empty).

    Does anyone have any experience with encoding video on iOS devices that might know what I am doing wrong?

    - (void) captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection {

     // sampleBuffer now contains an individual frame of raw video frames
     CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

     CVPixelBufferLockBaseAddress(pixelBuffer, 0);

     // access the data
     int width = CVPixelBufferGetWidth(pixelBuffer);
     int height = CVPixelBufferGetHeight(pixelBuffer);
     int bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
     unsigned char *rawPixelBase = (unsigned char *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);


     // Convert the raw pixel base to h.264 format
     AVCodec *codec = 0;
     AVCodecContext *context = 0;
     AVFrame *frame = 0;
     AVPacket packet;

     //avcodec_init();
     avcodec_register_all();
     codec = avcodec_find_encoder(AV_CODEC_ID_H264);

     if (codec == 0) {
       NSLog(@"Codec not found!!");
       return;
     }

     context = avcodec_alloc_context3(codec);

     if (!context) {
       NSLog(@"Context no bueno.");
       return;
     }

     // Bit rate
     context->bit_rate = 400000; // HARD CODE
     context->bit_rate_tolerance = 10;
     // Resolution
     context->width = width;
     context->height = height;
     // Frames Per Second
     context->time_base = (AVRational) {1,25};
     context->gop_size = 1;
     //context->max_b_frames = 1;
     context->pix_fmt = PIX_FMT_YUV420P;

     // Open the codec
     if (avcodec_open2(context, codec, 0) < 0) {
       NSLog(@"Unable to open codec");
       return;
     }


     // Create the frame
     frame = avcodec_alloc_frame();
     if (!frame) {
       NSLog(@"Unable to alloc frame");
       return;
     }
     frame->format = context->pix_fmt;
     frame->width = context->width;
     frame->height = context->height;


     avpicture_fill((AVPicture *) frame, rawPixelBase, context->pix_fmt, frame->width, frame->height);
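     // Note: avpicture_fill assumes the pixel buffer holds contiguous planar
     // YUV420P data. iOS capture buffers are typically bi-planar NV12
     // (kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange), in which case each
     // plane would need to be copied or converted into the frame separately.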

     int got_output = 0;
     av_init_packet(&packet);
     avcodec_encode_video2(context, &packet, frame, &got_output);
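     // Note: the encoder buffers frames internally, so got_output can be 0 even
     // when the call succeeds; packets only appear after the encoder delay, and
     // the remaining ones must be flushed at end of stream by calling
     // avcodec_encode_video2 with a NULL frame.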

     // Unlock the pixel data
     CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
     // Send the data over the network
     [self uploadData:[NSData dataWithBytes:packet.data length:packet.size] toRTMP:self.rtmp_OutVideoStream];
    }

    Note: it is known that this code leaks memory because the dynamically allocated buffers are never freed.

    UPDATE

    I updated my code to use @pogorskiy’s method. I only try to upload the frame if got_output returns 1, and I clear the buffer once I am done encoding video frames.