
Other articles (87)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, provided your MediaSPIP installation is at version 0.2 or higher. If needed, contact your MediaSPIP administrator to find out.

  • From upload to the final video [standalone version]

    31 January 2010

    The path of an audio or video document through SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First, a SPIP article has to be created and the "source" video document attached to it.
    When this document is attached to the article, two actions beyond the normal behavior are executed: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)
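    The two extra actions described here can be sketched with stock command-line tools; these are illustrative invocations, not necessarily the exact ones SPIPMotion runs internally:

```shell
# Retrieve the technical information of the audio and video streams:
ffprobe -v error -show_format -show_streams source.mp4

# Generate a thumbnail by extracting one representative frame:
ffmpeg -i source.mp4 -vf thumbnail -frames:v 1 thumbnail.jpg
```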

  • Libraries and binaries specific to video and audio processing

    31 January 2010

    The following software and libraries are used by SPIPmotion in one way or another.
    Required binaries FFmpeg: the main encoder; transcodes almost all types of video and audio files into formats playable on the web. See this tutorial for its installation; Oggz-tools: tools for inspecting ogg files; Mediainfo: retrieves information from most video and audio formats;
    Complementary and optional binaries flvtool2: (...)
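    As an illustration of the division of labor above (assumed invocations; SPIPmotion's actual calls may differ):

```shell
# Mediainfo: dump technical metadata from most container formats:
mediainfo source.avi

# FFmpeg: transcode the source into a web-playable format
# (assumes an ffmpeg build with libx264 and AAC support):
ffmpeg -i source.avi -c:v libx264 -c:a aac output.mp4
```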

On other sites (8367)

  • FFmpeg check channels of a 7.1 audio for silence

    24 September 2019, by Tina J

    This is a follow-up to my previous question asked here, where I needed to look for silence within a specific audio track. Here is the life-saving ffmpeg solution that helps get some metadata:

    ffmpeg -i file -map 0:a:1 -af astats -f null -

    But I have another type of input .mp4 file, with a single track of 8 (i.e. 7.1) audio channels. Apparently these files were transcoded from an original file (somehow the 4 stereo tracks were squeezed into them). Now, similar to my previous question, I need to know whether the original file was 2-channel stereo or 5.1 (6-channel).

    How can I tell whether a specific channel of an audio track (say, the center channel) is silent/muted, possibly using ffmpeg? Here is a sample .mp4 file.
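    One way to probe a single channel is to fold the 7.1 track down to just that channel with the pan filter and measure it. A sketch, assuming the track is 0:a:0 and the center channel carries the standard FC name:

```shell
# Extract only the center (FC) channel and run astats on it;
# an "RMS level dB" near -inf indicates silence:
ffmpeg -i input.mp4 -map 0:a:0 -af "pan=mono|c0=FC,astats" -f null -

# silencedetect reports silent ranges on that channel directly:
ffmpeg -i input.mp4 -map 0:a:0 -af "pan=mono|c0=FC,silencedetect=n=-60dB:d=2" -f null -
```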

  • iOS crash when converting yuvj420p to CVPixelBufferRef using ffmpeg

    20 March 2020, by jansma

    I need to get an RTSP stream from an IP camera and convert the AVFrame data to a CVPixelBufferRef
    in order to send the data to another SDK.

    First I use avcodec_decode_video2 to decode the video data.

    After decoding the video, I convert the data to a CVPixelBufferRef. This is my code:

    size_t srcPlaneSize = pVideoFrame_->linesize[1]*pVideoFrame_->height;
    size_t dstPlaneSize = srcPlaneSize *2;
    uint8_t *dstPlane = malloc(dstPlaneSize);
    void *planeBaseAddress[2] = { pVideoFrame_->data[0], dstPlane };

    // This loop is very naive and assumes that the line sizes are the same.
    // It also copies padding bytes.
    assert(pVideoFrame_->linesize[1] == pVideoFrame_->linesize[2]);
    for (size_t i = 0; i < srcPlaneSize; i++) { // These might be the wrong way round.
       dstPlane[2*i  ]=pVideoFrame_->data[2][i];
       dstPlane[2*i+1]=pVideoFrame_->data[1][i];
    }

    // This assumes the width and height are even (it's 420 after all).
    assert(!pVideoFrame_->width%2 && !pVideoFrame_->height%2);
    size_t planeWidth[2] = {pVideoFrame_->width, pVideoFrame_->width/2};
    size_t planeHeight[2] = {pVideoFrame_->height, pVideoFrame_->height/2};
    // I'm not sure where you'd get this.
    size_t planeBytesPerRow[2] = {pVideoFrame_->linesize[0], pVideoFrame_->linesize[1]*2};
    int ret = CVPixelBufferCreateWithPlanarBytes(
           NULL,
           pVideoFrame_->width,
           pVideoFrame_->height,
           kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
           NULL,
           0,
           2,
           planeBaseAddress,
           planeWidth,
           planeHeight,
           planeBytesPerRow,
           NULL,
           NULL,
           NULL,
           &pixelBuf);

    After I run the app, it crashes on

    dstPlane[2*i  ]=pVideoFrame_->data[2][i];

    How can I resolve this issue?

    This is the console output in Xcode:

    All info found
    Setting avg frame rate based on r frame rate
    stream 0: start_time: 0.080 duration: -102481911520608.625
    format: start_time: 0.080 duration: -9223372036854.775 bitrate=0 kb/s
    nal_unit_type: 0, nal_ref_idc: 0
    nal_unit_type: 7, nal_ref_idc: 3
    nal_unit_type: 0, nal_ref_idc: 0
    nal_unit_type: 8, nal_ref_idc: 3
    Ignoring NAL type 0 in extradata
    Ignoring NAL type 0 in extradata
    nal_unit_type: 7, nal_ref_idc: 3
    nal_unit_type: 8, nal_ref_idc: 3
    nal_unit_type: 6, nal_ref_idc: 0
    nal_unit_type: 5, nal_ref_idc: 3
    unknown SEI type 229
    Reinit context to 800x608, pix_fmt: yuvj420p
    (lldb)

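    A plausible cause of the crash: in 4:2:0 the chroma planes are only height/2 rows tall, so a loop bound of linesize[1] * height walks far past the end of data[1] and data[2], which matches the crash site. A self-contained sketch of a bounds-correct interleave (plain C, with the FFmpeg frame replaced by raw pointers; the Cb-then-Cr byte order is the usual assumption for kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange):

```c
#include <stdint.h>

/* Interleave separate U and V planes (4:2:0 layout) into the single
 * CbCr plane expected by an NV12-style pixel buffer. The chroma planes
 * hold only (height/2) rows of (width/2) samples each, so these loops
 * never read past the end of either source plane. */
static void interleave_uv(const uint8_t *u, const uint8_t *v,
                          int chroma_linesize, int width, int height,
                          uint8_t *dst)
{
    int chroma_w = width / 2, chroma_h = height / 2;
    for (int row = 0; row < chroma_h; row++) {
        const uint8_t *us = u + row * chroma_linesize; /* data[1] row */
        const uint8_t *vs = v + row * chroma_linesize; /* data[2] row */
        uint8_t *d = dst + row * chroma_w * 2;
        for (int col = 0; col < chroma_w; col++) {
            d[2 * col]     = us[col]; /* Cb */
            d[2 * col + 1] = vs[col]; /* Cr */
        }
    }
}
```

    With this layout, the destination plane only needs (width/2) * (height/2) * 2 bytes, and its bytes-per-row is simply width (Cb and Cr interleaved), rather than linesize[1] * 2.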

  • How to stitch (concat) two transport streams with two different resolutions and I-frame slice formats without losing resolution and slice information

    2 October 2019, by AnkurTank

    I have been trying to test a use case with a stream captured from a multimedia device, and that didn't work. I have then been trying to create this specific transport stream for about two days now without success, so I am requesting some help.

    I need to create a transport stream with two different resolutions and two different slicing formats.

    I divided the task into the following steps; in the last two steps I need help.

    Step 1: Download a sample video with resolution 1920x1080.
    I downloaded the Big Buck Bunny mp4.

    Step 2: Create a transport stream with the following:
    resolution: 1920x720, H264 I-frame slices per frame: 1
    I used the following ffmpeg commands to do that.

    #Rename file to input.mp4
    $ mv bbb_sunflower_1080p_30fps_normal.mp4 input.mp4
    #Extract transport stream
    $ ffmpeg -i input.mp4 -c copy first.ts

    first.ts has a resolution of 1980x720 and one H264 I slice per frame.

    Step 3: Create another transport stream with a smaller resolution using the following commands.

    #Get mp4 with lower resolution.
    $ ffmpeg -i input.mp4 -s 640x480 temp.mp4
    #Extract transport stream from mp4
    $ ffmpeg -i temp.mp4 -c copy low_r.ts

    Step 4: Edit (and re-encode?) low_r.ts to have two H264 I-frame slices.
    I used the following command to achieve it.

    $ x264 --slices 4 low_r.ts -o second.ts

    However, when I play this second.ts in VLC using the following command, it doesn't play.

    $ vlc ./second.ts

    And when I analyze the transport stream with the Elecard StreamEye software, I see that it has 4 H264 I slices at only two points in time; other than that, lots of H264 P slices and H264 B slices.
    I need help here to figure out why second.ts doesn't play and why the slicing is not correct.
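    One likely reason second.ts does not play: when the x264 CLI does not recognize the output extension as one of its supported muxers (mp4, mkv, flv), it writes a raw H.264 elementary stream, so the file is not actually a transport stream despite its name. A sketch of doing the re-encode and the TS muxing in a single ffmpeg step instead (assumes an ffmpeg build with libx264):

```shell
# Re-encode with 4 slices per frame and mux straight into MPEG-TS:
ffmpeg -i low_r.ts -c:v libx264 -x264-params slices=4 -c:a copy second.ts
```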

    Step 5: Combine both transport streams without losing resolution and slicing information.
    I don't know the command for this. Need help here.
    I tried ffmpeg, but it combines the two streams with different resolutions and makes one file with a single resolution.

    Any suggestions/pointers would help me proceed. Also let me know if any of the above steps are wrong.
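    For step 5: MPEG-TS is one of the few containers designed for byte-level concatenation, so the segments can simply be appended, or joined with ffmpeg's concat demuxer in stream-copy mode so neither segment is re-encoded. Either way each segment keeps its own resolution and slice layout; whether a player switches resolution mid-stream cleanly is up to the player. A sketch:

```shell
# Plain byte-level concatenation (valid for MPEG-TS):
cat first.ts second.ts > combined.ts

# Or with the concat demuxer, copying streams so nothing is re-encoded:
printf "file 'first.ts'\nfile 'second.ts'\n" > list.txt
ffmpeg -f concat -safe 0 -i list.txt -c copy combined.ts
```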