
Media (91)

Other articles (89)

  • Updating from version 0.1 to 0.2

    24 June 2013

    Explanation of the various notable changes when moving from MediaSPIP version 0.1 to version 0.3. What are the new features?
    Regarding software dependencies: use of the latest versions of FFMpeg (>= v1.2.1); installation of the dependencies for Smush; installation of MediaInfo and FFprobe to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)

  • Customising by adding your logo, banner or background image

    5 September 2013

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013

    Present the changes in your MediaSPIP, or news about your projects, on your MediaSPIP via the news section.
    In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customise the form used to create a news item.
    News item creation form: For a document of type news item, the default fields are: publication date (customise the publication date) (...)

On other sites (11923)

  • Remove unwanted rotation of video when merging audio with video using FFMPEG

    2 October 2015, by bhavesh kaila

    I want to replace the audio of a recorded video file with a separately recorded audio file. For that I am using the FFMPEG library project from this link.

    Issue:
    When I capture video in portrait mode and then merge it with the recorded audio, the merge itself works, but the video gets rotated 90 degrees before merging. I don't want the video rotated; I only want the merge.

    If I capture video in landscape mode, merging the audio works fine.

    Below is the FFMPEG command currently used to merge the audio with the video file:

    ffmpeg -y -i Video.mp4 -i Audio.mp4 -c:v copy -c:a copy -strict experimental -map 0:v:0 -map 1:a:0 OutputFile.mp4

    I have also tried the commands below, but they do not work:

    ffmpeg -y -i Video.mp4 -i Audio.mp4 -c:v copy -c:a copy -strict experimental -map 0:v:0 -map 1:a:0  -vf -metadata:s:v:0 rotate=0 OutputFile.mp4

    And

    ffmpeg -y -i Video.mp4 -i Audio.mp4 -c:v copy -c:a copy -strict experimental -map 0:v:0 -map 1:a:0  -vf -metadata:s:v:0 translate=1 OutputFile.mp4

    I have tried other possibilities as well, but nothing worked for me.

    Any help would be appreciated. Thanks in advance.

    The logcat output is shown below:

    WARNING: linker: /data/data/com.informer.favoraid/app_bin/ffmpeg has text relocations. This is wasting memory and prevents security hardening. Please fix.
    ffmpeg version 0.11.1
    built on Feb  7 2015 21:39:25 with gcc 4.6 20120106 (prerelease)
    configuration: --arch=arm --cpu=cortex-a8 --target-os=linux --enable-runtime-cpudetect --prefix=/data/data/info.guardianproject.ffmpeg/app_opt --enable-pic --disable-shared --enable-static --cross-prefix=/home/josh/android-ndk/toolchains/arm-linux-androideabi-4.6/prebuilt/linux-x86_64/bin/arm-linux-androideabi- --sysroot=/home/josh/android-ndk/platforms/android-16/arch-arm --extra-cflags='-I../x264 -mfloat-abi=softfp -mfpu=neon -fPIE -pie' --extra-ldflags='-L../x264 -fPIE -pie' --enable-version3 --enable-gpl --disable-doc --enable-yasm --enable-decoders --enable-encoders --enable-muxers --enable-demuxers --enable-parsers --enable-protocols --enable-filters --enable-avresample --enable-libfreetype --disable-indevs --enable-indev=lavfi --disable-outdevs --enable-hwaccels --enable-ffmpeg --disable-ffplay --disable-ffprobe --disable-ffserver --disable-network --enable-libx264 --enable-zlib --enable-muxer=md5
    libavutil      51. 54.100 / 51. 54.100
    libavcodec     54. 23.100 / 54. 23.100
    libavformat    54.  6.100 / 54.  6.100
    libavdevice    54.  0.100 / 54.  0.100
    libavfilter     2. 77.100 /  2. 77.100
    libswscale      2.  1.100 /  2.  1.100
    libswresample   0. 15.100 /  0. 15.100
    libpostproc    52.  0.100 / 52.  0.100
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/storage/emulated/0/Android/data/com.informer.favoraid/files/MP4_FAV20151001_171355_-1450037636.mp4':
     Metadata:
        major_brand     : mp42
        minor_version   : 0
        compatible_brands: isommp42
        creation_time   : 2015-10-01 11:44:06
      Duration: 00:00:04.80, start: 0.000000, bitrate: 15488 kb/s
        Stream #0:0(eng): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1920x1080, 16004 kb/s, 29.97 fps, 30 tbr, 90k tbn, 180k tbc
        Metadata:
          rotate          : 90
          creation_time   : 2015-10-01 11:44:06
          handler_name    : VideoHandle
        Stream #0:1(eng): Audio: aac (mp4a / 0x6134706D), 48000 Hz, stereo, s16, 128 kb/s
       Metadata:
          creation_time   : 2015-10-01 11:44:06
         handler_name    : SoundHandle
    Input #1, mov,mp4,m4a,3gp,3g2,mj2, from '/storage/emulated/0/Android/data/com.informer.favoraid/Audio_Recording.mp4':
      Metadata:
        major_brand     : mp42
        minor_version   : 0
        compatible_brands: isommp42
       creation_time   : 2015-10-01 11:45:13
     Duration: 00:00:05.24, start: 0.000000, bitrate: 18 kb/s
        Stream #1:0(eng): Audio: aac (mp4a / 0x6134706D), 8000 Hz, mono, s16, 12 kb/s
        Metadata:
          creation_time   : 2015-10-01 11:45:13
          handler_name    : SoundHandle
    Output #0, mp4, to '/storage/emulated/0/Android/data/com.informer.favoraid/OutputFile.mp4':
      Metadata:
        major_brand     : mp42
        minor_version   : 0
        compatible_brands: isommp42
        creation_time   : 2015-10-01 11:44:06
        encoder         : Lavf54.6.100
        Stream #0:0(eng): Video: h264 (![0][0][0] / 0x0021), yuv420p, 1920x1080, q=2-31, 16004 kb/s, 29.97 fps, 90k tbn, 90k tbc
        Metadata:
         rotate          : 90
        creation_time   : 2015-10-01 11:44:06
         handler_name    : VideoHandle
        Stream #0:1(eng): Audio: aac (@[0][0][0] / 0x0040), 8000 Hz, mono, 12 kb/s
        Metadata:
          creation_time   : 2015-10-01 11:45:13
          handler_name    : SoundHandle
    Stream mapping:
      Stream #0:0 -> #0:0 (copy)
      Stream #1:0 -> #0:1 (copy)
    Press [q] to stop, [?] for help
    frame=  138 fps=0.0 q=-1.0 Lsize=    9008kB time=00:00:04.57 bitrate=16141.8kbits/s    
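
    In both attempted commands, -metadata:s:v:0 rotate=0 is placed after -vf, but -vf expects a filtergraph as its argument, so the metadata option is never applied as intended. -metadata:s:v:0 is a standalone output option. A hedged sketch of the corrected command (whether the rotate tag can be overridden this way depends on the ffmpeg build; 0.11 is quite old):

```shell
# Clear the rotate tag on the video stream while stream-copying;
# players then stop auto-rotating the portrait recording.
ffmpeg -y -i Video.mp4 -i Audio.mp4 \
  -map 0:v:0 -map 1:a:0 \
  -c:v copy -c:a copy \
  -metadata:s:v:0 rotate=0 \
  OutputFile.mp4
```

    Note that this only changes the rotation hint that players read; the stored pixels are untouched. If the pixels themselves need rotating, the video must be re-encoded with the transpose filter instead of using -c:v copy.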
  • How to stop os.system() in Python?

    30 September 2015, by 吴雨羲

    I want to stop the cmd command after 12 seconds. How can I stop it? My program doesn't work.

    import multiprocessing
    import os
    import time


    def process():
       os.system('ffmpeg -i rtsp://218.204.223.237:554/live/1/66251FC11353191F/e7ooqwcfbqjoo80j.sdp -c copy dump.mp4')


    def stop():
       time.sleep(12)


    if __name__ == '__main__':
       p = multiprocessing.Process(target=process, args=())
       s = multiprocessing.Process(target=stop, args=())
       p.start()
       s.start()
       s.join()
       p.terminate()

    I changed my program following @Pedro Lobito's suggestion, but it still doesn't work.

    import shlex

    import subprocess
    import time

    command_line = 'ffmpeg -i rtsp://218.204.223.237:554/live/1/66251FC11353191F/e7ooqwcfbqjoo80j.sdp -c copy dump.mp4'

    proc = subprocess.Popen(shlex.split(command_line), shell=True)
    print '1' * 50
    time.sleep(2)  # <-- sleep for 12''
    print '2' * 50
    proc.terminate()  # <-- terminate the process
    print '3' * 50

    And the result in CMD is

    D:\wyx\workspace\python\ffrstp>python test1.py
    11111111111111111111111111111111111111111111111111
    ffmpeg version N-75563-g235381e Copyright (c) 2000-2015 the FFmpeg developers
     built with gcc 4.9.3 (GCC)
     configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-av
    isynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enab
    le-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --
    enable-libdcadec --enable-libfreetype --enable-libgme --enable-libgsm --enable-l
    ibilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enab
    le-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --en
    able-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --ena
    ble-libtwolame --enable-libvidstab --enable-libvo-aacenc --enable-libvo-amrwbenc
    --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enabl
    e-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-lzma --ena
    ble-decklink --enable-zlib
     libavutil      55.  2.100 / 55.  2.100
     libavcodec     57.  3.100 / 57.  3.100
     libavformat    57.  2.100 / 57.  2.100
     libavdevice    57.  0.100 / 57.  0.100
     libavfilter     6.  8.100 /  6.  8.100
     libswscale      4.  0.100 /  4.  0.100
     libswresample   2.  0.100 /  2.  0.100
     libpostproc    54.  0.100 / 54.  0.100
    Input #0, rtsp, from 'rtsp://218.204.223.237:554/live/1/66251FC11353191F/e7ooqwc
    fbqjoo80j.sdp':
     Metadata:
       title           : RTSP Session
       comment         : Jabsco Stream(JCO-jy9757acx1eve7nm-a104aea23c1e17bbc776656
    f5069bbf7)
     Duration: N/A, start: 0.000000, bitrate: N/A
       Stream #0:0: Video: mpeg4 (Simple Profile), yuv420p, 352x288 [SAR 1:1 DAR 11
    :9], 10k tbr, 90k tbn, 10k tbc
    [mp4 @ 00bad520] Codec for stream 0 does not use global headers but container fo
    rmat requires global headers
    Output #0, mp4, to 'dump.mp4':
     Metadata:
       title           : RTSP Session
       comment         : Jabsco Stream(JCO-jy9757acx1eve7nm-a104aea23c1e17bbc776656
    f5069bbf7)
       encoder         : Lavf57.2.100
       Stream #0:0: Video: mpeg4 ( [0][0][0] / 0x0020), yuv420p, 352x288 [SAR 1:1 D
    AR 11:9], q=2-31, 10k tbr, 90k tbn, 90k tbc
    Stream mapping:
     Stream #0:0 -> #0:0 (copy)
    Press [q] to stop, [?] for help
    [mp4 @ 00bad520] pts has no value
    [mp4 @ 00bad520] Non-monotonous DTS in output stream 0:0; previous: 0, current:
    0; changing to 1. This may result in incorrect timestamps in the output file.
    [mp4 @ 00bad520] Non-monotonous DTS in output stream 0:0; previous: 1, current:
    0; changing to 2. This may result in incorrect timestamps in the output file.
    frame=   30 fps=0.0 q=-1.0 size=      63kB time=00:00:02.33 bitrate= 220.4kbits/
    frame=   36 fps= 31 q=-1.0 size=      68kB time=00:00:02.95 bitrate= 187.9kbits/
    frame=   42 fps= 24 q=-1.0 size=      73kB time=00:00:03.52 bitrate= 169.6kbits/
    frame=   47 fps= 20 q=-1.0 size=      90kB time=00:00:04.10 bitrate= 178.9kbits/
    frame=   53 fps= 19 q=-1.0 size=      95kB time=00:00:04.63 bitrate= 167.2kbits/
    22222222222222222222222222222222222222222222222222
    33333333333333333333333333333333333333333333333333

    D:\wyx\workspace\python\ffrstp>frame=   58 fps= 17 q=-1.0 size=      99kB time=0
    frame=   64 fps= 16 q=-1.0 size=     104kB time=00:00:05.72 bitrate= 149.0kbits/
    frame=   70 fps= 15 q=-1.0 size=     122kB time=00:00:06.36 bitrate= 156.7kbits/
    frame=   76 fps= 15 q=-1.0 size=     127kB time=00:00:06.92 bitrate= 150.2kbits/
    frame=   82 fps= 14 q=-1.0 size=     132kB time=00:00:07.55 bitrate= 143.2kbits/
    [rtsp @ 00adb3e0] max delay reached. need to consume packet
    [NULL @ 00add8c0] RTP: missed 7 packets
    frame=   86 fps= 13 q=-1.0 size=     135kB time=00:00:07.95 bitrate= 139.5kbits/
    [rtsp @ 00adb3e0] max delay reached. need to consume packet
    [NULL @ 00add8c0] RTP: missed 3 packets

    Maybe ffmpeg reconnects. Can I stop it the way 'Ctrl+C' does?
    When I press 'Ctrl+C', the result is:

    22222222222222222222222222222222222222222222222222
    33333333333333333333333333333333333333333333333333

    D:\wyx\workspace\python\ffrstp>frame=   58 fps= 17 q=-1.0 size=      99kB time=0
    frame=   64 fps= 16 q=-1.0 size=     104kB time=00:00:05.72 bitrate= 149.0kbits/
    frame=   70 fps= 15 q=-1.0 size=     122kB time=00:00:06.36 bitrate= 156.7kbits/
    frame=   76 fps= 15 q=-1.0 size=     127kB time=00:00:06.92 bitrate= 150.2kbits/
    frame=   82 fps= 14 q=-1.0 size=     132kB time=00:00:07.55 bitrate= 143.2kbits/
    [rtsp @ 00adb3e0] max delay reached. need to consume packet
    [NULL @ 00add8c0] RTP: missed 7 packets
    frame=   86 fps= 13 q=-1.0 size=     135kB time=00:00:07.95 bitrate= 139.5kbits/
    [rtsp @ 00adb3e0] max delay reached. need to consume packet
    [NULL @ 00add8c0] RTP: missed 3 packets
    [rtsp @ 00adb3e0] max delay reached. need to consume packet
    [NULL @ 00add8c0] RTP: missed 1 packets
    frame=   89 fps= 13 q=-1.0 size=     138kB time=00:00:08.35 bitrate= 135.3kbits/
    [rtsp @ 00adb3e0] max delay reached. need to consume packet
    [NULL @ 00add8c0] RTP: missed 1 packets
    [rtsp @ 00adb3e0] max delay reached. need to consume packet
    [NULL @ 00add8c0] RTP: missed 3 packets
    [rtsp @ 00adb3e0] max delay reached. need to consume packet
    [NULL @ 00add8c0] RTP: missed 5 packets
    [rtsp @ 00adb3e0] max delay reached. need to consume packet
    [NULL @ 00add8c0] RTP: missed 3 packets
    [rtsp @ 00adb3e0] max delay reached. need to consume packet
    [NULL @ 00add8c0] RTP: missed 1 packets
    frame=   92 fps= 12 q=-1.0 size=     144kB time=00:00:09.15 bitrate= 128.7kbits/
    frame=   93 fps= 11 q=-1.0 size=     145kB time=00:00:09.58 bitrate= 123.8kbits/
    [rtsp @ 00adb3e0] max delay reached. need to consume packet
    [NULL @ 00add8c0] RTP: missed 1 packets
    [rtsp @ 00adb3e0] max delay reached. need to consume packet
    [NULL @ 00add8c0] RTP: missed 2 packets


    D:\wyx\workspace\python\ffrstp>[rtsp @ 00adb3e0] max delay reached. need to cons
    ume packet
    [NULL @ 00add8c0] RTP: missed 12 packets
    frame=   96 fps= 11 q=-1.0 size=     148kB time=00:00:10.43 bitrate= 116.2kbits/
    frame=   96 fps=9.2 q=-1.0 Lsize=     151kB time=00:00:10.43 bitrate= 118.3kbits
    /s
    video:148kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing o
    verhead: 1.913398%
    Exiting normally, received signal 2.
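
    A likely reason the second attempt still fails: with shell=True on Windows, proc refers to the shell (cmd.exe) rather than to ffmpeg, so proc.terminate() kills the shell and leaves ffmpeg recording, which matches the frames that keep printing after the '3' line. Below is a sketch (Python 3; hedged, not tested against this exact stream) that launches the program directly and stops it after a deadline:

```python
import subprocess
import sys
import time

def run_for(cmd, seconds, quit_key=None):
    # Launch the program directly (list argument, no shell=True),
    # so proc is the program itself and terminate() reaches it.
    proc = subprocess.Popen(cmd, stdin=subprocess.PIPE)
    time.sleep(seconds)
    try:
        if quit_key is not None:
            # ffmpeg exits cleanly when it reads 'q' on stdin,
            # so the MP4 index gets written properly.
            proc.communicate(input=quit_key, timeout=5)
        else:
            proc.terminate()
            proc.wait(timeout=5)
    except subprocess.TimeoutExpired:
        proc.kill()  # last resort
        proc.wait()
    return proc.returncode

# For the question's stream this would be (not run here):
# run_for(['ffmpeg', '-i',
#          'rtsp://218.204.223.237:554/live/1/66251FC11353191F/e7ooqwcfbqjoo80j.sdp',
#          '-c', 'copy', 'dump.mp4'], 12, quit_key=b'q')

# Demonstrated with a stand-in long-running process:
rc = run_for([sys.executable, '-c', 'import time; time.sleep(60)'], 1)
```

    The original multiprocessing version fails for a similar reason: p.terminate() kills the Python worker process, but the ffmpeg started by os.system() inside it is a separate process and keeps running.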
  • How to use av_free, av_frame_free, and av_free_packet to avoid memory leaks in ffmpeg?

    22 September 2015, by Liu Yiteng

    I have written code to decode and display video frames with ffmpeg and OpenCV. The video plays successfully, but I noticed the memory usage is too high (>500 MB).

    I believe I have called av_free and av_frame_free after decoding every frame.
    Can anybody help me and point out what is wrong?

    Code:

    main.cpp:

           #include <opencv2/opencv.hpp> // bracketed header name lost in formatting; OpenCV header assumed from cv::Mat/imshow use
           #include <iostream>
           #include <string>
           #include "AvCodecContainer.h"
           #include <thread>
           #include <chrono>
           #define _CRTDBG_MAP_ALLOC
           #include <crtdbg.h> // bracketed header name lost in formatting; crtdbg.h assumed from _CrtDumpMemoryLeaks
           using namespace cv;
           using namespace std;
           int main()
           {
               Mat* img;
               AvCodecContainer* avc = AvCodecContainer::getInstance();
               const char* file = "C:\\Users\\Shinelon\\Documents\\Visual Studio 2015\\Projects\\opencv + ffmpegtest\\Debug\\test.mp4";
               avc->preparefile("D:\\test.mp4");
               int framenum = 0;
               for (;;)
               {
                   int res;
                   res = avc->tryNextFrame();
                   if (res == 0)
                   {
                       decode_frame frame = avc->getCurrentFrame();
                       if (frame.data == nullptr)
                           continue;
                       cv::Mat* pMat = new cv::Mat();
                       pMat->create(cv::Size(frame.width, frame.height), CV_8UC3);
                       memcpy(pMat->data, frame.data, frame.width * frame.height * 3);
                       if (pMat->empty())
                       {
                    cout << "error";
                           return -1;
                       }
                       imshow("cvVideo", *pMat);
                       framenum++;
                       printf("Frame %d, time = %f s\n", framenum, framenum * avc->interval);
                       cv::waitKey((int)(avc->interval*1000));
                       pMat->release();

                   }
                   else
                   {
                       printf("end");
                       _CrtDumpMemoryLeaks();
                       waitKey(100000);
                       return 0;
                   }
               }
               return 0;
           }
    AvCodecContainer.cpp:
    #include "AvCodecContainer.h"
    bool AvCodecContainer::mAV_Registered = false;
    AvCodecContainer* AvCodecContainer::mInstance = NULL;
    AvCodecContainer* AvCodecContainer::getInstance()
    {
     if (!mAV_Registered)
     {
         av_register_all();
         mAV_Registered = true;
     }
     if (mInstance == nullptr)
         mInstance = new AvCodecContainer();
     return mInstance;
    }
    int AvCodecContainer::preparefile(char* filename)
    {

     // Open the video file
     mAV_Prepare_success = false;
     int result;
     result = avformat_open_input(&pFormatCtx, filename, NULL, NULL);
     if (result < 0)
     {
         char s[AV_ERROR_MAX_STRING_SIZE];
         av_make_error_string(s, AV_ERROR_MAX_STRING_SIZE, result);
         printf("%s", s);
         return result;
     }

     // Retrieve the stream information
     result = avformat_find_stream_info(pFormatCtx, NULL);
     if (result < 0)
     {
         char s[AV_ERROR_MAX_STRING_SIZE];
         av_make_error_string(s, AV_ERROR_MAX_STRING_SIZE, result);
         printf("%s", s);
         return result;
     }
     //handle_error(); // stream information could not be found

     // Walk the file's streams, find the first video stream, and record its codec information
     int videoindex = -1;
     for (int i = 0; i < pFormatCtx->nb_streams; i++)
     {
         if (pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
         {
             double fps = av_q2d(pFormatCtx->streams[i]->r_frame_rate);
             interval = 1.0 / fps;
             videoindex = i;
             break;
         }
     }
     if (videoindex == -1)
     {
         printf("Didn't find a video stream.\n");
         return videoindex;
     }
     this->videoIndex = videoindex;
     pCodecCtx = pFormatCtx->streams[videoindex]->codec;


     pCodec = avcodec_find_decoder(pCodecCtx->codec_id);
     if (pCodec == NULL)
         return -9999;
     //  handle_error(); // decoder not found

     result = avcodec_open2(pCodecCtx, pCodec, NULL); // open the decoder
     if (result >= 0)
     {
         mAV_Prepare_success = true;
         // Print some information -----------------------------
         printf("File information -----------------------------------------\n");
         av_dump_format(pFormatCtx, 0, filename, 0);
         // av_dump_format is just a debugging helper that prints the file's basic audio/video stream info: frame rate, resolution, audio sampling, etc.
         printf("-------------------------------------------------\n");
     }
     return result;
    }
    int AvCodecContainer::reset()
    {
     return 0;
    }
    decode_frame AvCodecContainer::getCurrentFrame()
    {
     int ret;
     int got_picture;
     // Decode one frame of data via the API below and store the valid image data in the pAvFrame member
     if (packet->stream_index == videoIndex)
     {

         pAvFrame = av_frame_alloc();
         ret = avcodec_decode_video2(pCodecCtx, pAvFrame, &got_picture, packet);
         // Convert the YUV420p colour encoding to BGR
         if (!got_picture)
         {
             av_free_packet(packet);
             av_frame_unref(pAvFrame);
             av_frame_free(&pAvFrame);
             packet = nullptr;
             pAvFrame = nullptr;
             return decode_frame(nullptr, -1, -1);
         }
         else if (colorContext == nullptr)
             colorContext = sws_getContext(pCodecCtx->width, pCodecCtx->height,
                 pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height,
                 AV_PIX_FMT_BGR24, SWS_BICUBIC, NULL, NULL, NULL);

         // Then allocate memory for the BGR-format frame

         AVFrame *pFrameRGB = NULL;
         uint8_t  *out_bufferRGB = NULL;
         pFrameRGB = av_frame_alloc();
         // Attach the allocated memory to the pFrameRGB frame;
         int size = avpicture_get_size(AV_PIX_FMT_BGR24, pCodecCtx->width, pCodecCtx->height);
         out_bufferRGB = new uint8_t[size];
         avpicture_fill((AVPicture *)pFrameRGB, out_bufferRGB, AV_PIX_FMT_BGR24, pCodecCtx->width, pCodecCtx->height);
         // Finally perform the conversion
         sws_scale(colorContext, pAvFrame->data, pAvFrame->linesize, 0, pCodecCtx->height, pFrameRGB->data, pFrameRGB->linesize);
         av_frame_unref(pAvFrame);
         av_frame_unref(pFrameRGB);
         av_frame_free(&pFrameRGB);
         av_frame_free(&pAvFrame);
         av_free_packet(packet);
         pFrameRGB = nullptr;
         pAvFrame = nullptr;
         packet = nullptr;
         return decode_frame(out_bufferRGB, pCodecCtx->width, pCodecCtx->height);
     }
    }
    bool AvCodecContainer::tryNextFrame()
    {
     // Allocate a frame pointer that will point to the decoded raw frame
     y_size = pCodecCtx->width * pCodecCtx->height;
     // Allocate frame memory
     packet = (AVPacket *)av_malloc(sizeof(AVPacket));
     av_new_packet(packet, y_size);
     int ret = av_read_frame(pFormatCtx, packet);
     if (ret != 0)
     {
         av_free_packet(packet);
         packet = nullptr;
     }
     return ret;
    }
    AvCodecContainer::AvCodecContainer()
    {

    }
    AvCodecContainer::~AvCodecContainer()
    {

    }