Other articles (41)

  • Keeping control of your media in your hands

    13 April 2011, by

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and to the companies that profit from media sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

  • The SPIPmotion queue

    28 November 2010, by

    A queue stored in the database
    When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
    This new table is made up of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to be processed; id_document, the numeric identifier of the original document to be encoded; id_objet, the unique identifier of the object to which the encoded document should automatically be attached; objet, the type of object to which (...)

  • Multilang: improving the interface for multilingual blocks

    18 February 2011, by

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
    Once it has been activated, a preconfiguration is automatically set up by MediaSPIP init, so the new feature works right away. No configuration step is therefore required.

On other sites (5602)

  • How to open a live stream like "p2p://207.238.82.38:9916/51aea8370002dc6ce83e43c11c6234f6"

    6 March 2017, by disco.liu

    I have an Android player app that uses ijkplayer as its core engine; of course, the native part is backed by FFmpeg. Now I want to support a p2p protocol (like p2p://207.238.82.38:9916/51aea8370002dc6ce83e43c11c6234f6) in my app. Who has a good idea? Thanks

  • How to overlay 2 videos (one main, the second one overlaying it) and play their sound simultaneously, using FFmpeg in Android Studio

    6 August 2020, by Dusan Lilic

    As the title says, I'm trying to overlay 2 videos and play their sound simultaneously. So far I managed to put one video over another using this command:

    String[] command = {"-i", mainVideoPath, "-vf",
            "movie=" + overlayVideo + ", scale=300:-1[inner]; [in][inner]overlay=10:10[out]" ,combinedVideoOutput};

    and this works, but I have 3 problems here.
First, the overlay video is rotated by 90 degrees. Second, audio is played only from the main video (I want to play sound from both videos simultaneously). And third, the overlay video is longer than the main video (for example, overlayVideo lasts 10 seconds while mainVideo lasts only 7), so I want the final video to last only as long as mainVideo: as soon as mainVideo finishes, overlayVideo should stop too (it probably needs to be cut?).

    String[] command = {"-i", mainVideoPath, "-i", overlayVideo ,
            "-filter_complex", "[1:v][0:v]scale2ref=(256/256)*ih/8/sar:ih/8[wm][base];[base][wm]overlay=10:10" ,combinedVideoOutput};

    Using this command I have the same 2 problems as above, except that the video is not rotated here.
I have to say that I'm not very familiar with ffmpeg commands. I was trying to figure it out from the documentation (link to documentation) but without any success.
I know that I'm missing some filter like -map or a merge, but I can't figure it out.
Thanks in advance!

    EDIT 1:
This is the logcat from the second command, as asked:

    D/LISKO: ffmpeg version n4.0-39-gda39990 Copyright (c) 2000-2018 the FFmpeg developers
      built with gcc 4.9.x (GCC) 20150123 (prerelease)
D/LISKO:   configuration: --target-os=linux --cross-prefix=/root/bravobit/ffmpeg-android/toolchain-android/bin/arm-linux-androideabi- --arch=arm --cpu=cortex-a8 --enable-runtime-cpudetect --sysroot=/root/bravobit/ffmpeg-android/toolchain-android/sysroot --enable-pic --enable-libx264 --enable-ffprobe --enable-libopus --enable-libvorbis --enable-libfdk-aac --enable-libfreetype --enable-libfribidi --enable-libmp3lame --enable-fontconfig --enable-libvpx --enable-libass --enable-yasm --enable-pthreads --disable-debug --enable-version3 --enable-hardcoded-tables --disable-ffplay --disable-linux-perf --disable-doc --disable-shared --enable-static --enable-runtime-cpudetect --enable-nonfree --enable-network --enable-avresample --enable-avformat --enable-avcodec --enable-indev=lavfi --enable-hwaccels --enable-ffmpeg --enable-zlib --enable-gpl --enable-small --enable-nonfree --pkg-config=pkg-config --pkg-config-flags=--static --prefix=/root/bravobit/ffmpeg-android/build/armeabi-v7a --extra-cflags='-I/root/bravobit/ffmpeg-android/toolchain-android/include -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -fno-strict-overflow -fstack-protector-all' --extra-ldflags='-L/root/bravobit/ffmpeg-android/toolchain-android/lib -Wl,-z,relro -Wl,-z,now -pie' --extra-cxxflags=
D/LISKO:   libavutil      56. 14.100 / 56. 14.100
      libavcodec     58. 18.100 / 58. 18.100
      libavformat    58. 12.100 / 58. 12.100
      libavdevice    58.  3.100 / 58.  3.100
      libavfilter     7. 16.100 /  7. 16.100
D/LISKO:   libavresample   4.  0.  0 /  4.  0.  0
      libswscale      5.  1.100 /  5.  1.100
D/LISKO:   libswresample   3.  1.100 /  3.  1.100
      libpostproc    55.  1.100 / 55.  1.100
D/LISKO: Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/storage/emulated/0/mainVideo.mp4':
      Metadata:
        major_brand     : iso6
        minor_version   : 1
        compatible_brands: mp42iso6avc1isom
        creation_time   : 2020-08-03T13:20:11.000000Z
      Duration: 00:00:07.04, start: 0.000000, bitrate: 1380 kb/s
        Stream #0:0(und): Audio: aac (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 140 kb/s (default)
        Metadata:
          creation_time   : 2020-07-28T08:11:36.000000Z
        Stream #0:1(und): Video: h264 (avc1 / 0x31637661), yuv420p(tv, bt709), 1280x720 [SAR 81:256 DAR 9:16], 1264 kb/s, 30 fps, 30 tbr, 90k tbn, 180k tbc (default)
        Metadata:
          creation_time   : 2020-07-28T08:11:36.000000Z
D/LISKO: Input #1, mov,mp4,m4a,3gp,3g2,mj2, from '/storage/emulated/0/overlayVideo.mp4':
      Metadata:
        major_brand     : mp42
        minor_version   : 0
        compatible_brands: isommp42
        creation_time   : 2020-08-04T07:27:47.000000Z
        com.android.version: 10
      Duration: 00:00:11.19, start: 0.000000, bitrate: 9993 kb/s
D/LISKO:     Stream #1:0(eng): Video: h264 (avc1 / 0x31637661), yuv420p(tv, bt709), 1280x720, 9238 kb/s, SAR 1:1 DAR 16:9, 28.38 fps, 29.75 tbr, 90k tbn, 180k tbc (default)
        Metadata:
          rotate          : 270
          creation_time   : 2020-08-04T07:27:47.000000Z
          handler_name    : VideoHandle
        Side data:
          displaymatrix: rotation of 90.00 degrees
        Stream #1:1(eng): Audio: aac (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 192 kb/s (default)
        Metadata:
          creation_time   : 2020-08-04T07:27:47.000000Z
          handler_name    : SoundHandle
    Stream mapping:
      Stream #0:1 (h264) -> scale2ref:ref (graph 0)
      Stream #1:0 (h264) -> scale2ref:default (graph 0)
      overlay (graph 0) -> Stream #0:0 (libx264)
      Stream #0:0 -> #0:1 (aac (native) -> aac (native))
    Press [q] to stop, [?] for help
D/LISKO: [libx264 @ 0xee986100] using SAR=81/256
D/LISKO: [libx264 @ 0xee986100] using cpu capabilities: ARMv6 NEON
    [libx264 @ 0xee986100] profile High, level 3.1
D/LISKO: [libx264 @ 0xee986100] 264 - core 152 r2851M ba24899 - H.264/MPEG-4 AVC codec - Copyleft 2003-2017 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=12 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
D/LISKO: Output #0, mp4, to '/storage/emulated/0/outputVideo.mp4':
D/LISKO:   Metadata:
        major_brand     : iso6
D/LISKO:     minor_version   : 1
D/LISKO:     compatible_brands: mp42iso6avc1isom
D/LISKO:     encoder         : Lavf58.12.100
        Stream #0:0: Video: h264 (libx264) (avc1 / 0x31637661), yuv420p, 1280x720 [SAR 81:256 DAR 9:16], q=-1--1, 30 fps, 15360 tbn, 30 tbc (default)
D/LISKO:     Metadata:
          encoder         : Lavc58.18.100 libx264
        Side data:
          cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
D/LISKO:     Stream #0:1(und): Audio: aac (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 128 kb/s (default)
        Metadata:
          creation_time   : 2020-07-28T08:11:36.000000Z
D/LISKO:       encoder         : Lavc58.18.100 aac
D/LISKO: frame=   26 fps=0.0 q=0.0 size=       0kB time=00:00:00.09 bitrate=   4.1kbits/s dup=2 drop=0 speed=0.185x    
D/LISKO: frame=   41 fps= 41 q=0.0 size=       0kB time=00:00:00.58 bitrate=   0.7kbits/s dup=2 drop=0 speed=0.574x    
D/LISKO: frame=   49 fps= 32 q=0.0 size=       0kB time=00:00:00.92 bitrate=   0.4kbits/s dup=2 drop=0 speed=0.613x    
D/LISKO: frame=   59 fps= 29 q=29.0 size=       0kB time=00:00:01.97 bitrate=   0.2kbits/s dup=2 drop=0 speed=0.974x    
D/LISKO: frame=   75 fps= 29 q=29.0 size=       0kB time=00:00:01.97 bitrate=   0.2kbits/s dup=2 drop=0 speed=0.762x 

    EDIT 2:
After adding "-shortest" to the command, I managed to cut the overlay video to the same length as the main video (the overlay video is always longer than mainVideo, and "-shortest" keeps the shorter duration). So now the command looks like this:

        String[] command = {"-i", mainVideoPath, "-i", overlayVideo ,"-filter_complex", 
"[1:v][0:v]scale2ref=(256/256)*ih/8/sar:ih/8[wm][base];[base][wm]overlay=10:10", "-shortest", combinedVideoOutput};

    Rotation is good, so I only need to merge their audios. For now, only mainVideo's audio is playing; the overlay video's audio isn't.

    EDIT 3:

       String[] command = {"-i", mainVideoPath, "-i", overlayVideo ,
            "-strict", "experimental",
            "-filter_complex",
            "[1:v][0:v]scale2ref=(256/256)*ih/8/sar:ih/8[wm][base];" +
                    "[base][wm]overlay=10:10; " +
                    "pan=stereo|c0=2*c0|c1=3*c0[a0];[1:a]pan=stereo|c0=1*c0|c1=4*c0[a1];[a0][a1]amix=inputs=2:duration=first:dropout_transition=2",
            "-shortest" ,combinedVideoOutput};

    With this command I managed to overlay the videos and play sound from both of them, and the rotation is good, but "-shortest" doesn't work now, presumably because "-shortest" only truncates at the output stage while the overlay filter itself keeps running to the end of its longer input. So the only remaining problem is making the result last as long as the shorter input (mainVideo is always the shorter one).

    EDIT 4:

    This is the finally working command (overlay=10:10:shortest=1 makes the overlay terminate with the shorter input, and amix=inputs=2:duration=first makes the mixed audio follow the first input's duration):

            String[] command = {"-i", mainVideoPath, "-i", overlayVideo,
            "-filter_complex",
            "[1:v][0:v]scale2ref=(256/256)*ih/8/sar:ih/8[wm][base];" +
                    "[base][wm]overlay=10:10:shortest=1;" +
                    "pan=stereo|c0=2*c0|c1=3*c0[a0];[1:a]pan=stereo|c0=1*c0|c1=4*c0[a1];" +
                    "[a0][a1]amix=inputs=2:duration=first:dropout_transition=2",
            combinedVideoOutput};

    Thanks

  • Error using FFMPEG to convert each input image into H264, compiling in Visual Studio, running in MevisLab

    21 February 2014, by user3012914

    I am creating an ML Module in the MevisLab Framework. I am using FFMPEG to convert each image I get into an H264 video and save it after I get all the frames. But unfortunately I have a problem allocating the output buffer size: the application crashes when I include this in my code, and if I don't include it, the output file size is just 4 KB and nothing is stored in it.

    I am also not very sure whether this is the correct way of getting the HBitmap into the encoder. It would be great to have your suggestions.

    My code:

    BITMAPINFO bitmapInfo;
               HDC        hdc;

               ZeroMemory(&bitmapInfo, sizeof(bitmapInfo));

               BITMAPINFOHEADER &bitmapInfoHeader = bitmapInfo.bmiHeader;
               bitmapInfoHeader.biSize            = sizeof(bitmapInfoHeader);
               bitmapInfoHeader.biWidth           = _imgWidth;
               bitmapInfoHeader.biHeight          = _imgHeight;
               bitmapInfoHeader.biPlanes          =  1;
               bitmapInfoHeader.biBitCount        = 24;
               bitmapInfoHeader.biCompression     = BI_RGB;
               bitmapInfoHeader.biSizeImage       = ((bitmapInfoHeader.biWidth * bitmapInfoHeader.biBitCount / 8 + 3) & 0xFFFFFFFC) * bitmapInfoHeader.biHeight;
               bitmapInfoHeader.biXPelsPerMeter   = 10000;
               bitmapInfoHeader.biYPelsPerMeter   = 10000;
               bitmapInfoHeader.biClrUsed         = 0;
               bitmapInfoHeader.biClrImportant    = 0;
               //RGBQUAD* Ref = new RGBQUAD[_imgWidth,_imgHeight];
               HDC hdcscreen = GetDC(0);

               hdc = CreateCompatibleDC(hdcscreen);
               ReleaseDC(0, hdcscreen);

               _hbitmap = CreateDIBSection(hdc, (BITMAPINFO*) &bitmapInfoHeader, DIB_RGB_COLORS, &_bits, NULL, NULL);

    To get the bitmap I use the code above. Then I allocate the codec context as follows:

    c->bit_rate = 400000;
                   // resolution must be a multiple of two
                   c->width = 1920;
                   c->height = 1080;
                   // frames per second
                   frame_rate = _framesPerSecondFld->getIntValue();
                   //AVRational rational = {1,10};
                   //c->time_base = (AVRational){1,25};
                    //c->time_base = (AVRational){1,25};
                    c->gop_size = 10; // emit one intra frame every ten frames
                    c->max_b_frames = 1;
                    c->keyint_min = 1;   //minimum GOP size
                    c->time_base.num = 1;                                  // framerate numerator
                    c->time_base.den = _framesPerSecondFld->getIntValue();
                    c->i_quant_factor = (float)0.71;                        // qscale factor between P and I frames
                    c->pix_fmt = AV_PIX_FMT_RGB32;
                    std::string msg;
                    msg.append("Context is stored");
                    _messageFld->setStringValue(msg.c_str());
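
    One detail worth flagging in this setup: the libx264 encoder does not accept AV_PIX_FMT_RGB32, so if c belongs to an H.264 encoder, opening the codec with this pix_fmt should already fail before any encoding happens. Since the frames are converted to YUV420P with sws_scale later anyway, the context should advertise that format. A minimal sketch of the setup using the same old (pre-3.0) FFmpeg API as the post, where codec is assumed to come from avcodec_find_encoder(AV_CODEC_ID_H264):

    AVCodecContext* c = avcodec_alloc_context3(codec);
    c->bit_rate = 400000;
    c->width  = 1920;                     // must be a multiple of two
    c->height = 1080;
    c->time_base.num = 1;                 // time base is 1 / fps
    c->time_base.den = _framesPerSecondFld->getIntValue();
    c->gop_size = 10;                     // one intra frame every ten frames
    c->max_b_frames = 1;
    c->pix_fmt = AV_PIX_FMT_YUV420P;      // the format libx264 consumes, and the sws_scale target below

    if (avcodec_open2(c, codec, NULL) < 0)
    {
        _messageFld->setStringValue("Could not open codec");
    }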

    I create the bitmap image from the input as follows:

    PagedImage *inImg = getUpdatedInputImage(0);
           ML_CHECK(inImg);
           ImageVector imgExt = inImg->getImageExtent();
           if ((imgExt.x == _imgWidth) && (imgExt.y == _imgHeight))
           {
           if (((imgExt.x % 4)==0) && ((imgExt.y % 4) == 0))
           {
                    // read out input image and write output image into video
                   // get input image as an array
                   void* imgData = NULL;
                   SubImageBox imageBox(imgExt); // get the whole image
                   getTile(inImg, imageBox, MLuint8Type, &imgData);
                   iData = (MLuint8*)imgData;
                   int r = 0; int g = 0;int  b = 0;
                   // since we have only images with
                   // a z-ext of 1, we can compute the c stride as follows
                   int cStride = _imgWidth * _imgHeight;
                   int offset  = 0;      // int, not uint8_t: the row offset exceeds 255 for any real image size
                   // pointer into the bitmap that is
                   // used to write images into the avi
                   UCHAR* dst = (UCHAR*)_bits;
                   for (int y = _imgHeight-1; y >= 0; y--)
                   { // reversely scan the image. if y-rows of DIB are set in normal order, no compression will be available.
                       offset = _imgWidth * y;
                       for (int x = 0; x < _imgWidth; x++)
                       {
                           if (_isGreyValueImage)
                           {
                               r = iData[offset + x];
                               *dst++ = (UCHAR)r;
                               *dst++ = (UCHAR)r;
                               *dst++ = (UCHAR)r;
                           }
                           else
                           {
                               b = iData[offset + x]; // windows bitmap need reverse order: bgr instead of rgb
                               g = iData[offset + x + cStride          ];
                               r = iData[offset + x + cStride + cStride];

                               *dst++ = (UCHAR)r;
                               *dst++ = (UCHAR)g;
                               *dst++ = (UCHAR)b;
                           }
                           // alpha channel in input image is ignored
                       }
                   }
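
    A side note on this copy: a 24-bit DIB pads each row to a multiple of 4 bytes, and the ((imgExt.x % 4) == 0) check above happens to make 3 * width already 4-byte aligned, which is presumably why the unpadded, purely sequential copy works. A general-width version would have to advance through the bitmap by the padded stride per row, roughly like this (a sketch, reusing the post's variables):

    const int dibStride = ((_imgWidth * 3) + 3) & ~3;   // DIB rows are padded to 4 bytes
    UCHAR* row = (UCHAR*)_bits;
    for (int y = _imgHeight - 1; y >= 0; y--, row += dibStride)
    {
        UCHAR* dst = row;
        // ... fill dst[0 .. 3*_imgWidth) with B, G, R for image row y, as in the loop above ...
    }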

    Then I feed it to the encoder and write it out as H264, as follows:

    in_width   = c->width;
                    in_height  = c->height;
                    out_width  = c->width;
                    out_height = c->height;
                    ibytes = avpicture_get_size(PIX_FMT_BGR32, in_width, in_height);
                    obytes = avpicture_get_size(PIX_FMT_YUV420P, out_width, out_height);
                    outbuf_size = 100000 + c->width*c->height*(32>>3);      // allocate output buffer
                    outbuf = static_cast<uint8_t*>(malloc(outbuf_size));

                    if(!obytes)
                    {
                        std::string msg;
                        msg.append("Bytes cannot be allocated");
                        _messageFld->setStringValue(msg.c_str());
                    }
                    else
                    {
                        std::string msg;
                        msg.append("Bytes allocation done");
                        _messageFld->setStringValue(msg.c_str());
                    }
                    //create buffer for the output image
                    inbuffer  =  (uint8_t*)av_malloc(ibytes);
                    outbuffer =  (uint8_t*)av_malloc(obytes);
                    inbuffer  =  (uint8_t*)dst;

                    //create ffmpeg frame structures.  These do not allocate space for image data,
                    //just the pointers and other information about the image.
                    AVFrame* inpic = avcodec_alloc_frame();
                    AVFrame* outpic = avcodec_alloc_frame();

                    //this will set the pointers in the frame structures to the right points in
                    //the input and output buffers.
                    avpicture_fill((AVPicture*)inpic, inbuffer, PIX_FMT_BGR32, in_width, in_height);
                    avpicture_fill((AVPicture*)outpic, outbuffer, PIX_FMT_YUV420P, out_width, out_height);
                    av_image_alloc(outpic->data, outpic->linesize, c->width, c->height, c->pix_fmt, 1);
                    inpic->data[0] += inpic->linesize[0]*(_imgHeight-1);                                                      // flipping frame
                    inpic->linesize[0] = -inpic->linesize[0];    

                    if(!inpic)
                    {
                        std::string msg;
                        msg.append("Image is empty");
                        _messageFld->setStringValue(msg.c_str());
                    }
                    else
                    {
                        std::string msg;
                        msg.append("Picture has allocations");
                        _messageFld->setStringValue(msg.c_str());
                    }

                    //create the conversion context
                    fooContext = sws_getContext(in_width, in_height, PIX_FMT_BGR32, out_width, out_height, PIX_FMT_YUV420P, SWS_FAST_BILINEAR, NULL, NULL, NULL);
                    //perform the conversion
                    sws_scale(fooContext, inpic->data, inpic->linesize, 0, in_height, outpic->data, outpic->linesize);
                    //out_size = avcodec_encode_video(c, outbuf,outbuf_size, outpic);
                    if(!out_size)
                    {
                        std::string msg;
                        msg.append("Outsize is not valid");
                        _messageFld->setStringValue(msg.c_str());
                    }
                    else
                    {
                        std::string msg;
                        msg.append("Outsize is valid");
                        _messageFld->setStringValue(msg.c_str());
                    }
                        fwrite(outbuf, 1, out_size, f);
                        if(!fwrite)
                    {
                        std::string msg;
                        msg.append("Frames couldnt be written");
                        _messageFld->setStringValue(msg.c_str());
                    }
                    else
                    {
                        std::string msg;
                        msg.append("Frames written to the file");
                        _messageFld->setStringValue(msg.c_str());
                    }
                       // for (;out_size; i++)
                       // {
                             out_size = avcodec_encode_video(c, outbuf, outbuf_size, NULL);
                             std::string msg;                      
                             msg.append("Writing Frames");
                             _messageFld->setStringValue(msg.c_str());// encode the delayed frames
                             _numFramesFld->setIntValue(_numFramesFld->getIntValue()+1);
                             fwrite(outbuf, 1, out_size, f);
                       // }
                        outbuf[0] = 0x00;
                        outbuf[1] = 0x00;                                                                                               // add sequence end code to have a real mpeg file
                        outbuf[2] = 0x01;
                        outbuf[3] = 0xb7;
                        fwrite(outbuf, 1, 4, f);
    }
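
    Two things in this block deserve a second look: with the real avcodec_encode_video call commented out, out_size is read uninitialized, and if(!fwrite) tests the address of the fwrite function (which is never null) rather than its return value. Also, inbuffer is first av_malloc'ed and then immediately reassigned to dst, which by that point points past the end of the DIB, and the DIB holds 24-bit pixels while avpicture_fill is told to expect 4-byte PIX_FMT_BGR32. A hedged sketch of what the per-frame step is presumably meant to look like, with the same old API and names as the post:

    int out_size = avcodec_encode_video(c, outbuf, outbuf_size, outpic);   // bytes produced, < 0 on error
    if (out_size < 0)
    {
        _messageFld->setStringValue("Encoding failed");
    }
    else if (out_size > 0)   // the encoder may buffer frames and emit nothing yet
    {
        if (fwrite(outbuf, 1, out_size, f) != (size_t)out_size)
        {
            _messageFld->setStringValue("Frames couldnt be written");
        }
    }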

    Then I close and clean up the image buffer and file:

     ML_TRACE_IN("MovieCreator::_endRecording()")
    if (_numFramesFld->getIntValue() == 0)
    {
       _messageFld->setStringValue("Empty movie, nothing saved.");
    }
    else
    {
       _messageFld->setStringValue("Movie written to disk.");
       _numFramesFld->setIntValue(0);
    if (_hbitmap)
    {
       DeleteObject(_hbitmap);
    }
    if (c != NULL)
    {
          av_free(outbuffer);    
          av_free(inpic);
          av_free(outpic);
          fclose(f);
          avcodec_close(c);                                                                                               // freeing memory
          free(outbuf);
          av_free(c);
    }
    }

    }

    I think the main problem is over here!

                        //out_size = avcodec_encode_video(c, outbuf,outbuf_size, outpic);
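
    For what it's worth, the allocation itself looks harmless: 100000 + width*height*4 bytes is only about 8 MB at 1920x1080, a perfectly ordinary malloc. A crash around this point is more likely to come from using the buffer with a codec context that was never successfully opened (see the pix_fmt note above), or from the mismatched inbuffer/BGR32 setup. With the call restored, the usual end-of-stream pattern in this old API is to drain the encoder with NULL frames until it returns 0 and then append the sequence end code, a sketch with the post's names:

    out_size = avcodec_encode_video(c, outbuf, outbuf_size, outpic);   // one call per input frame
    if (out_size > 0)
        fwrite(outbuf, 1, out_size, f);

    for (;;)   // after the last input frame: flush the delayed frames
    {
        out_size = avcodec_encode_video(c, outbuf, outbuf_size, NULL);
        if (out_size <= 0)
            break;
        fwrite(outbuf, 1, out_size, f);
    }

    uint8_t endcode[] = { 0x00, 0x00, 0x01, 0xb7 };   // sequence end code, as in the post
    fwrite(endcode, 1, sizeof(endcode), f);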