
Other articles (48)

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011, by

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects / individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • Websites built with MediaSPIP

    2 May 2011, by

    This page presents some of the websites running MediaSPIP.
    You can of course add yours using the form at the bottom of the page.

On other websites (5634)

  • Having trouble compiling ffmpeg code in command terminal

    10 September 2019, by m00ncake

    I am having a bit of trouble compiling my C++ code in my terminal. I have FFmpeg installed, as shown below.

    ffmpeg version n4.1 Copyright (c) 2000-2018 the FFmpeg developers
     built with gcc 7 (Ubuntu 7.4.0-1ubuntu1~18.04.1)
     configuration: --enable-gpl --enable-version3 --disable-static --enable-shared --enable-small --enable-avisynth --enable-chromaprint --enable-frei0r --enable-gmp --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-librtmp --enable-libshine --enable-libsmbclient --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtesseract --enable-libtheora --enable-libtwolame --enable-libv4l2 --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxvid --enable-libxml2 --enable-libzmq --enable-libzvbi --enable-lv2 --enable-libmysofa --enable-openal --enable-opencl --enable-opengl --enable-libdrm
     libavutil      56. 22.100 / 56. 22.100
     libavcodec     58. 35.100 / 58. 35.100
     libavformat    58. 20.100 / 58. 20.100
     libavdevice    58.  5.100 / 58.  5.100
     libavfilter     7. 40.101 /  7. 40.101
     libswscale      5.  3.100 /  5.  3.100
     libswresample   3.  3.100 /  3.  3.100
     libpostproc    55.  3.100 / 55.  3.100
    Hyper fast Audio and Video encoder
    usage: ffmpeg [options] [[infile options] -i infile]... {[outfile options] outfile}...

    However, I run into a problem when I compile my C++ code. I am trying to get NTP timestamps from FFmpeg's private data, so I need to include its internal headers (for RTSPState and RTSPStream as well as RTPDemuxContext). I have looked into FFmpeg's libavformat folder and it does have rtpdec.h, but when I compile on the command line I get this error:

    cf.cpp:11:10: fatal error: libavformat/rtsp.h: No such file or directory
     #include <libavformat/rtpdec.h>
             ^~~~~~~~~~~~~~~~~~~~
    compilation terminated.

    This is my code:

    #include
    #include
    #include <iostream>
    #include <fstream>
    #include <sstream>

     extern "C" {
     #include <libavcodec/avcodec.h>
     #include <libavformat/avformat.h>
     #include <libavformat/avio.h>
     #include <libavformat/rtpdec.h>
     #include <libswscale/swscale.h>
     }

    int main(int argc, char** argv) {

       // Open the initial context variables that are needed
       SwsContext *img_convert_ctx;
       AVFormatContext* format_ctx = avformat_alloc_context();
       AVCodecContext* codec_ctx = NULL;
       int video_stream_index;
       uint32_t* last_rtcp_ts;
       double* base_time;
       double* time;

       // Register everything
       av_register_all();
       avformat_network_init();

       //open RTSP
       if (avformat_open_input(&format_ctx, "rtsp://admin:password@192.168.1.67:554",
               NULL, NULL) != 0) {
           return EXIT_FAILURE;
       }

       if (avformat_find_stream_info(format_ctx, NULL) < 0) {
           return EXIT_FAILURE;
       }

       //search video stream
       for (int i = 0; i < format_ctx->nb_streams; i++) {
           if (format_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
               video_stream_index = i;
       }

       AVPacket packet;
       av_init_packet(&packet);

       //open output file
       AVFormatContext* output_ctx = avformat_alloc_context();

       AVStream* stream = NULL;
       int cnt = 0;

       //start reading packets from stream and write them to file
       av_read_play(format_ctx);    //play RTSP

       // Get the codec
       AVCodec *codec = NULL;
       codec = avcodec_find_decoder(AV_CODEC_ID_H264);
       if (!codec) {
           exit(1);
       }

    // Add this to allocate the context by codec
    codec_ctx = avcodec_alloc_context3(codec);

    avcodec_get_context_defaults3(codec_ctx, codec);
    avcodec_copy_context(codec_ctx, format_ctx->streams[video_stream_index]->codec);
    std::ofstream output_file;

    if (avcodec_open2(codec_ctx, codec, NULL) < 0)
       exit(1);

       img_convert_ctx = sws_getContext(codec_ctx->width, codec_ctx->height,
               codec_ctx->pix_fmt, codec_ctx->width, codec_ctx->height, AV_PIX_FMT_RGB24,
               SWS_BICUBIC, NULL, NULL, NULL);

       int size = avpicture_get_size(AV_PIX_FMT_YUV420P, codec_ctx->width,
               codec_ctx->height);
       uint8_t* picture_buffer = (uint8_t*) (av_malloc(size));
       AVFrame* picture = av_frame_alloc();
       AVFrame* picture_rgb = av_frame_alloc();
       int size2 = avpicture_get_size(AV_PIX_FMT_RGB24, codec_ctx->width,
               codec_ctx->height);
       uint8_t* picture_buffer_2 = (uint8_t*) (av_malloc(size2));
       avpicture_fill((AVPicture *) picture, picture_buffer, AV_PIX_FMT_YUV420P,
               codec_ctx->width, codec_ctx->height);
       avpicture_fill((AVPicture *) picture_rgb, picture_buffer_2, AV_PIX_FMT_RGB24,
               codec_ctx->width, codec_ctx->height);

       while (av_read_frame(format_ctx, &packet) >= 0 && cnt < 1000) { //read ~ 1000 frames

           RTSPState* rt = format_ctx->priv_data;
           RTSPStream *rtsp_stream = rt->rtsp_streams[0];
           RTPDemuxContext* rtp_demux_context = rtsp_stream->transport_priv;
           uint32_t new_rtcp_ts = rtp_demux_context->last_rtcp_timestamp;
           uint64_t last_ntp_time = 0;

           if (new_rtcp_ts != *last_rtcp_ts) {
               *last_rtcp_ts = new_rtcp_ts;
               last_ntp_time = rtp_demux_context->last_rtcp_ntp_time;
               uint32_t seconds = ((last_ntp_time >> 32) & 0xffffffff) - 2208988800;
               uint32_t fraction  = (last_ntp_time & 0xffffffff);
               double useconds = ((double) fraction / 0xffffffff);
               *base_time = seconds + useconds;
               uint32_t d_ts = rtp_demux_context->timestamp - *last_rtcp_ts;
               *time = *base_time + d_ts / 90000.0;
               std::cout << "Time is: " << *time << std::endl;
           }

           std::cout << "1 Frame: " << cnt << std::endl;
           if (packet.stream_index == video_stream_index) {    //packet is video
               std::cout << "2 Is Video" << std::endl;
               if (stream == NULL) {    //create stream in file
                   std::cout << "3 create stream" << std::endl;
                   stream = avformat_new_stream(output_ctx,
                           format_ctx->streams[video_stream_index]->codec->codec);
                   avcodec_copy_context(stream->codec,
                           format_ctx->streams[video_stream_index]->codec);
                   stream->sample_aspect_ratio =
                           format_ctx->streams[video_stream_index]->codec->sample_aspect_ratio;
               }
               int check = 0;
               packet.stream_index = stream->id;
               std::cout << "4 decoding" << std::endl;
               int result = avcodec_decode_video2(codec_ctx, picture, &check, &packet);
               std::cout << "Bytes decoded " << result << " check " << check
                       << std::endl;
               if (cnt > 100)    //cnt < 0)
                       {
                   sws_scale(img_convert_ctx, picture->data, picture->linesize, 0,
                           codec_ctx->height, picture_rgb->data, picture_rgb->linesize);
                   std::stringstream file_name;
                   file_name << "test" << cnt << ".ppm";
                   output_file.open(file_name.str().c_str());
                   output_file << "P3 " << codec_ctx->width << " " << codec_ctx->height
                           << " 255\n";
                   for (int y = 0; y < codec_ctx->height; y++) {
                       for (int x = 0; x < codec_ctx->width * 3; x++)
                           output_file
                                   << (int) (picture_rgb->data[0]
                                           + y * picture_rgb->linesize[0])[x] << " ";
                   }
                   output_file.close();
               }
               cnt++;
           }
           av_free_packet(&packet);
           av_init_packet(&packet);
       }
       av_free(picture);
       av_free(picture_rgb);
       av_free(picture_buffer);
       av_free(picture_buffer_2);

       av_read_pause(format_ctx);
       avio_close(output_ctx->pb);
       avformat_free_context(output_ctx);

       return (EXIT_SUCCESS);
    }

    The command I use to compile my code: g++ -w cf.cpp -o cf $(pkg-config --cflags --libs libavformat libswscale libavcodec)

    I am a bit new to coding with FFmpeg.
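
    rtsp.h and rtpdec.h are internal FFmpeg headers: they exist only in the FFmpeg source tree and are not installed alongside the public libavformat headers, which is why the compiler cannot find them. A minimal sketch of one way to compile against them, assuming the matching n4.1 sources are checked out at ~/ffmpeg (a hypothetical path), is to add the source tree to the include path:

     # ~/ffmpeg is a placeholder for wherever the FFmpeg n4.1 sources were cloned;
     # the checkout should match the installed library version, since internal
     # structs such as RTSPState and RTPDemuxContext can change between releases.
     g++ -w cf.cpp -o cf -I"$HOME/ffmpeg" \
         $(pkg-config --cflags --libs libavformat libswscale libavcodec)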

  • Unable to find a suitable output format for ' -filter_complex' -filter_complex : Invalid argument for FFMPEG Terminal

    18 August 2023, by heroZero

    This is what I am trying:

    ffmpeg -i crop-1.mp4 -i crop-2.mp4 -i crop-2.mp4 -i crop-3.mp4 \ -filter_complex \ "[0:v][1:v]hstack=inputs=2[top] ;\ [2:v][3:v]hstack=inputs=2[bottom] ;[top][bottom]vstack=inputs=2[v]" \ -map "[v]" \ output.mp4
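
    The quoted error names the literal argument ' -filter_complex' (with a leading space), which is what typically happens when a multi-line command is pasted onto a single line: a backslash followed by a space escapes that space instead of continuing the line, so ffmpeg receives ' -filter_complex' as an output file name. A sketch of the same command with the continuations written as proper end-of-line backslashes (the crop-*.mp4 inputs are taken from the question):

     ffmpeg -i crop-1.mp4 -i crop-2.mp4 -i crop-2.mp4 -i crop-3.mp4 \
       -filter_complex "[0:v][1:v]hstack=inputs=2[top];[2:v][3:v]hstack=inputs=2[bottom];[top][bottom]vstack=inputs=2[v]" \
       -map "[v]" output.mp4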

  • Video created via shell_exec() does not play, but video created via terminal does

    25 February 2021, by Mukul Kashyap

    I am creating a video from a single audio file and a single image. It works fine when the audio is shorter than 10 seconds, but when the audio length exceeds 10 seconds the video does not play. I am using FFmpeg to create the video via shell_exec(). The video is fine when I run the FFmpeg command directly in the terminal; the issue only appears with shell_exec().

    This is the command I am using:

    ffmpeg -loop 1 -f image2 -i $this->img_url -i $this->audio_url -vf scale=1920*1080 -pix_fmt yuv420p -vcodec libx264 -shortest ".$video_local_dir.$video_name;
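
    Since shell_exec() only returns stdout and FFmpeg writes its progress and errors to stderr, a first step is usually to capture that log and check whether the encode is being cut short under PHP (for example by a script time limit). A sketch with hypothetical placeholders (image.jpg, audio.mp3, /tmp/out.mp4 and /tmp/ffmpeg.log stand in for $this->img_url, $this->audio_url and $video_local_dir.$video_name); note that it writes the scale filter in the width:height form, scale=1920:1080:

     # -nostdin keeps FFmpeg from waiting on a terminal it does not have under
     # shell_exec(); "2>&1" after the file redirect sends the stderr log into
     # /tmp/ffmpeg.log so the failure on longer audio can be inspected.
     ffmpeg -nostdin -y -loop 1 -f image2 -i image.jpg -i audio.mp3 \
       -vf scale=1920:1080 -pix_fmt yuv420p -vcodec libx264 -shortest \
       /tmp/out.mp4 > /tmp/ffmpeg.log 2>&1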