Advanced search

Media (3)


Other articles (36)

  • Keeping control of your media in your hands

    13 April 2011, by

    The vocabulary used on this site and around MediaSPIP in general, aims to avoid reference to Web 2.0 and the companies that profit from media-sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011, by

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

On other sites (6363)

  • When using libva* (ffmpeg) encoded GIF images, an error is reported when compiling the demo

    10 August 2023, by yangjinhui2936

    Issue: I need to use the GIF encoding feature in FFmpeg to encode ARGB images as GIFs, because the results from the standalone GIF library are not as good as FFmpeg's.
However, several libraries like avcodec were too bulky, so I did some trimming; I just want to keep the GIF encoding functionality.
Below is my script for configuring the trimmed FFmpeg:

    


    #!/bin/sh
# ./configure --prefix=$(pwd)/output --arch=arm --target-os=linux --enable-cross-compile --disable-asm --cross-prefix=arm-linux-gnueabihf- 
./configure --prefix=$(pwd)/output --target-os=linux --disable-asm \
--disable-gpl --enable-nonfree --enable-error-resilience --enable-debug --enable-shared --enable-small --enable-zlib \
--disable-ffmpeg --disable-ffprobe --disable-ffplay --disable-programs --disable-symver \
 --disable-doc --disable-htmlpages --disable-manpages --disable-podpages --disable-decoder=h264 --enable-avformat \
 --disable-txtpages --enable-avcodec --enable-avutil \
 --disable-avresample --disable-avfilter --disable-avdevice --disable-postproc \
 --disable-swscale --enable-decoder=gif --enable-demuxer=gif --enable-muxer=gif --disable-iconv \
 --disable-v4l2-m2m --disable-indevs --disable-outdevs

make clean
make -j8

make install


    


    Then link the compiled .so to the GIF demo.
Below is the GIF demo code (it was automatically generated by ChatGPT, and I want to verify it):

    


    #include <stdio.h>
// #include "output/include/imgutils.h"
#include "libavcodec/avcodec.h"
#include "libavformat/avformat.h"
#include "libavutil/avutil.h"

int main() {
    AVCodec *enc_codec;
    AVCodecContext *enc_ctx = NULL;
    AVStream *stream = NULL;
    int ret;

    AVFormatContext *fmt_ctx = avformat_alloc_context();
    if (!fmt_ctx) {
        fprintf(stderr, "Could not allocate format context\n");
        return 1;
    }
    
    AVInputFormat *input_fmt = av_find_input_format("image2");
    if ((ret = avformat_open_input(&fmt_ctx, "input.bmp", input_fmt, NULL)) < 0) {
        fprintf(stderr, "Could not open input file %d\n", ret);
        return ret;
    }

    AVCodec *dec_codec = avcodec_find_decoder(AV_CODEC_ID_BMP);
    if (!dec_codec) {
        fprintf(stderr, "Decoder not found\n");
        return 1;
    }
    
    AVCodecContext *dec_ctx = avcodec_alloc_context3(dec_codec);
    if (!dec_ctx) {
        fprintf(stderr, "Could not allocate decoder context\n");
        return 1;
    }

    if ((ret = avcodec_open2(dec_ctx, dec_codec, NULL)) < 0) {
        fprintf(stderr, "Could not open decoder\n");
        return 1;
    }

    AVOutputFormat *out_fmt = av_guess_format("gif", NULL, NULL);
    if (!out_fmt) {
        fprintf(stderr, "Could not find output format\n");
        return 1;
    }

    AVFormatContext *out_ctx = NULL;
    if ((ret = avformat_alloc_output_context2(&out_ctx, out_fmt, NULL, NULL)) < 0) {
        fprintf(stderr, "Could not allocate output context\n");
        return 1;
    }

    stream = avformat_new_stream(out_ctx, NULL);
    if (!stream) {
        fprintf(stderr, "Could not create new stream\n");
        return 1;
    }

    enc_codec = avcodec_find_encoder(AV_CODEC_ID_GIF);
    if (!enc_codec) {
        fprintf(stderr, "Encoder not found\n");
        return 1;
    }

    enc_ctx = avcodec_alloc_context3(enc_codec);
    if (!enc_ctx) {
        fprintf(stderr, "Could not allocate encoder context\n");
        return 1;
    }

    /* configure the encoder before opening it */
    enc_ctx->pix_fmt = AV_PIX_FMT_RGB8;
    enc_ctx->width = dec_ctx->width;
    enc_ctx->height = dec_ctx->height;
    enc_ctx->time_base = (AVRational){1, 25};

    if ((ret = avcodec_open2(enc_ctx, enc_codec, NULL)) < 0) {
        fprintf(stderr, "Could not open encoder\n");
        return 1;
    }

    /* copy the encoder's (not the decoder's) parameters to the output stream */
    if ((ret = avcodec_parameters_from_context(stream->codecpar, enc_ctx)) < 0) {
        fprintf(stderr, "Could not copy encoder parameters\n");
        return 1;
    }

    avformat_init_output(out_ctx, NULL);

    if (!(out_fmt->flags & AVFMT_NOFILE)) {
        if ((ret = avio_open(&out_ctx->pb, "output.gif", AVIO_FLAG_WRITE)) < 0) {
            fprintf(stderr, "Could not open output file\n");
            return ret;
        }
    }

    avformat_write_header(out_ctx, NULL);

    AVFrame *frame = av_frame_alloc();
    AVPacket pkt;
    int frame_count = 0;

    while (av_read_frame(fmt_ctx, &pkt) >= 0) {
        avcodec_send_packet(dec_ctx, &pkt);
        while (avcodec_receive_frame(dec_ctx, frame) == 0) {
            avcodec_send_frame(enc_ctx, frame);
            while (avcodec_receive_packet(enc_ctx, &pkt) == 0) {
                pkt.stream_index = stream->index;
                av_interleaved_write_frame(out_ctx, &pkt);
                av_packet_unref(&pkt);
            }

            frame_count++;
            printf("Encoded frame %d\n", frame_count);
        }
        av_packet_unref(&pkt);
    }

    av_write_trailer(out_ctx);

    avcodec_close(enc_ctx);
    avcodec_free_context(&enc_ctx);
    avcodec_close(dec_ctx);
    avcodec_free_context(&dec_ctx);
    av_frame_free(&frame);

    avformat_close_input(&fmt_ctx);
    avformat_free_context(fmt_ctx);
    avio_close(out_ctx->pb);
    avformat_free_context(out_ctx);

    return 0;
}



    


    Below is the shell compile script for the GIF demo:

    


    #!/bin/sh
gcc -o x2gif x2gif.c -L ./output/lib/ -l:libavformat.a -l:libavcodec.a -l:libavutil.a -lz -I ./output/include/


    


    Unfortunately, the compilation did not pass.
How can I troubleshoot and resolve this issue?

    


    ./output/lib/libavutil.a(lfg.o): In function `av_bmg_get':
/data1/yang/tool/ffmpeg-3.4.4/libavutil/lfg.c:59: undefined reference to `log'
./output/lib/libavutil.a(hwcontext_cuda.o): In function `cuda_free_functions':
/data1/yang/tool/ffmpeg-3.4.4/./compat/cuda/dynlink_loader.h:175: undefined reference to `dlclose'
./output/lib/libavutil.a(hwcontext_cuda.o): In function `cuda_load_functions':
/data1/yang/tool/ffmpeg-3.4.4/./compat/cuda/dynlink_loader.h:192: undefined reference to `dlopen'
/data1/yang/tool/ffmpeg-3.4.4/./compat/cuda/dynlink_loader.h:194: undefined reference to `dlsym'
/data1/yang/tool/ffmpeg-3.4.4/./compat/cuda/dynlink_loader.h:195: undefined reference to `dlsym'
/data1/yang/tool/ffmpeg-3.4.4/./compat/cuda/dynlink_loader.h:196: undefined reference to `dlsym'
/data1/yang/tool/ffmpeg-3.4.4/./compat/cuda/dynlink_loader.h:197: undefined reference to `dlsym'
/data1/yang/tool/ffmpeg-3.4.4/./compat/cuda/dynlink_loader.h:198: undefined reference to `dlsym'
./output/lib/libavutil.a(hwcontext_cuda.o):/data1/yang/tool/ffmpeg-3.4.4/./compat/cuda/dynlink_loader.h:199: more undefined references to `dlsym' follow
./output/lib/libavutil.a(rational.o): In function `av_d2q':
/data1/yang/tool/ffmpeg-3.4.4/libavutil/rational.c:120: undefined reference to `floor'
./output/lib/libavutil.a(eval.o): In function `eval_expr':
/data1/yang/tool/ffmpeg-3.4.4/libavutil/eval.c:184: undefined reference to `exp'
/data1/yang/tool/ffmpeg-3.4.4/libavutil/eval.c:185: undefined reference to `exp'
/data1/yang/tool/ffmpeg-3.4.4/libavutil/eval.c:189: undefined reference to `floor'
/data1/yang/tool/ffmpeg-3.4.4/libavutil/eval.c:190: undefined reference to `ceil'
/data1/yang/tool/ffmpeg-3.4.4/libavutil/eval.c:191: undefined reference to `trunc'
/data1/yang/tool/ffmpeg-3.4.4/libavutil/eval.c:192: undefined reference to `round'
/data1/yang/tool/ffmpeg-3.4.4/libavutil/eval.c:263: undefined reference to `pow'
/data1/yang/tool/ffmpeg-3.4.4/libavutil/eval.c:300: undefined reference to `floor'
/data1/yang/tool/ffmpeg-3.4.4/libavutil/eval.c:309: undefined reference to `pow'
/data1/yang/tool/ffmpeg-3.4.4/libavutil/eval.c:315: undefined reference to `hypot'
/data1/yang/tool/ffmpeg-3.4.4/libavutil/eval.c:316: undefined reference to `atan2'
./output/lib/libavutil.a(eval.o): In function `ff_exp10':
/data1/yang/tool/ffmpeg-3.4.4/libavutil/ffmath.h:44: undefined reference to `exp2'
./output/lib/libavutil.a(eval.o): In function `parse_primary':
/data1/yang/tool/ffmpeg-3.4.4/libavutil/eval.c:417: undefined reference to `sinh'
/data1/yang/tool/ffmpeg-3.4.4/libavutil/eval.c:418: undefined reference to `cosh'
/data1/yang/tool/ffmpeg-3.4.4/libavutil/eval.c:419: undefined reference to `tanh'
/data1/yang/tool/ffmpeg-3.4.4/libavutil/eval.c:420: undefined reference to `sin'
/data1/yang/tool/ffmpeg-3.4.4/libavutil/eval.c:421: undefined reference to `cos'
/data1/yang/tool/ffmpeg-3.4.4/libavutil/eval.c:422: undefined reference to `tan'
/data1/yang/tool/ffmpeg-3.4.4/libavutil/eval.c:423: undefined reference to `atan'
/data1/yang/tool/ffmpeg-3.4.4/libavutil/eval.c:424: undefined reference to `asin'
/data1/yang/tool/ffmpeg-3.4.4/libavutil/eval.c:425: undefined reference to `acos'
/data1/yang/tool/ffmpeg-3.4.4/libavutil/eval.c:426: undefined reference to `exp'
/data1/yang/tool/ffmpeg-3.4.4/libavutil/eval.c:427: undefined reference to `log'
/data1/yang/tool/ffmpeg-3.4.4/libavutil/eval.c:428: undefined reference to `fabs'
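All of the undefined references above (`log`, `floor`, `pow`, `dlopen`, `dlsym`, ...) come from libm and libdl: static libav* archives do not pull these system libraries in by themselves, so they have to be named explicitly on the link line. A hedged sketch of the adjusted compile script (same paths as in the question; `-lpthread` is an assumption, needed only if the build keeps pthread-based threading):

```shell
#!/bin/sh
# Static FFmpeg archives need the math (-lm) and dynamic-loader (-ldl)
# libraries supplied explicitly; -lpthread is added on the assumption
# that the trimmed build still uses pthread threading.
gcc -o x2gif x2gif.c -I ./output/include/ -L ./output/lib/ \
    -l:libavformat.a -l:libavcodec.a -l:libavutil.a \
    -lz -lm -ldl -lpthread
```

Note that link order matters with static archives: the `-l` libraries must come after the object files that use them. Alternatively, trimming the remaining CUDA/hwaccel code out of the build would remove the `dlopen`/`dlsym` dependency entirely, leaving only `-lm` required.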


    


  • FFMPEG libav gdigrab capturing with wrong colors

    7 March 2018, by user1496491

    I’m capturing the screen with the code below, and it produces a picture with the wrong colors.

    Screenshot

    The picture on the left is the raw data, which I assumed to be ARGB; the picture on the right is encoded as YUV. I’ve tried different formats, and the pictures change slightly, but they never look how they should. In what format does gdigrab give its output? What’s the right way to encode it?

    #include "MainWindow.h"

    #include <QGuiApplication>
    #include <QLabel>
    #include <QScreen>
    #include <QTimer>
    #include <QLayout>
    #include <QImage>
    #include <QtConcurrent/QtConcurrent>
    #include <QThreadPool>

    #include "ScreenCapture.h"

    MainWindow::MainWindow(QWidget *parent) : QMainWindow(parent)
    {
       resize(800, 600);

       label = new QLabel();
       label->setAlignment(Qt::AlignHCenter | Qt::AlignVCenter);

       auto layout = new QHBoxLayout();
       layout->addWidget(label);

       auto widget = new QWidget();
       widget->setLayout(layout);
       setCentralWidget(widget);

       init();
       initOutFile();
       collectFrame();
    }

    MainWindow::~MainWindow()
    {
       avformat_close_input(&inputFormatContext);
       avformat_free_context(inputFormatContext);

       QThreadPool::globalInstance()->waitForDone();
    }

    void MainWindow::init()
    {

       av_register_all();
       avcodec_register_all();
       avdevice_register_all();

       auto screen = QGuiApplication::screens()[1];
       QRect geometry = screen->geometry();

       inputFormatContext = avformat_alloc_context();

       AVDictionary* options = NULL;
       av_dict_set(&options, "framerate", "30", NULL);
       av_dict_set(&options, "offset_x", QString::number(geometry.x()).toLatin1().data(), NULL);
       av_dict_set(&options, "offset_y", QString::number(geometry.y()).toLatin1().data(), NULL);
       av_dict_set(&options, "preset", "ultrafast", NULL);
       av_dict_set(&options, "probesize", "10MB", NULL);
       av_dict_set(&options, "pix_fmt", "yuv420p", NULL);
       av_dict_set(&options, "video_size", QString(QString::number(geometry.width()) + "x" + QString::number(geometry.height())).toLatin1().data(), NULL);

       AVInputFormat* inputFormat = av_find_input_format("gdigrab");
       avformat_open_input(&inputFormatContext, "desktop", inputFormat, &options);

    //    AVDictionary* options = NULL;
    //    av_dict_set(&options, "framerate", "30", NULL);
    //    av_dict_set(&options, "preset", "ultrafast", NULL);
    //    av_dict_set(&options, "vcodec", "h264", NULL);
    //    av_dict_set(&options, "s", "1280x720", NULL);
    //    av_dict_set(&options, "crf", "0", NULL);
    //    av_dict_set(&options, "rtbufsize", "100M", NULL);

    //    AVInputFormat *format = av_find_input_format("dshow");
    //    avformat_open_input(&inputFormatContext, "video=screen-capture-recorder", format, &options);

       av_dict_free(&options);
       avformat_find_stream_info(inputFormatContext, NULL);

       videoStreamIndex = av_find_best_stream(inputFormatContext, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);

       inputCodec = avcodec_find_decoder(inputFormatContext->streams[videoStreamIndex]->codecpar->codec_id);
       if(!inputCodec) qDebug() << "Input stream codec not found!";

       inputCodecContext = avcodec_alloc_context3(inputCodec);
       inputCodecContext->codec_id = inputCodec->id;

       avcodec_parameters_to_context(inputCodecContext, inputFormatContext->streams[videoStreamIndex]->codecpar);

       if(avcodec_open2(inputCodecContext, inputCodec, NULL)) qDebug() << "Could not open input codec!";
    }

    void MainWindow::initOutFile()
    {
       const char* filename = "C:/Temp/output.mp4";

       if(avformat_alloc_output_context2(&outFormatContext, NULL, NULL, filename) < 0) qDebug() << "Could not create output context!";

       outCodec = avcodec_find_encoder(AV_CODEC_ID_MPEG4);
       if(!outCodec) qDebug() << "Could not find codec!";

       videoStream = avformat_new_stream(outFormatContext, outCodec);
       videoStream->time_base = {1, 30};

       const AVPixelFormat* pixelFormat = outCodec->pix_fmts;
       while (*pixelFormat != AV_PIX_FMT_NONE)
       {
           qDebug() << "OUT_FORMAT" << av_get_pix_fmt_name(*pixelFormat);
           ++pixelFormat;
       }

       outCodecContext = videoStream->codec;
       outCodecContext->bit_rate = 400000;
       outCodecContext->width = inputCodecContext->width;
       outCodecContext->height = inputCodecContext->height;
       outCodecContext->time_base = videoStream->time_base;
       outCodecContext->gop_size = 10;
       outCodecContext->max_b_frames = 1;
       outCodecContext->pix_fmt = AV_PIX_FMT_YUV420P;

       if (outFormatContext->oformat->flags & AVFMT_GLOBALHEADER) outCodecContext->flags |= CODEC_FLAG_GLOBAL_HEADER;

       if(avcodec_open2(outCodecContext, outCodec, NULL)) qDebug() << "Could not open output codec!";

       swsContext = sws_getContext(inputCodecContext->width,
                                   inputCodecContext->height,
    //                                inputCodecContext->pix_fmt,
                                   AV_PIX_FMT_ABGR,
                                   outCodecContext->width,
                                   outCodecContext->height,
                                   outCodecContext->pix_fmt,
                                   SWS_BICUBIC, NULL, NULL, NULL);

       if(avio_open(&outFormatContext->pb, filename, AVIO_FLAG_WRITE) < 0) qDebug() << "Could not open file!";
       if(avformat_write_header(outFormatContext, NULL) < 0) qDebug() << "Could not write header!";
    }

    void MainWindow::collectFrame()
    {
       AVFrame* inFrame = av_frame_alloc();
       inFrame->format = inputCodecContext->pix_fmt;
       inFrame->width = inputCodecContext->width;
       inFrame->height = inputCodecContext->height;

       int size = av_image_alloc(inFrame->data, inFrame->linesize, inFrame->width, inFrame->height, inputCodecContext->pix_fmt, 1);
       qDebug() << size;

       AVFrame* outFrame = av_frame_alloc();
       outFrame->format = outCodecContext->pix_fmt;
       outFrame->width = outCodecContext->width;
       outFrame->height = outCodecContext->height;

       qDebug() << av_image_alloc(outFrame->data, outFrame->linesize, outFrame->width, outFrame->height, outCodecContext->pix_fmt, 1);

       AVPacket packet;
       av_init_packet(&packet);

       av_read_frame(inputFormatContext, &packet);
    //    while(av_read_frame(inputFormatContext, &packet) >= 0)
    //    {
           if(packet.stream_index == videoStream->index)
           {

               memcpy(inFrame->data[0], packet.data, size);

               sws_scale(swsContext, inFrame->data, inFrame->linesize, 0, inputCodecContext->height, outFrame->data, outFrame->linesize);

               QImage image(inFrame->data[0], inFrame->width, inFrame->height, QImage::Format_ARGB32);
               label->setPixmap(QPixmap::fromImage(image).scaled(label->size(), Qt::KeepAspectRatio));

               AVPacket outPacket;
               av_init_packet(&outPacket);

               int encodeResult = avcodec_receive_packet(outCodecContext, &outPacket);
               while(encodeResult == AVERROR(EAGAIN))
               {
                   if(avcodec_send_frame(outCodecContext, outFrame)) qDebug() << "Error sending frame for encoding!";

                   encodeResult = avcodec_receive_packet(outCodecContext, &outPacket);
               }
               if(encodeResult != 0) qDebug() << "Error during encoding!" << encodeResult;

               if(outPacket.pts != AV_NOPTS_VALUE) outPacket.pts = av_rescale_q(outPacket.pts, videoStream->codec->time_base, videoStream->time_base);
               if(outPacket.dts != AV_NOPTS_VALUE) outPacket.dts = av_rescale_q(outPacket.dts, videoStream->codec->time_base, videoStream->time_base);

               av_write_frame(outFormatContext, &outPacket);

               av_packet_unref(&outPacket);
           }
    //    }

       av_packet_unref(&packet);

       av_write_trailer(outFormatContext);
       avio_close(outFormatContext->pb);
    }
  • Matomo Launches Global Partner Programme to Deepen Local Connections and Champion Ethical Analytics

    25 June, by Matomo Core Team — Press Releases

    Matomo introduces a global Partner Programme designed to connect organisations with trusted local experts, advancing its commitment to privacy, data sovereignty, and localisation.

    Wellington, New Zealand, 25 June 2025. Matomo, the leading web analytics platform, is
    proud to announce the launch of the Matomo Partner Programme. This new initiative marks a significant step in Matomo’s global growth strategy, bringing together a carefully selected
    network of expert partners to support customers with localised, high-trust analytics services
    rooted in shared values.

    As privacy concerns rise and organisations seek alternatives to mainstream analytics solutions, the need for regional expertise has never been more vital. The Matomo Partner Programme ensures that customers around the world are supported not just by a world-class platform, but by trusted local professionals who understand their specific regulatory, cultural, and business needs.

    “Matomo is evolving. As privacy regulations become more nuanced and the need for regional
    understanding grows, we’ve made localisation a central pillar of our strategy. Our partners are
    the key to helping customers navigate these complexities with confidence and care,” said
    Adam Taylor, Chief Operating Officer at Matomo.

    Local Experts, Global Values

    At the heart of the Matomo Partner Programme is a commitment to connect clients with local experts who live and breathe their markets. These partners are more than service
    providers; they’re trusted advisors who bring deep insight into their region’s privacy
    legislation, cultural norms, sector-specific requirements, and digital trends.

    The programme empowers partners to act as extensions of Matomo’s core teams:

    As Customer Success allies, delivering personalised training, support, and technical
    services in local languages and time zones.
    As Sales ambassadors, raising awareness of ethical analytics in both public and private
    sectors, where trust, compliance, and transparency are crucial.

    This decentralised, values-aligned approach ensures that every Matomo customer benefits
    from localised delivery with global consistency.

    A Programme Designed for Impactful Partnerships

    The Matomo Partner Programme is open to organisations who share a commitment to ethical, open-source analytics and can demonstrate:

    Technical excellence in deploying, configuring, and supporting Matomo Analytics in diverse environments.
    Deep market understanding, allowing them to tell the Matomo story in ways that
    resonate locally.
    Commercial strength to position Matomo across key industries, particularly in sectors with complex compliance and data sovereignty demands.

    Partners who meet these standards will be recognised as ‘Official Matomo Partners’, a symbol of excellence, credibility, and shared purpose. With this status, they gain access to:

    Brand alignment and trust: Strengthen credibility with clients by promoting their
    connection to Matomo and its globally respected ethical stance.
    Go-to-market support: Access to qualified leads, joint marketing, and tools to scale their business in a privacy-first market.
    Strategic collaboration: Early insights into the product roadmap and direct
    engagement with Matomo’s core team.
    Meaningful local impact: Help regional organisations reclaim control of their data and embrace ethical analytics with confidence.

    Ethical Analytics for Today’s World

    Matomo was founded in 2007 with the belief that people should have full control over their data. As the first open-source web analytics platform of its kind, Matomo continues to challenge the dominance of opaque, centralised tools by offering a transparent and flexible alternative that puts users first.

    In today’s landscape, marked by increased regulatory scrutiny, data protection concerns, and rapid advancements in AI, Matomo’s approach is more relevant than ever. Open-source technology provides the adaptability organisations need to respond to local expectations while reinforcing digital trust with users.

    Whether it’s a government department, healthcare provider, educational institution, or
    commercial business, Matomo partners are on the ground, ready to help organisations
    transition to analytics that are not only powerful but principled.