Advanced search

Media (0)

Word: - Tags -/performance

No media matching your criteria is available on this site.

Other articles (43)

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page

  • Possible deployments

    31 January 2010, by

    Two types of deployment can be considered, depending on two aspects: the installation method chosen (standalone or as a farm); and the expected number of daily encodings and the expected level of traffic.
    Encoding video is a heavy process that consumes a great deal of system resources (CPU and RAM), so all of this has to be taken into account. The system is therefore only feasible on one or more dedicated servers.
    Single-server version
    The single-server version consists of using only one (...)

  • Encoding and processing into web-friendly formats

    13 April 2011, by

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in MP4, Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
    Audio files are encoded in MP3 and Ogg (supported by HTML5) and MP3 (supported by Flash).
    Where possible, text is analyzed in order to retrieve the data needed for search engine detection, and then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)

On other sites (5612)

  • Android : BitmapFactory.decodeStream() returns null after first success

    27 April 2014, by giraffe

    My code is intended to update an ImageView with an image from a server when a UI button is pressed.

    The client-side code shown below is a Runnable that runs when the button is pressed. The server is a desktop Java application with ffmpeg running in the background, continuously updating image.png with an image from the webcam. When the button is pressed, the Android app requests image.png from the server; since ffmpeg is constantly updating this file, it should be the most recent image taken with the server’s webcam.

    My problem is that the first button press shows the correct image, but every subsequent button press will just clear out my ImageView. BitmapFactory.decodeStream() is returning null every time I call it after the first time.

    Client side (runs when the button is pressed):

    InputStream inputStream = s.getInputStream();
    img = BitmapFactory.decodeStream(inputStream);          
    jpgView.setImageBitmap(img);        
    jpgView.invalidate();

    Server side:

    ServerSocket sock = new ServerSocket(PORT_NUMBER);
    Socket clientSocket = sock.accept();
    for (;;) {
       File f = new File("C:/folder/image.png");
       FileInputStream fileInput = new FileInputStream(f);
       BufferedInputStream bufferedInput = new BufferedInputStream(fileInput);
       OutputStream outStream = clientSocket.getOutputStream();
       try {
           byte[] outBuffer = new byte[fSize];
           int bRead = bufferedInput.read(outBuffer, 0, outBuffer.length);
           outStream.write(outBuffer, 0, bRead);
           outStream.flush();
       } catch (Exception e) {
           e.printStackTrace();
       } finally {
           bufferedInput.close();
       }
    }
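
    For illustration only (none of this is from the question): BitmapFactory.decodeStream() has no way of knowing where one PNG ends on a socket that keeps delivering data, so repeated decodes from the same raw stream tend to fail after the first image. One common workaround is to length-prefix every image and decode from a fully read byte array; a minimal sketch, reusing the question’s clientSocket, s and jpgView names:

    // Sketch only. Assumes java.io.DataInputStream/DataOutputStream,
    // java.nio.file.Files/Paths and android.graphics.Bitmap/BitmapFactory
    // are imported; IOExceptions are left unhandled for brevity.

    // Server side: announce the image size, then send exactly that many bytes.
    DataOutputStream out = new DataOutputStream(clientSocket.getOutputStream());
    byte[] imageBytes = Files.readAllBytes(Paths.get("C:/folder/image.png"));
    out.writeInt(imageBytes.length);   // 4-byte length prefix
    out.write(imageBytes);
    out.flush();

    // Client side: read exactly the announced number of bytes, then decode.
    DataInputStream in = new DataInputStream(s.getInputStream());
    int length = in.readInt();
    byte[] buffer = new byte[length];
    in.readFully(buffer);              // blocks until the whole image has arrived
    Bitmap img = BitmapFactory.decodeByteArray(buffer, 0, length);
    jpgView.setImageBitmap(img);
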
  • FFmpeg - avcodec_receive_frame returns 0 but frames are invalid

    9 July 2019, by Jinx

    I’ve been trying to extract images from videos, but it fails for those with a PNG codec. My code works fine for videos with JPEG. avcodec_receive_frame succeeds, but the frame data looks like garbage. Do I have to do something special when demuxing PNG?

    Exception thrown at 0x00007FFF7DF34B9A (msvcrt.dll) in Program.exe: 0xC0000005: Access violation reading location 0x00000000000003F0 when calling avcodec_send_frame in my saveImage function, which I guess means I was accessing invalid or unallocated memory. How did this happen?


    Just assume that all the function calls returned 0 until the exception was thrown.

    Decoding:

    bool ImageExtractor::decode() {
       // some other code here
       ret = avcodec_send_packet(codec_ctx, packet); // returned 0
       ret = avcodec_receive_frame(codec_ctx, frame); // returned 0
       if (ret == 0) {
           if (count >= target_frame) {
               snprintf(buf, sizeof(buf), "%s/%d.png", destination.toLocal8Bit().data(), count);
               saveImage(frame, buf); // a function that writes images on disk
          }
       // some other code here
    }


    bool ImageExtractor::saveImage(AVFrame *frame, char *destination) {
        AVFormatContext *imgFmtCtx = avformat_alloc_context();
        imgFmtCtx->oformat = av_guess_format("mjpeg", NULL, NULL);

        // some other code here

        if (!frame)
            std::cout << "invalid frame \n";

        if (!imgCodecCtx) // AV_CODEC_ID_MJPEG(7)
            std::cout << "invalid codec ctx \n";

        ret = avcodec_send_frame(imgCodecCtx, frame); // everything stopped here
    }

    Demuxing:

    avformat_open_input(&format_ctx, source.toLocal8Bit(), nullptr, nullptr);
    vsi = av_find_best_stream(format_ctx, AVMEDIA_TYPE_VIDEO, -1, -1, nullptr, 0);
    codec_par = avcodec_parameters_alloc();
    avcodec_parameters_copy(codec_par, format_ctx->streams[vsi]->codecpar);

    AVCodec* codec = avcodec_find_decoder(codec_par->codec_id); // AV_CODEC_ID_PNG(61)
    codec_ctx = avcodec_alloc_context3(codec);
    avcodec_parameters_to_context(codec_ctx, codec_par);
    avcodec_parameters_free(&codec_par);
    avcodec_open2(codec_ctx, codec, 0);
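
    For illustration only (this step is not shown in the question): a PNG video stream decodes to an RGB pixel format, while FFmpeg’s mjpeg encoder only accepts YUV formats such as AV_PIX_FMT_YUVJ420P, so the decoded frame normally has to be converted with libswscale before avcodec_send_frame. A sketch of such a helper; the name to_yuvj420p and the error handling are illustrative, not taken from the question:

    extern "C" {                       // FFmpeg headers are C headers
    #include <libswscale/swscale.h>
    #include <libavutil/frame.h>
    }

    // Sketch only: convert a decoded RGB(A) frame into YUVJ420P for mjpeg.
    AVFrame *to_yuvj420p(AVFrame *src) {
        AVFrame *dst = av_frame_alloc();
        if (!dst)
            return nullptr;
        dst->width  = src->width;
        dst->height = src->height;
        dst->format = AV_PIX_FMT_YUVJ420P;
        if (av_frame_get_buffer(dst, 0) < 0) {
            av_frame_free(&dst);
            return nullptr;
        }

        SwsContext *sws = sws_getContext(src->width, src->height,
                                         (AVPixelFormat)src->format,
                                         dst->width, dst->height, AV_PIX_FMT_YUVJ420P,
                                         SWS_BILINEAR, nullptr, nullptr, nullptr);
        if (!sws) {
            av_frame_free(&dst);
            return nullptr;
        }
        sws_scale(sws, src->data, src->linesize, 0, src->height,
                  dst->data, dst->linesize);
        sws_freeContext(sws);
        return dst;                    // caller frees with av_frame_free()
    }

    In the saveImage() path above, the converted frame rather than the raw decoded one would then be what gets passed to avcodec_send_frame.
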
  • ffmpeg: libavfilter API av_buffersink_get_frame always returns EAGAIN

    29 June 2024, by aculnaig

    I want to resize an image with the libavfilter C API through the zscale and libplacebo filters, but no matter how I call av_buffersink_get_frame, it always returns EAGAIN and no data is filled into filtered_frame.

    


    #include &#xA;#include &#xA;#include &#xA;#include &#xA;#include &#xA;&#xA;#include <libavcodec></libavcodec>avcodec.h>&#xA;#include <libavformat></libavformat>avformat.h>&#xA;#include <libavfilter></libavfilter>avfilter.h>&#xA;#include <libavfilter></libavfilter>buffersrc.h>&#xA;#include <libavfilter></libavfilter>buffersink.h>&#xA;#include <libavutil></libavutil>log.h>&#xA;&#xA;int main(int argc, char **argv)&#xA;{&#xA;    if (argc != 2) {&#xA;        fprintf(stderr, "usage: %s <filename>\n", argv[0]);&#xA;        exit(EXIT_FAILURE);&#xA;    }&#xA;&#xA;    const char *src_name = argv[1];&#xA;    const char *dst_name = basename(src_name);&#xA;    int ret = 0;&#xA;&#xA;    const enum AVPixelFormat src_format = AV_PIX_FMT_YUV422P;&#xA;&#xA;    av_log_set_level(AV_LOG_TRACE);&#xA;&#xA;    AVFormatContext *fmt_ctx = avformat_alloc_context();&#xA;    if (fmt_ctx == NULL) {&#xA;        av_log(NULL, AV_LOG_TRACE, "avformat_alloc_context(): failed.\n");&#xA;        exit(EXIT_SUCCESS);&#xA;    }&#xA;    if ((ret = avformat_open_input(&amp;fmt_ctx, src_name, NULL, NULL)) != 0) {&#xA;        av_log(NULL, AV_LOG_TRACE, "avformat_open_input(): %s.\n", av_err2str(ret));&#xA;        exit(EXIT_SUCCESS);&#xA;    }&#xA;&#xA;    AVCodecContext *dec_ctx;&#xA;    AVCodec *dec;&#xA;&#xA;    if ((ret = av_find_best_stream(fmt_ctx, AVMEDIA_TYPE_VIDEO, -1, -1, &amp;dec, 0)) &lt; 0) {&#xA;        av_log(NULL, AV_LOG_TRACE, "av_find_best_stream(): %s.\n", av_err2str(ret));&#xA;        exit(EXIT_SUCCESS);&#xA;    }&#xA;    dec_ctx = avcodec_alloc_context3(dec);&#xA;    if (dec_ctx == NULL) {&#xA;        av_log(NULL, AV_LOG_TRACE, "avcodec_alloc_context3(): failed.\n");&#xA;        exit(EXIT_SUCCESS);&#xA;    }&#xA;    if ((ret = avcodec_open2(dec_ctx, dec, NULL)) &lt; 0) {&#xA;        av_log(NULL, AV_LOG_TRACE, "avcodec_open2(): %s.\n", av_err2str(ret));&#xA;        exit(EXIT_SUCCESS);&#xA;    }&#xA;&#xA;    AVFrame *frame = av_frame_alloc();&#xA;    if (frame == NULL) {&#xA;        av_log(NULL, AV_LOG_TRACE, "av_frame_alloc(): failed.\n");&#xA;        exit(EXIT_SUCCESS);&#xA;    }&#xA;    AVPacket *packet = av_packet_alloc();&#xA;    if (packet == NULL) {&#xA;        av_log(NULL, AV_LOG_TRACE, "av_packet_alloc(): failed.\n");&#xA;        exit(EXIT_SUCCESS);&#xA;    }&#xA;&#xA;    if ((ret = av_read_frame(fmt_ctx, packet)) &lt; 0) {&#xA;        av_log(NULL, AV_LOG_TRACE, "av_read_frame(): %s.\n", av_err2str(ret));&#xA;        exit(EXIT_SUCCESS);&#xA;    }&#xA;    ret = avcodec_send_packet(dec_ctx, packet);&#xA;    if (ret == AVERROR(EAGAIN)) {&#xA;        av_log(NULL, AV_LOG_TRACE, "avcodec_send_packet(): %s.\n", av_err2str(ret));&#xA;        avcodec_receive_frame(dec_ctx, frame);&#xA;        avcodec_send_packet(dec_ctx, packet);&#xA;    }&#xA;    &#xA;    if ((ret = avcodec_receive_frame(dec_ctx, frame)) &lt; 0) {&#xA;        av_log(NULL, AV_LOG_TRACE, "avcodec_receive_frame(): %s.\n", av_err2str(ret));&#xA;        exit(EXIT_SUCCESS);&#xA;    }&#xA;    av_packet_unref(packet);&#xA;&#xA;    av_log(NULL, AV_LOG_TRACE, "filename: %s, w: %d, h: %d, fmt: %d\n", src_name, frame->width, frame->height, frame->format);&#xA;&#xA;    AVFilterGraph *filter_graph = avfilter_graph_alloc();&#xA;&#xA;    char buffersrc_args[512];&#xA;    snprintf(buffersrc_args, sizeof(buffersrc_args), "video_size=%dx%d:pix_fmt=%d:time_base=1/25", frame->width, frame->height, frame->format); &#xA;&#xA;    AVFilterContext *buffersrc_ctx;&#xA;    if ((ret = avfilter_graph_create_filter(&amp;buffersrc_ctx, avfilter_get_by_name("buffer"), NULL, 
buffersrc_args, NULL, filter_graph)) &lt; 0) {&#xA;        av_log(NULL, AV_LOG_TRACE, "avfilter_graph_create_filter(): %s.\n", av_err2str(ret));&#xA;        exit(EXIT_SUCCESS);&#xA;    }&#xA;&#xA;    char libplacebo_args[512];&#xA;    snprintf(libplacebo_args, sizeof(libplacebo_args), "format=yuv420p:colorspace=bt470bg:color_primaries=bt709:color_trc=iec61966-2-1:range=pc:w=(iw/2):h=(ih/2):downscaler=none:dithering=none");&#xA;    AVFilterContext *libplacebo_ctx;&#xA;    if ((ret = avfilter_graph_create_filter(&amp;libplacebo_ctx, avfilter_get_by_name("libplacebo"), NULL, libplacebo_args, NULL, filter_graph)) &lt; 0) {&#xA;        av_log(NULL, AV_LOG_TRACE, "avfilter_graph_create_filter(): %s.\n", av_err2str(ret));&#xA;        exit(EXIT_SUCCESS);&#xA;    }&#xA;&#xA;    AVFilterContext *buffersink_ctx;&#xA;    if ((ret = avfilter_graph_create_filter(&amp;buffersink_ctx, avfilter_get_by_name("buffersink"), NULL, NULL, NULL, filter_graph)) &lt; 0) {&#xA;        av_log(NULL, AV_LOG_TRACE, "avfilter_graph_create_filter(): %s.\n", av_err2str(ret));&#xA;        exit(EXIT_SUCCESS);&#xA;    }&#xA;&#xA;    if ((ret = avfilter_link(buffersrc_ctx, 0, libplacebo_ctx, 0)) != 0) {&#xA;        av_log(NULL, AV_LOG_TRACE, "avfilter_link(): %s.\n", av_err2str(ret));&#xA;        exit(EXIT_SUCCESS);&#xA;    }&#xA;    if ((ret = avfilter_link(libplacebo_ctx, 0, buffersink_ctx, 0)) != 0) {&#xA;        av_log(NULL, AV_LOG_TRACE, "avfilter_link(): %s.\n", av_err2str(ret));&#xA;        exit(EXIT_SUCCESS);&#xA;    }&#xA;    &#xA;    if ((ret = avfilter_graph_config(filter_graph, NULL)) &lt; 0) {&#xA;        av_log(NULL, AV_LOG_TRACE, "avfilter_graph_config(): %s.\n", av_err2str(ret));&#xA;        exit(EXIT_SUCCESS);&#xA;    }&#xA;&#xA;    AVFrame *filtered_frame = av_frame_alloc();&#xA;    if (filtered_frame == NULL) {&#xA;        av_log(NULL, AV_LOG_TRACE, "av_frame_alloc(): failed.\n");&#xA;        exit(EXIT_SUCCESS);&#xA;    }&#xA;    if ((ret = av_buffersrc_add_frame(buffersrc_ctx, frame)) &lt; 0) {&#xA;        av_log(NULL, AV_LOG_TRACE, "av_buffersrc_add_frame(): %s.\n", av_err2str(ret));&#xA;        exit(EXIT_SUCCESS);&#xA;    }&#xA;    &#xA;    while (1) {&#xA;        int ret = av_buffersink_get_frame(buffersink_ctx, filtered_frame);&#xA;        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)&#xA;            break;&#xA;        if (ret &lt; 0) {&#xA;            av_log(NULL, AV_LOG_TRACE, "av_buffersink_get_frame(): %s.\n", av_err2str(ret));&#xA;            exit(EXIT_FAILURE);&#xA;        }    &#xA;    }&#xA;&#xA;    av_log(NULL, AV_LOG_TRACE, "filename: %s, w: %d, h: %d, f: %d\n", dst_name, filtered_frame->width, filtered_frame->height, filtered_frame->format);&#xA;&#xA;    AVCodecContext *enc_ctx;&#xA;    AVCodec *enc;&#xA;    AVPacket *enc_packet = av_packet_alloc();&#xA;&#xA;    enc = avcodec_find_encoder_by_name("mjpeg");&#xA;    enc_ctx = avcodec_alloc_context3(enc);&#xA;    enc_ctx->width = filtered_frame->width;&#xA;    enc_ctx->height = filtered_frame->height;&#xA;    enc_ctx->bit_rate = dec_ctx->bit_rate * 1024;&#xA;    enc_ctx->time_base = (AVRational) {1, 25};&#xA;    enc_ctx->framerate = (AVRational) {25, 1};&#xA;    enc_ctx->pix_fmt = filtered_frame->format;&#xA;    enc_ctx->strict_std_compliance = FF_COMPLIANCE_EXPERIMENTAL;&#xA;    enc_ctx->compression_level = 0;&#xA;    enc_ctx->color_range = AVCOL_RANGE_JPEG;&#xA;&#xA;    avcodec_open2(enc_ctx, enc, NULL);&#xA;&#xA;    avcodec_send_frame(enc_ctx, filtered_frame);&#xA;    avcodec_receive_packet(enc_ctx, enc_packet);&#xA;&#xA;    FILE 
*dst_file = fopen(dst_name, "wb");&#xA;    fwrite(enc_packet->data, 1, enc_packet->size, dst_file);&#xA;    fclose(dst_file);&#xA;&#xA;    av_packet_unref(enc_packet);&#xA;    av_frame_free(&amp;frame);&#xA;    av_frame_free(&amp;filtered_frame);&#xA;    &#xA;    avfilter_graph_free(&amp;filter_graph);&#xA;&#xA;    avformat_close_input(&amp;fmt_ctx);&#xA;    avformat_free_context(fmt_ctx);&#xA;    &#xA;    exit(EXIT_SUCCESS);&#xA;}&#xA;</filename>


    and here are the logs:


    https://pastebin.com/bbMKHcdj


    PS: there were too many log lines to paste them directly on Stack Overflow.
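
    For context only (not part of the question): av_buffersink_get_frame() keeps answering EAGAIN for as long as the graph wants more input, and the listing above feeds exactly one frame without ever signalling end of stream. A minimal sketch of the flush-and-drain pattern the buffersrc/buffersink API expects, reusing the buffersrc_ctx, buffersink_ctx, filtered_frame and ret names from the listing; whether this alone explains the behaviour is speculative:

    /* Sketch only: after the last (here: only) input frame, push a NULL frame
     * to mark EOF on the buffer source, then poll the sink until it yields
     * the buffered output instead of EAGAIN. */
    av_buffersrc_add_frame(buffersrc_ctx, frame);
    av_buffersrc_add_frame(buffersrc_ctx, NULL);      /* NULL = end of stream */

    for (;;) {
        ret = av_buffersink_get_frame(buffersink_ctx, filtered_frame);
        if (ret >= 0)
            break;                                    /* got the filtered frame */
        if (ret != AVERROR(EAGAIN)) {                 /* AVERROR_EOF or a real error */
            av_log(NULL, AV_LOG_TRACE, "av_buffersink_get_frame(): %s.\n", av_err2str(ret));
            exit(EXIT_FAILURE);
        }
    }
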
