
Tag: net art

Other articles (103)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MediaSPIP is at version 0.2 or later. If needed, contact your MediaSPIP administrator to find out.

  • Use, discuss, criticize

    13 April 2011

    Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
    The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
    A discussion list is available for all exchanges between users.

  • Encoding and processing into web-friendly formats

    13 April 2011

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded to MP4, OGV and WebM for HTML5 playback, with the MP4 version also used for Flash playback.
    Audio files are encoded to MP3 and Ogg for HTML5 playback, with the MP3 version also used for Flash playback.
    Where possible, text is analyzed to extract the data needed for search-engine indexing, and is then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)

On other sites (8124)

  • MAINTAINERS: update myself for dvdvideo, rcwtdec, rcwtenc

    26 September 2024, by Marth64
    MAINTAINERS: update myself for dvdvideo, rcwtdec, rcwtenc
    

    I plan to look after and test them for the foreseeable future.
    I am not a committer but do care for these muxers/demuxers.

    Signed-off-by: Marth64 <marth64@proxyid.net>
    Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>

    • [DH] MAINTAINERS
  • avcodec: add avcodec_get_supported_config()

    3 April 2024, by Niklas Haas
    avcodec: add avcodec_get_supported_config()
    

    This replaces the myriad of existing lists in AVCodec by a unified API
    call, allowing us to (ultimately) trim down the sizeof(AVCodec) quite
    substantially, while also making this more trivially extensible.

    In addition to the already covered lists, add two new entries for color
    space and color range, mirroring the newly added negotiable fields in
    libavfilter.

    Once the deprecation period passes for the existing public fields, the
    rough plan is to move the commonly used fields (such as
    pix_fmt/sample_fmt) into FFCodec, possibly as a union of audio and video
    configuration types, and then implement the rarely used fields with
    custom callbacks.

    • [DH] doc/APIchanges
    • [DH] libavcodec/avcodec.c
    • [DH] libavcodec/avcodec.h
    • [DH] libavcodec/codec.h
    • [DH] libavcodec/codec_internal.h
    • [DH] libavcodec/version.h
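
    The commit itself does not ship a usage example; the following is a hedged sketch of how a caller might query an encoder's supported pixel formats through the new entry point. The exact signature, the AV_CODEC_CONFIG_PIX_FORMAT enum value, and the "NULL list means no restriction" convention are assumptions based on this commit description and may differ from the final public API.

        #include <stdio.h>
        #include <libavcodec/avcodec.h>
        #include <libavutil/pixdesc.h>

        /* Sketch: list the pixel formats reported for a codec via the
         * unified avcodec_get_supported_config() call added here. */
        static void list_pix_fmts(const AVCodec *codec)
        {
            const enum AVPixelFormat *fmts = NULL;
            int nb = 0;

            /* Passing a NULL AVCodecContext is assumed to request the
             * codec's static (context-independent) list. */
            int ret = avcodec_get_supported_config(NULL, codec,
                                                   AV_CODEC_CONFIG_PIX_FORMAT, 0,
                                                   (const void **)&fmts, &nb);
            if (ret < 0)
                return;             /* query failed */
            if (!fmts)
                return;             /* assumed: NULL means "no restriction" */

            for (int i = 0; i < nb; i++)
                printf("%s\n", av_get_pix_fmt_name(fmts[i]));
        }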
  • Cannot display a decoded video frame on Raylib

    20 December 2024, by gabriel_tiso

    I'm trying to explore libav and raylib just to understand how audio and video work, and also to learn how to build nice interfaces using the raylib project. I've implemented a simple struct capable of decoding audio and video frames. When a video frame appears, I convert it to the RGBA format, which packs the values into 32bpp. This is the setup:


        if (av_image_alloc((uint8_t **)media->dst_frame->data,
                           media->dst_frame->linesize, media->ctxs[0]->width,
                           media->ctxs[0]->height, AV_PIX_FMT_RGBA, 1) < 0) {
            fprintf(stderr, "Failed to setup dest image\n");
            return -1;
        }

        media->sws_ctx = sws_getContext(
            media->ctxs[0]->width, media->ctxs[0]->height, media->ctxs[0]->pix_fmt,
            media->ctxs[0]->width, media->ctxs[0]->height, AV_PIX_FMT_RGBA,
            SWS_BILINEAR, NULL, NULL, 0);

        // Later on, in the decode function:
        int ret = sws_scale(media->sws_ctx, media->frame->data,
                            media->frame->linesize, 0, media->frame->height,
                            media->dst_frame->data, media->dst_frame->linesize);


    In the main file, I init raylib and set up the steps needed to load the texture (here I'm trying to fetch the first video frame to show the user a preview of the video; later on I plan to reset the stream to allow a proper playback routine). I think the format of the image is right.


        Image previewImage =
            GenImageColor(videoArea.width, videoArea.height, BLACK);
        // I assume this makes the formats compatible
        ImageFormat(&previewImage, PIXELFORMAT_UNCOMPRESSED_R8G8B8A8);

        Texture2D videoTexture = LoadTextureFromImage(previewImage);
        UnloadImage(previewImage);


        if (!state->has_media) {
            DrawText("Drop a video file here!", videoArea.x + 10,
                     videoArea.y + 10, 20, GRAY);
        } else {
            if (state->first_frame) {
                do {
                    decode_packet(state->media);
                } while (!is_frame_video(state->media));

                UpdateTexture(videoTexture, state->media->dst_frame->data[0]);

                state->first_frame = 0;
            }
        }

        DrawTexture(videoTexture, videoArea.x, videoArea.y, WHITE);


    Anyway, this is what I get when an mp4 file is dropped: [screenshot of the raylib window]


    It seems like an alignment issue, maybe? Can someone point me in the right direction to solve this problem correctly?

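    Not part of the original post, but one plausible direction, under the assumption that a size mismatch is the cause: the texture is created at videoArea.width x videoArea.height, while dst_frame (and the sws_scale output) is allocated at the decoded frame's width x height, so UpdateTexture() reads rows of the wrong length and the image comes out skewed, which looks exactly like an "alignment" problem. Below is a minimal sketch that sizes the sws_scale destination to the texture instead (a fragment reusing the post's media/videoArea/videoTexture variables; texW and texH are names introduced here):

        /* Assumed includes: <stdio.h>, <libavutil/imgutils.h>,
         * <libswscale/swscale.h>, "raylib.h" */
        int texW = (int)videoArea.width;
        int texH = (int)videoArea.height;

        /* Allocate the RGBA destination at the texture's size, not the
         * decoded frame's size. */
        if (av_image_alloc((uint8_t **)media->dst_frame->data,
                           media->dst_frame->linesize,
                           texW, texH, AV_PIX_FMT_RGBA, 1) < 0) {
            fprintf(stderr, "Failed to setup dest image\n");
            return -1;
        }

        media->sws_ctx = sws_getContext(
            media->ctxs[0]->width, media->ctxs[0]->height, media->ctxs[0]->pix_fmt,
            texW, texH, AV_PIX_FMT_RGBA,   /* destination matches the texture */
            SWS_BILINEAR, NULL, NULL, NULL);

        /* Later, after decoding a video frame: */
        sws_scale(media->sws_ctx, (const uint8_t *const *)media->frame->data,
                  media->frame->linesize, 0, media->frame->height,
                  media->dst_frame->data, media->dst_frame->linesize);

        UpdateTexture(videoTexture, media->dst_frame->data[0]);

    Equivalently, the Image passed to LoadTextureFromImage() could be created at media->ctxs[0]->width x height and the texture drawn scaled into videoArea with DrawTexturePro().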