
Media (91)

Other articles (63)

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011, by

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • Permissions overridden by plugins

    27 April 2010, by

    MediaSPIP core
    autoriser_auteur_modifier() so that visitors can edit their own information on the authors page

On other sites (10540)

  • libav : Store h264 frames in mp4 container

    25 January 2024, by ImJustACowLol

    I'm making a C++ application that retrieves frames from a camera and then encodes each frame with an H.264 encoder (not using libav). Each encoded H.264 frame is then kept in memory as a void *mem, as I need to do several things with it.

    One of the things I need to do is store the frames (the void *mem pointers) in a .mp4 container using libavcodec/libavformat. I do NOT want to transcode each frame; I just want to store them directly in the MP4 container.

    Preferably, for each individual frame that I push through, I would get the resulting data as a return value from the function (not sure if this is possible?). If it is not, then writing to a file directly is OK as well.

    How does one go about doing this with libav?

    The only part I have got so far, and where I'm getting stuck, is this:

/*
 * Some private fields accessible in MP4Muxer:
 * int frameWidth_, frameHeight_, frameRate_, srcBitRate_;
 */

void MP4Muxer::muxFrame(void *mem, size_t len, int64_t timestamp, bool keyFrame) {
    const AVOutputFormat* outputFormat = av_guess_format("mp4", NULL, NULL);
    AVFormatContext* outputFormatContext = avformat_alloc_context();
    outputFormatContext->oformat = outputFormat;
    AVStream* videoStream = avformat_new_stream(outputFormatContext, NULL);

    videoStream->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
    videoStream->codecpar->codec_id = AV_CODEC_ID_H264;
    videoStream->codecpar->width = frameWidth_;
    videoStream->codecpar->height = frameHeight_;
    // Brace initialization instead of the C compound literal
    // (AVRational){...}, which is not standard C++:
    videoStream->avg_frame_rate = AVRational{frameRate_, 1};
    videoStream->time_base = AVRational{1, 90000};
}


    How do I continue from here? Are there any good resources I can follow? The resources I found online all either write the output directly to a file or read input directly from streams/files, so I have a hard time translating them to my needs.
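    One possible continuation, sketched under assumptions the post does not state: the muxer is split into hypothetical openOutput()/muxFrame()/closeOutput() steps, fmtCtx_ and videoStream_ are members, and the encoder's SPS/PPS are available to fill codecpar->extradata. Error handling is omitted throughout.

```cpp
// Hypothetical sketch, not the post author's code.
extern "C" {
#include <libavformat/avformat.h>
}

void MP4Muxer::openOutput(const char *filename) {
    avformat_alloc_output_context2(&fmtCtx_, NULL, "mp4", filename);
    videoStream_ = avformat_new_stream(fmtCtx_, NULL);
    videoStream_->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
    videoStream_->codecpar->codec_id   = AV_CODEC_ID_H264;
    videoStream_->codecpar->width      = frameWidth_;
    videoStream_->codecpar->height     = frameHeight_;
    videoStream_->time_base            = AVRational{1, 90000};
    // TODO: copy the encoder's SPS/PPS into videoStream_->codecpar->extradata;
    // MP4 stores them in the avcC box and expects length-prefixed (AVCC) NAL
    // units, so Annex-B start codes may need converting.
    avio_open(&fmtCtx_->pb, filename, AVIO_FLAG_WRITE);
    avformat_write_header(fmtCtx_, NULL);  // may adjust the stream time_base
}

void MP4Muxer::muxFrame(void *mem, size_t len, int64_t timestamp, bool keyFrame) {
    AVPacket *pkt = av_packet_alloc();
    pkt->data = static_cast<uint8_t *>(mem);
    pkt->size = static_cast<int>(len);
    pkt->stream_index = videoStream_->index;
    pkt->pts = pkt->dts = timestamp;  // must be in the stream's time_base
    if (keyFrame)
        pkt->flags |= AV_PKT_FLAG_KEY;
    av_interleaved_write_frame(fmtCtx_, pkt);  // copies non-refcounted data
    av_packet_free(&pkt);
}

void MP4Muxer::closeOutput() {
    av_write_trailer(fmtCtx_);  // finalizes the moov box; required for a valid MP4
    avio_closep(&fmtCtx_->pb);
    avformat_free_context(fmtCtx_);
}
```

    To get the muxed bytes back in memory instead of a file, a custom AVIOContext created with avio_alloc_context() and a write callback can replace the avio_open() call.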

  • aarch64 : Add NEON optimizations for 10 and 12 bit vp9 loop filter

    5 January 2017, by Martin Storsjö
    aarch64 : Add NEON optimizations for 10 and 12 bit vp9 loop filter
    This work is sponsored by, and copyright, Google.

    This is similar to the arm version, but due to the larger registers
    on aarch64, we can do 8 pixels at a time for all filter sizes.

    Examples of runtimes vs the 32 bit version, on a Cortex A53:
    ARM AArch64
    vp9_loop_filter_h_4_8_10bpp_neon : 213.2 172.6
    vp9_loop_filter_h_8_8_10bpp_neon : 281.2 244.2
    vp9_loop_filter_h_16_8_10bpp_neon : 657.0 444.5
    vp9_loop_filter_h_16_16_10bpp_neon : 1280.4 877.7
    vp9_loop_filter_mix2_h_44_16_10bpp_neon : 397.7 358.0
    vp9_loop_filter_mix2_h_48_16_10bpp_neon : 465.7 429.0
    vp9_loop_filter_mix2_h_84_16_10bpp_neon : 465.7 428.0
    vp9_loop_filter_mix2_h_88_16_10bpp_neon : 533.7 499.0
    vp9_loop_filter_mix2_v_44_16_10bpp_neon : 271.5 244.0
    vp9_loop_filter_mix2_v_48_16_10bpp_neon : 330.0 305.0
    vp9_loop_filter_mix2_v_84_16_10bpp_neon : 329.0 306.0
    vp9_loop_filter_mix2_v_88_16_10bpp_neon : 386.0 365.0
    vp9_loop_filter_v_4_8_10bpp_neon : 150.0 115.2
    vp9_loop_filter_v_8_8_10bpp_neon : 209.0 175.5
    vp9_loop_filter_v_16_8_10bpp_neon : 492.7 345.2
    vp9_loop_filter_v_16_16_10bpp_neon : 951.0 682.7

    This is significantly faster than the ARM version in almost
    all cases except for the mix2 functions.

    Based on START_TIMER/STOP_TIMER wrapping around a few individual
    functions, the speedup vs C code is around 2-3x.
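
    For reference, FFmpeg's START_TIMER/STOP_TIMER macros (from the internal header libavutil/timer.h) wrap a region and log cycle counts under a label; the wrapped function below is a hypothetical stand-in for the loop filter being measured:

```c
#include "libavutil/timer.h"   /* FFmpeg-internal header */

static void bench_once(void)
{
    START_TIMER
    function_under_test();      /* hypothetical: the routine being measured */
    STOP_TIMER("loop_filter")   /* prints elapsed cycles under this label */
}
```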

    Signed-off-by: Martin Storsjö <martin@martin.st>

    • [DH] libavcodec/aarch64/Makefile
    • [DH] libavcodec/aarch64/vp9dsp_init_16bpp_aarch64_template.c
    • [DH] libavcodec/aarch64/vp9lpf_16bpp_neon.S
  • Moviepy has issues when concatenating ImageClips of different dimensions

    22 March 2021, by Lysander Cox

    Example of the issues : https://drive.google.com/file/d/1WxfYtDTD0kc_4WQzzvB6QXkZWo-e2Vuk/view?usp=sharing


    Here's the code that led to the issue :


def fragmentConcat(comment, filePrefix):
    finalClips = []
    dirName = filePrefix + comment['id']
    vidClips = [mpy.VideoFileClip(dirName + '/' + file) for file
                in natsorted(os.listdir(dirName))]

    finalClip = mpy.concatenate_videoclips(vidClips, method = "compose")
    finalClips.append(finalClip)

    if 'replies' in comment:
        for reply in comment['replies']:
            finalClips += fragmentConcat(reply, filePrefix)

    return finalClips

def finalVideoMaker(thread):
    fragmentGen(thread)
    filePrefix = thread['id'] + '/videos/'

    #Clips of comments and their children being read aloud.
    commentClips = []

    for comment in thread['comments']:
        commentClipFrags = fragmentConcat(comment, filePrefix)
        commentClip = mpy.concatenate_videoclips(commentClipFrags, method = "compose")
        commentClips.append(commentClip)

        #1 second of static to separate clips.
        staticVid = mpy.VideoFileClip('assets/static.mp4')
        commentClips.append(staticVid)

    finalVid = mpy.concatenate_videoclips(commentClips)
    finalVid.write_videofile(thread['id'] + '/final.mp4')


    I'm certain the issue arises somewhere in here, because the individual video "fragments" (which are concatenated here) do not exhibit the problem shown in the clip.


    I have tried adding and removing the method = "compose" parameter; it does not seem to have an effect. How can I resolve this? Thanks.
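
    One likely fix, sketched here as a hypothesis: when clips have mixed dimensions, normalize them all to one shared canvas size before concatenating. The helpers common_canvas and pad_to_even below are illustrative, not MoviePy API; the MoviePy calls in the comment assume MoviePy 1.x.

```python
def common_canvas(sizes):
    """Return a (w, h) canvas large enough to hold every (w, h) in sizes."""
    return (max(w for w, _ in sizes), max(h for _, h in sizes))

def pad_to_even(size):
    """H.264 encoders generally require even frame dimensions."""
    w, h = size
    return (w + w % 2, h + h % 2)

# With moviepy installed, the fragments would then be normalized before
# concatenate_videoclips, e.g.:
#
#   target = pad_to_even(common_canvas([c.size for c in vidClips]))
#   vidClips = [c.resize(target) for c in vidClips]
#   finalClip = mpy.concatenate_videoclips(vidClips)
```

    Note that resize() stretches clips with a different aspect ratio; padding each clip onto the canvas (e.g. with margin() or on_color()) preserves aspect at the cost of borders.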
