Advanced search

Media (0)


No media matching your criteria is available on the site.

Other articles (70)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects / individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • Other interesting software

    13 April 2011

    We don’t claim to be the only ones doing what we do, and certainly not the best at it. We just try to do it well and to keep getting better.
    The following list covers software that does more or less the same thing as MediaSPIP, or that MediaSPIP more or less tries to emulate.
    We don’t know them well and haven’t tried them, but you can take a peek.
    Videopress
    Website: http://videopress.com/
    License: GNU/GPL v2
    Source code: (...)

On other sites (7277)

  • Converting uint8_t data to AVFrame with FFmpeg

    30 October 2017, by J.Lefebvre

    I am currently working in C++ with the Autodesk 3DStudio Max 2014 SDK (toolset 100) and the FFmpeg library in Visual Studio 2015, trying to convert a DIB (Device Independent Bitmap) to a uint8_t pointer array and then convert that data to an AVFrame.

    I don’t get any errors, but my video is still black and has no metadata (no time display, etc.).

    I did approximately the same thing in a Visual Studio console application that converts a JPEG image sequence from disk, and that works fine.
    (The only difference is that instead of converting JPEG images to AVFrames with the FFmpeg library, I try to convert raw data to an AVFrame.)

    So I think the problem is either in the DIB-to-uint8_t conversion or in the uint8_t-to-AVFrame conversion.
    (The second is more plausible, because I used the SFML library to display my RGB uint8_t* data in a window for debugging, and that works fine.)
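
    For reference, a minimal sketch of what I understand the raw-data-to-AVFrame path should look like, assuming a tightly packed RGBA buffer (4 bytes per pixel, no row padding) and an already allocated YUV420P destination frame (FillFrameFromRgba is just an illustrative name):

    #include <libavutil/frame.h>
    #include <libswscale/swscale.h>

    // Sketch: convert a packed RGBA buffer into an already allocated
    // AV_PIX_FMT_YUV420P AVFrame with sws_scale.
    static int FillFrameFromRgba(const uint8_t *rgba, int width, int height, AVFrame *frame)
    {
        struct SwsContext *ctx = sws_getContext(width, height, AV_PIX_FMT_RGBA,
                                                width, height, AV_PIX_FMT_YUV420P,
                                                SWS_BILINEAR, NULL, NULL, NULL);
        if (!ctx)
            return -1;

        const uint8_t *src[1]   = { rgba };
        const int src_stride[1] = { 4 * width };   // RGBA is 4 bytes per pixel, not 3

        sws_scale(ctx, src, src_stride, 0, height, frame->data, frame->linesize);
        sws_freeContext(ctx);
        return 0;
    }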

    I first initialize the FFmpeg library:

    This function is called once at the beginning.

    int Converter::Initialize(AVCodecID codec_id, int width, int height, int fps, const char *filename)
    {
       avcodec_register_all();
       av_register_all();

       AVCodec *codec;
       inputFrame = NULL;
       codecContext = NULL;
       pkt = NULL;
       file = NULL;
       outputFilename = new char[strlen(filename) + 1]();
       *outputFilename = '\0';
       strcpy(outputFilename, filename);

       int ret;

       //Initializing AVCodecContext and getting PixelFormat supported by encoder
       codec = avcodec_find_encoder(codec_id);
       if (!codec)
           return 1;

       AVPixelFormat pixFormat = codec->pix_fmts[0];
       codecContext = avcodec_alloc_context3(codec);
       if (!codecContext)
           return 1;

       codecContext->bit_rate = 400000;
       codecContext->width = width;
       codecContext->height = height;
       codecContext->time_base.num = 1;
       codecContext->time_base.den = fps;
       codecContext->gop_size = 10;
       codecContext->max_b_frames = 1;
       codecContext->pix_fmt = pixFormat;

       if (codec_id == AV_CODEC_ID_H264)
           av_opt_set(codecContext->priv_data, "preset", "slow", 0);

       //Actually opening the encoder
       if (avcodec_open2(codecContext, codec, NULL) < 0)
           return 1;

       file = fopen(outputFilename, "wb");
       if (!file)
           return 1;

       inputFrame = av_frame_alloc();
       inputFrame->format = codecContext->pix_fmt;
       inputFrame->width = codecContext->width;
       inputFrame->height = codecContext->height;

       ret = av_image_alloc(inputFrame->data, inputFrame->linesize, codecContext->width, codecContext->height, codecContext->pix_fmt, 32);

       if (ret < 0)
           return 1;

       return 0;
    }

    Then for each frame, I get the DIB and convert it to a uint8_t* with this function:

    uint8_t* Util::ToUint8_t(RGBQUAD *data, int width, int height)
    {
       uint8_t* buf = (uint8_t*)data;

       int imageSize = width * height;
       size_t rgbquad_size = sizeof(RGBQUAD);
       size_t total_bytes = imageSize * rgbquad_size;
       uint8_t * pCopyBuffer = new uint8_t[total_bytes];

       for (int x = 0; x < width; x++)
       {
           for (int y = 0; y < height; y++)
           {
               int index = (x + width * y) * rgbquad_size;
               int invertIndex = (x + width* (height - y - 1)) * rgbquad_size;

               //BGRA to RGBA
               pCopyBuffer[index] = buf[invertIndex + 2];
               pCopyBuffer[index + 1] = buf[invertIndex + 1];
               pCopyBuffer[index + 2] = buf[invertIndex];
               pCopyBuffer[index + 3] = 0xFF;
           }
       }

       return pCopyBuffer;
    }

    void GetDIBBuffer(Interface* ip, BITMAPINFO *bmi, uint8_t** outBuffer)
    {
       int size;

       ViewExp& view = ip->GetActiveViewExp();

       view.getGW()->getDIB(NULL, &size);

       bmi = (BITMAPINFO *)malloc(size);
       BITMAPINFOHEADER *bmih = (BITMAPINFOHEADER *)bmi;
       view.getGW()->getDIB(bmi, &size);

       uint8_t * pCopyBuffer = Util::ToUint8_t(bmi->bmiColors, bmih->biWidth, bmih->biHeight);

       *outBuffer = pCopyBuffer;
    }

    This function is used to get the DIB:

    void GetViewportDIB(Interface* ip, BITMAPINFO *bmi, BITMAPINFOHEADER *bmih, BitmapInfo biFile, Bitmap *map)
    {
       int size;

       if (!biFile.Name()[0])
           return;

       ViewExp& view = ip->GetActiveViewExp();

       view.getGW()->getDIB(NULL, &size);

       bmi = (BITMAPINFO *)malloc(size);
       bmih = (BITMAPINFOHEADER *)bmi;

       view.getGW()->getDIB(bmi, &size);

       biFile.SetWidth((WORD)bmih->biWidth);
       biFile.SetHeight((WORD)bmih->biHeight);
       biFile.SetType(BMM_TRUE_32);

       map = TheManager->Create(&biFile);
       map->OpenOutput(&biFile);
       map->FromDib(bmi);
       map->Write(&biFile);
       map->Close(&biFile);
    }

    And then comes the conversion to an AVFrame and the video encoding.

    The EncodeFromMem function is called for each frame:

    int Converter::EncodeFromMem(const char *outputDir, int frameNumber, uint8_t* data)
    {
       int ret;

       inputFrame->pts = frameNumber;
       EncodeFrame(data, codecContext, inputFrame, &pkt, file);

       return 0;
    }

    static void RgbToYuv(uint8_t *rgb, AVCodecContext *c, AVFrame *frame)
    {
       struct SwsContext *swsCtx = NULL;
       const int in_linesize[1] = { 3 * c->width };// RGB stride
       swsCtx = sws_getCachedContext(swsCtx, c->width, c->height, AV_PIX_FMT_RGB24, c->width, c->height, AV_PIX_FMT_YUV420P, 0, 0, 0, 0);
       sws_scale(swsCtx, (const uint8_t * const *)&rgb, in_linesize, 0, c->height, frame->data, frame->linesize);
    }

    static void EncodeFrame(uint8_t *rgb, AVCodecContext *c, AVFrame *frame, AVPacket **pkt, FILE *file)
    {
       int ret, got_output;

       RgbToYuv(rgb, c, frame);

       *pkt = av_packet_alloc();
       av_init_packet(*pkt);
       (*pkt)->data = NULL;
       (*pkt)->size = 0;

       ret = avcodec_encode_video2(c, *pkt, frame, &got_output);
       if (ret < 0)
       {
           fprintf(stderr, "Error encoding frame\n");
           exit(1);
       }
       if (got_output)
       {
           fwrite((*pkt)->data, 1, (*pkt)->size, file);
           av_packet_unref(*pkt);
       }
    }
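
    As a side note, avcodec_encode_video2 is deprecated in newer FFmpeg releases; a minimal sketch of the equivalent avcodec_send_frame / avcodec_receive_packet loop (EncodeWithSendReceive is just an illustrative name, reusing the codec context and file from above) would be:

    #include <stdio.h>
    #include <libavcodec/avcodec.h>

    // Sketch: encode one frame (or flush the encoder when frame == NULL)
    // using the send/receive API, appending the packets to the raw file.
    static int EncodeWithSendReceive(AVCodecContext *c, AVFrame *frame, FILE *file)
    {
        int ret = avcodec_send_frame(c, frame);   // frame == NULL starts flushing
        if (ret < 0)
            return ret;

        AVPacket *pkt = av_packet_alloc();
        while ((ret = avcodec_receive_packet(c, pkt)) >= 0)
        {
            fwrite(pkt->data, 1, pkt->size, file);
            av_packet_unref(pkt);
        }
        av_packet_free(&pkt);

        // AVERROR(EAGAIN): encoder wants more input; AVERROR_EOF: fully flushed.
        return (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) ? 0 : ret;
    }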

    Finally, I have a function that writes the delayed packets and frees the memory.
    This function is called once at the end of the time range:

    int Converter::Finalize()
    {
       int ret, got_output;
       uint8_t endcode[] = { 0, 0, 1, 0xb7 };

       /* get the delayed frames */
       do
       {
           fflush(stdout);
           ret = avcodec_encode_video2(codecContext, pkt, NULL, &got_output);
           if (ret < 0)
           {
               fprintf(stderr, "Error encoding frame\n");
               return 1;
           }
           if (got_output)
           {
               fwrite(pkt->data, 1, pkt->size, file);
               av_packet_unref(pkt);
           }
       } while (got_output);

       fwrite(endcode, 1, sizeof(endcode), file);
       fclose(file);

       avcodec_close(codecContext);
       av_free(codecContext);

       av_frame_unref(inputFrame);
       av_frame_free(&inputFrame);
       //av_freep(&inputFrame->data[0]); //Crash

       delete[] outputFilename;
       outputFilename = 0;

       return 0;
    }
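
    (About the commented-out av_freep above: it crashes because it runs after av_frame_free has already freed and nulled inputFrame. A minimal sketch of a teardown order that should be safe, freeing the buffer from av_image_alloc before the frame itself:)

    av_freep(&inputFrame->data[0]);   // release the pixel buffer from av_image_alloc()
    av_frame_free(&inputFrame);       // then release the AVFrame wrapper itself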

    EDIT:

    I modified my RgbToYuv function and created another one to convert the YUV frame back to an RGB one.

    This does not really solve the problem, but it perhaps narrows it down to the YUV-to-RGB conversion.

    This is the result of the conversion from YUV to RGB:

    YuvToRgb result: https://img42.com/kHqpt+

    static void YuvToRgb(AVCodecContext *c, AVFrame *frame)
    {
       struct SwsContext *img_convert_ctx = sws_getContext(c->width, c->height, AV_PIX_FMT_YUV420P, c->width, c->height, AV_PIX_FMT_RGB24, SWS_BICUBIC, NULL, NULL, NULL);
       AVFrame * rgbPictInfo = av_frame_alloc();
       avpicture_fill((AVPicture*)rgbPictInfo, *(frame)->data, AV_PIX_FMT_RGB24, c->width, c->height);
       sws_scale(img_convert_ctx, frame->data, frame->linesize, 0, c->height, rgbPictInfo->data, rgbPictInfo->linesize);

       Util::DebugWindow(c->width, c->height, rgbPictInfo->data[0]);
    }
    static void RgbToYuv(uint8_t *rgb, AVCodecContext *c, AVFrame *frame)
    {
       AVFrame * rgbPictInfo = av_frame_alloc();
       avpicture_fill((AVPicture*)rgbPictInfo, rgb, AV_PIX_FMT_RGBA, c->width, c->height);

       struct SwsContext *swsCtx = sws_getContext(c->width, c->height, AV_PIX_FMT_RGBA, c->width, c->height, AV_PIX_FMT_YUV420P, SWS_BICUBIC, NULL, NULL, NULL);
       avpicture_fill((AVPicture*)frame, rgb, AV_PIX_FMT_YUV420P, c->width, c->height);    
       sws_scale(swsCtx, rgbPictInfo->data, rgbPictInfo->linesize, 0, c->height, frame->data, frame->linesize);

       YuvToRgb(c, frame);
    }
  • Accented characters are not recognized in python [closed]

    10 April 2023, by CorAnna

    I have a problem in my Python script: it should add subtitles to a video from an SRT file. The SRT file is written by another part of the script, but that part replaces accents and other special characters with a black square symbol containing a question mark. I think the problem lies in how this file is written, because when I burn the subtitles with ffmpeg, the sentences that contain an accented word are not written.

    


def video_audio_file_writer(video_file):

    videos_folder = "Video"
    audios_folder = "Audio"

    video_path = f"{videos_folder}\\{video_file}"

    video_name = Path(video_path).stem
    audio_name = f"{video_name}Audio"

    audio_path = f"{audios_folder}\\{audio_name}.wav"

    video = mp.VideoFileClip(video_path)
    audio = video.audio.write_audiofile(audio_path)

    return video_path, audio_path, video_name

def audio_file_transcription(audio_path, lang):

    model = whisper.load_model("base")
    tran = gt.Translator()

    audio_file = str(audio_path)

    options = dict(beam_size=5, best_of=5)
    translate = dict(task="translate", **options)
    result = model.transcribe(audio_file, **translate)

    return result

def audio_subtitles_transcription(result, video_name):

    subtitle_folder = "Content"
    subtitle_name = f"{video_name}Subtitle"
    subtitle_path_form = "srt"

    subtitle_path = f"{subtitle_folder}\\{subtitle_name}.{subtitle_path_form}"

    with open(os.path.join(subtitle_path), "w") as srt:
        # write_vtt(result["segments"], file=vtt)
        write_srt(result["segments"], file=srt)
            
    return subtitle_path

def video_subtitles(video_path, subtitle_path, video_name):

    video_subtitled_folder = "VideoSubtitles"
    video_subtitled_name = f"{video_name}Subtitles"
    video_subtitled_path = f"{video_subtitled_folder}\\{video_subtitled_name}.mp4"

    video_path_b = bytes(video_path, 'utf-8')
    subtitle_path_b = bytes(subtitle_path, 'utf-8')
    video_subtitled_path_b = bytes(video_subtitled_path, 'utf-8')

    path_abs_b = os.getcwdb() + b"\\"

    path_abs_bd = path_abs_b.decode('utf-8')
    video_path_bd= video_path_b.decode('utf-8')
    subtitle_path_bd = subtitle_path_b.decode('utf-8')
    video_subtitled_path_bd = video_subtitled_path_b.decode('utf-8')

    video_path_abs = str(path_abs_bd + video_path_bd)
    subtitle_path_abs = str(path_abs_bd + subtitle_path_bd).replace("\\", "\\\\").replace(":", "\\:")
    video_subtitled_path_abs = str(path_abs_bd + video_subtitled_path_bd)

    time.sleep(3)

    os.system(f"ffmpeg -i {video_path_abs} -vf subtitles='{subtitle_path_abs}' -y {video_subtitled_path_abs}")

    return video_subtitled_path_abs, video_path_abs, subtitle_path_abs

if __name__ == "__main__":

    video_path, audio_path, video_name = video_audio_file_writer(video_file="ChiIng.mp4")
    result = audio_file_transcription(audio_path=audio_path, lang="it")
    subtitle_path = audio_subtitles_transcription(result=result, video_name=video_name)
    video_subtitled_path_abs, video_path_abs, subtitle_path_abs = video_subtitles(video_path=video_path, subtitle_path=subtitle_path, video_name=video_name)
    
    print("Video Subtitled")


    


    Windows 11
    Python 3.10


  • Bit-field badness

    30 January 2010, by Mans — Compilers, Optimisation

    Consider the following C code, which is based on a real-world situation.

    struct bf1_31 {
        unsigned a:1;
        unsigned b:31;
    };

    void func(struct bf1_31 *p, int n, int a)
    {
        int i = 0;
        do {
            if (p[i].a)
                p[i].b += a;
        } while (++i < n);
    }

    How would we best write this in ARM assembler? This is how I would do it:

    func:
            ldr     r3,  [r0], #4
            tst     r3,  #1
            add     r3,  r3,  r2,  lsl #1
            strne   r3,  [r0, #-4]
            subs    r1,  r1,  #1
            bgt     func
            bx      lr
    

    The add instruction is unconditional to avoid a dependency on the comparison. Unrolling the loop would mask the latency of the ldr instruction as well, but that is outside the scope of this experiment.

    Now compile this code with gcc -march=armv5te -O3 and watch in horror:

    func:
            push    {r4}
            mov     ip, #0
            mov     r4, r2
    loop:
            ldrb    r3, [r0]
            add     ip, ip, #1
            tst     r3, #1
            ldrne   r3, [r0]
            andne   r2, r3, #1
            addne   r3, r4, r3, lsr #1
            orrne   r2, r2, r3, lsl #1
            strne   r2, [r0]
            cmp     ip, r1
            add     r0, r0, #4
            blt     loop
            pop     {r4}
            bx      lr
    

    This is nothing short of awful:

    • The same value is loaded from memory twice.
    • A complicated mask/shift/or operation is used where a simple shifted add would suffice.
    • Write-back addressing is not used.
    • The loop control counts up and compares instead of counting down.
    • Useless mov in the prologue; swapping the roles of r2 and r4 would avoid this.
    • Using lr in place of r4 would allow the return to be done with pop {pc}, saving one instruction (ignoring for the moment that no callee-saved registers are needed at all).

    Even for this trivial function the gcc-generated code is more than twice the optimal size and slower by approximately the same factor.

    The main issue I wanted to illustrate is the poor handling of bit-fields by gcc. When accessing bit-fields from memory, gcc issues a separate load for each field even when they are contained in the same aligned memory word. Although each load after the first will most likely hit L1 cache, this is still bad for several reasons (a single-load C version is sketched after this list):

    • Loads typically have a result latency of two or three cycles, compared to one cycle for data processing instructions. Any bit-field can be extracted from a register with two shifts, and on ARM the second of these can generally be folded into a following instruction as a shifted second operand. The ARMv6T2 instruction set also adds the SBFX and UBFX instructions for extracting any signed or unsigned bit-field in one cycle.
    • Most CPUs have more data processing units than load/store units. It is thus more likely for an ALU instruction than a load/store to issue without delay on a superscalar processor.
    • Redundant memory accesses can trigger early flushing of store buffers, rendering them less efficient.
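
    To make the cost concrete, the hand-written loop above corresponds to C along these lines, assuming the usual layout with a in the least-significant bit (as on ARM EABI) and treating each struct as a raw 32-bit word:

    /* Single-load equivalent of func(): both fields are read with one load
     * and b is updated with a plain shifted add, exactly as in the assembly. */
    void func_manual(unsigned *p, int n, int a)
    {
        int i = 0;
        do {
            unsigned w = p[i];                  /* one load covers both fields        */
            if (w & 1)                          /* bit-field a lives in bit 0         */
                p[i] = w + ((unsigned)a << 1);  /* p[i].b += a without a second load  */
        } while (++i < n);
    }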

    No gcc bashing is complete without a comparison with another compiler, so without further ado, here is the ARM RVCT output (armcc --cpu 5te -O3):

    func:
            mov     r3, #0
            push    {r4, lr}
    loop:
            ldr     ip, [r0, r3, lsl #2]
            tst     ip, #1
            addne   ip, ip, r2, lsl #1
            strne   ip, [r0, r3, lsl #2]
            add     r3, r3, #1
            cmp     r3, r1
            blt     loop
            pop     {r4, pc}
    

    This is much better, with the core loop using only one instruction more than my version. The loop control counts up, but at least that register is reused as an offset for the memory accesses. More remarkable is the push/pop of two registers that are never used. I had not expected to see this from RVCT.

    Even the best compilers are still no match for a human.