Advanced search

Media (1)

Keyword: - Tags -/ipad

Other articles (72)

  • Customize by adding your logo, banner or background image

    5 September 2013

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Emballe médias: what is it for?

    4 February 2011

    This plugin is designed to manage sites that publish documents of all types online.
    It creates "media" items, namely: a "media" is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only one document can be linked to a "media" article;

  • Custom menus

    14 November 2010

    MediaSPIP uses the Menus plugin to manage several configurable navigation menus.
    This lets channel administrators configure these menus in fine detail.
    Menus created when the site is initialized
    By default, three menus are created automatically when the site is initialized: The main menu; Identifier: barrenav; This menu is generally inserted at the top of the page after the header block, and its identifier makes it compatible with templates based on Zpip; (...)

On other sites (9066)

  • Why doesn't the ffmpeg output display the stream in the browser? [closed]

    10 May 2024, by Tebyy

    Why is it that when I create a livestream in Python using ffmpeg and then open the page in a browser, the page keeps loading continuously, while in the PyCharm logs I see binary data? No errors are displayed, and the code looks correct to me. I even tried saving the output to a file for testing, and when I play that video everything works fine. Does anyone know what might be wrong here?

    Code:

def generate_frames():
    cap = cv2.VideoCapture(os.path.normpath(app_root_dir().joinpath("data/temp", "video-979257305707693982.mp4")))
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break

        yield frame


@app.route('/video_feed')
def video_feed():
    ffmpeg_command = [
        'ffmpeg', '-f', 'rawvideo', '-pix_fmt', 'bgr24',
        '-s:v', '1920x1080', '-r', '60',
        '-i', '-', '-vf', 'setpts=2.5*PTS', # Video Speed
        '-c:v', 'libvpx-vp9', '-g', '60', '-keyint_min', '60',
        '-b:v', '6M', '-minrate', '4M', '-maxrate', '12M', '-bufsize', '8M',
        '-crf', '0', '-deadline', 'realtime', '-tune', 'psnr', '-quality', 'good',
        '-tile-columns', '6', '-threads', '8', '-lag-in-frames', '16',
        '-f', 'webm', '-'
    ]
    ffmpeg_process = subprocess.Popen(ffmpeg_command, stdin=subprocess.PIPE, stderr=subprocess.PIPE, bufsize=-1)
    frames_generator = generate_frames()
    for frame in frames_generator:
        ffmpeg_process.stdin.write(frame)
        ffmpeg_process.stdin.flush()

    ffmpeg_process.stdin.close()
    ffmpeg_process.wait()

    def generate_video_stream(process):
        startTime = time.time()
        buffer = []
        sentBurst = False
        for chunk in iter(lambda: process.stderr.read(4096), b''):
            buffer.append(chunk)

            # Minimum buffer time, 3 seconds
            if sentBurst is False and time.time() > startTime + 3 and len(buffer) > 0:
                sentBurst = True
                for i in range(0, len(buffer) - 2):
                    print("Send initial burst #", i)
                    yield buffer.pop(0)

            elif time.time() > startTime + 3 and len(buffer) > 0:
                yield buffer.pop(0)

            process.poll()
            if isinstance(process.returncode, int):
                if process.returncode > 0:
                    print('FFmpeg Error', process.returncode)

                break

    return Response(stream_with_context(generate_video_stream(ffmpeg_process)), mimetype='video/webm', content_type="video/webm; codecs=vp9", headers=Headers([("Connection", "close")]))
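
    Two things stand out here, offered as a sketch rather than a confirmed fix: the encoded WebM leaves FFmpeg on stdout (the trailing '-' in the command), yet the generator reads process.stderr, which only carries FFmpeg's log; and every frame is written to stdin before the Response is even constructed, so nothing is streamed until encoding finishes. A minimal restructuring under those assumptions, reusing generate_frames and the ffmpeg_command list from the question:

    import subprocess
    import threading

    from flask import Response, stream_with_context

    def feed_frames(process):
        # Feed raw frames to ffmpeg in the background so reading its
        # stdout does not deadlock once the OS pipe buffer fills up.
        for frame in generate_frames():
            process.stdin.write(frame.tobytes())
        process.stdin.close()

    @app.route('/video_feed')
    def video_feed():
        process = subprocess.Popen(
            ffmpeg_command,            # the same command list as above
            stdin=subprocess.PIPE,
            stdout=subprocess.PIPE,    # '-f webm -' emits the encoded stream here
        )
        threading.Thread(target=feed_frames, args=(process,), daemon=True).start()

        def stream():
            # Relay the encoded WebM from stdout; stderr only carries the log.
            for chunk in iter(lambda: process.stdout.read(4096), b''):
                yield chunk

        return Response(stream_with_context(stream()), mimetype='video/webm')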


  • FFmpeg - MJPEG decoding - getting different values

    27 December 2016, by ahmadh

    I have a set of JPEG frames that I am muxing into an AVI, which gives me an MJPEG video. This is the command I run on the console:

    ffmpeg -y -start_number 0 -i %06d.JPEG -codec copy vid.avi

    When I try to demux the video using the FFmpeg C API, I get frames whose values are slightly different. The demuxing code looks something like this:

    AVFormatContext* fmt_ctx = NULL;
    AVCodecContext* cdc_ctx = NULL;
    AVCodec* vid_cdc = NULL;
    int ret;
    unsigned int height, width;

    ....
    // read_nframes is the number of frames to read
    output_arr = new unsigned char [height * width * 3 *
                                   sizeof(unsigned char) * read_nframes];

    avcodec_open2(cdc_ctx, vid_cdc, NULL);

    int num_bytes;
    uint8_t* buffer = NULL;
    const AVPixelFormat out_format = AV_PIX_FMT_RGB24;

    num_bytes = av_image_get_buffer_size(out_format, width, height, 1);
    buffer = (uint8_t*)av_malloc(num_bytes * sizeof(uint8_t));

    AVFrame* vid_frame = NULL;
    vid_frame = av_frame_alloc();
    AVFrame* conv_frame = NULL;
    conv_frame = av_frame_alloc();

    av_image_fill_arrays(conv_frame->data, conv_frame->linesize, buffer,
                        out_format, width, height, 1);

    struct SwsContext *sws_ctx = NULL;
    sws_ctx = sws_getContext(width, height, cdc_ctx->pix_fmt,
                            width, height, out_format,
                            SWS_BILINEAR, NULL,NULL,NULL);

    int frame_num = 0;
    AVPacket vid_pckt;
    while (av_read_frame(fmt_ctx, &vid_pckt) >=0) {
       ret = avcodec_send_packet(cdc_ctx, &vid_pckt);
       if (ret < 0)
           break;

       ret = avcodec_receive_frame(cdc_ctx, vid_frame);
       if (ret < 0 && ret != AVERROR(EAGAIN) && ret != AVERROR_EOF)
           break;
       if (ret >= 0) {
           // convert the decoded frame from its native format to packed RGB24
           sws_scale(sws_ctx, vid_frame->data,
                     vid_frame->linesize, 0, vid_frame->height,
                     conv_frame->data, conv_frame->linesize);

           unsigned char* r_ptr = output_arr +
               (height * width * sizeof(unsigned char) * 3 * frame_num);
           unsigned char* g_ptr = r_ptr + (height * width * sizeof(unsigned char));
           unsigned char* b_ptr = g_ptr + (height * width * sizeof(unsigned char));
           unsigned int pxl_i = 0;

           for (unsigned int r = 0; r < height; ++r) {
               uint8_t* avframe_r = conv_frame->data[0] + r*conv_frame->linesize[0];
               for (unsigned int c = 0; c < width; ++c) {
                   r_ptr[pxl_i] = avframe_r[0];
                   g_ptr[pxl_i]   = avframe_r[1];
                   b_ptr[pxl_i]   = avframe_r[2];
                   avframe_r += 3;
                   ++pxl_i;
               }
           }

           ++frame_num;

           if (frame_num >= read_nframes)
               break;
       }
    }

    ...

    In my experience around two-thirds of the pixel values differ, each by ±1 (in a range of [0,255]). I am wondering: is this due to some decoding scheme FFmpeg uses for reading JPEG frames? I tried encoding and decoding PNG frames, and that works perfectly fine.

    In short, my goal is to get the same pixel-by-pixel values for each JPEG frame as I would have gotten by reading the JPEG images directly. Here is the stand-alone code I used. It includes CMake files to build the code, plus a couple of JPEG frames and the converted AVI file to reproduce the problem (pass --filetype png to test the PNG decoding).
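
    The ±1 differences are consistent with JPEG decoding not being bit-exact: the JPEG standard allows a small tolerance in the inverse DCT, so two conformant decoders (FFmpeg's MJPEG decoder versus libjpeg, for example) can legitimately disagree in the least significant bit, while PNG is lossless and decodes identically everywhere. As an illustration only, here is one way to compare two decoders from Python ("frame.jpeg" is a placeholder for one of the frames; depending on which backends are installed, the difference may be zero or ±1):

    import numpy as np
    import cv2
    from PIL import Image

    # Decode the same JPEG file with two independent decoders.
    a = cv2.imread("frame.jpeg")[:, :, ::-1]                  # OpenCV, BGR -> RGB
    b = np.asarray(Image.open("frame.jpeg").convert("RGB"))   # Pillow

    # Nonzero entries reveal decoder-dependent pixel values.
    diff = np.abs(a.astype(np.int16) - b.astype(np.int16))
    print("fraction of differing pixels:", float((diff > 0).mean()))
    print("max per-channel difference:", int(diff.max()))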

  • FFmpeg can't recognize 3 channels with 32 bits each

    4 April 2022, by Chryfi

    I am writing the linearized depth buffer of a game to OpenEXR using FFmpeg. Unfortunately, FFmpeg does not fully adhere to the OpenEXR file specification (for instance, allowing an unsigned-integer channel), so I am writing a single float channel to OpenEXR, which is put into the green channel, with this command: -f rawvideo -pix_fmt grayf32be -s %WIDTH%x%HEIGHT% -r %FPS% -i - -vf %DEFVF% -preset ultrafast -tune zerolatency -qp 6 -compression zip1 -pix_fmt gbrpf32le %NAME%_depth_%d.exr.

    The float range is from 0F to 1F and it is linear. I can confirm that the calculation and linearization are correct by testing a 16-bit integer (per pixel component) PNG in the Blender compositor. The 16-bit integer data is written like this: short s = (short) (linearizeDepth(depth) * (Math.pow(2,16) - 1)), whereas for float the linearized value is written directly to OpenEXR without multiplying by anything.
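
    For concreteness, both paths store the same linearized value, just at different precisions: the PNG path quantizes it to a 16-bit integer, while the EXR path keeps the raw float. A small sketch of that difference (0.25 is an arbitrary already-linearized depth value):

    def quantize16(x):
        # 16-bit integer path: scale [0.0, 1.0] onto [0, 65535]
        return int(x * (2**16 - 1))

    x = 0.25
    print(quantize16(x))    # 16383 -- what the 16-bit PNG stores
    print(x)                # 0.25  -- what the float EXR stores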

    However, when viewing the OpenEXR file, it doesn't have the same "gradient" as the 16-bit PNG. Viewed side by side, the values near 0 do not appear linear, and they are not as dark as they should be, unlike in the 16-bit PNG (and yes, I set the image node to linear). Comparing it with 3D tracking data from the game, I can't reproduce the depth and can't mask things using the depth buffer, whereas with the PNG I can.

    How is it possible for a linear float range to turn out so different from a linear integer range in an image?

    UPDATE:

    I now write 3 channels to ffmpeg with this code:

    float f2 = this.linearizeDepth(depth);

buffer.putFloat(f2);
buffer.putFloat(0);
buffer.putFloat(0);


    The byte buffer has size width * height * 3 * 4: 3 channels of 4 bytes each. The command is now -f rawvideo -pix_fmt gbrpf32be -s %WIDTH%x%HEIGHT% -r %FPS% -i - -vf %DEFVF% -preset ultrafast -tune zerolatency -qp 6 -compression zip1 -pix_fmt gbrpf32le %NAME%_depth_%d.exr, which should mean that the input (byte buffer) is expected to contain 32-bit floats in 3 channels. This is how it turns out:

    FFmpeg is somehow splitting up the channels or something along those lines... could it be a bug, or is it my fault?
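
    One plausible culprit, offered as an assumption to verify rather than a diagnosis: gbrpf32be is a planar format (the "p" in gbrp), so for every frame FFmpeg expects the entire G plane, then the entire B plane, then the entire R plane, whereas putFloat(f2); putFloat(0); putFloat(0) writes interleaved G,B,R triples pixel by pixel. A minimal numpy sketch of the layout the command would expect (the frame size and depth array are placeholders):

    import numpy as np

    h, w = 1080, 1920                # placeholder frame size
    depth = np.random.rand(h, w)     # stand-in for the linearized depth values

    g = depth.astype(">f4")          # big-endian float32, matching gbrpf32be
    b = np.zeros((h, w), dtype=">f4")
    r = np.zeros((h, w), dtype=">f4")

    # Planar layout: whole G plane, then whole B plane, then whole R plane.
    frame_bytes = g.tobytes() + b.tobytes() + r.tobytes()
    # ffmpeg_process.stdin.write(frame_bytes)   # hypothetical pipe into ffmpeg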