Advanced search

Media (1)

Keyword: - Tags -/wave

Other articles (64)

  • Customise by adding your logo, banner or background image

    5 September 2013

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013

    Present changes to your MédiaSPIP, or news about your projects, on your MédiaSPIP using the news section.
    In MédiaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customise the news creation form.
    News creation form: for a document of type news, the fields offered by default are: Publication date (customise the publication date) (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MédiaSpip is at version 0.2 or higher. If necessary, contact your MédiaSpip administrator to find out.

On other sites (12585)

  • Reading live output from FFMPEG using PHP

    29 September 2018, by user561787

    The problem I'm dealing with is getting the shell output of an ffmpeg command while it is being executed, and writing it to an HTML page using PHP.

    After some research I found a very similar request here: Update page content from live PHP and Python output using Ajax, which seemed to be perfect, but it's not working at all.

    The basic idea is to use an AJAX request to invoke the PHP script, which should execute the command and echo the content read live from the process, taking care to use this.readyState == 3 (otherwise the JS script would receive the response only upon completion).

    For the PHP section I tried using the code from the answer above (obviously adapted to my needs):

    function liveExecuteCommand($command){

        while (@ob_end_flush()); // end all output buffers, if any

        $proc = popen($command, 'r'); // 'r' reads the command's stdout

        $live_output     = "";
        $complete_output = "";

        while (!feof($proc))
        {
            $live_output     = fread($proc, 4096);
            $complete_output = $complete_output . $live_output;
            echo "<pre>$live_output</pre>";
            @flush();
        }

        pclose($proc);
    }

    And for the AJAX section I used:

    function getLiveStream(){
        var ajax = new XMLHttpRequest();
        ajax.onreadystatechange = function() {
            // readyState 3 fires repeatedly as partial response content arrives
            if (this.readyState == 3) {
                document.getElementById("result").innerHTML = this.responseText;
            }
        };
        var url = 'process/getlive';
        ajax.open('GET', url, true);
        ajax.send();
    }

    Which sadly doesn’t work.

    The command being executed is this: 'ffmpeg.exe -i "C:/Users/BACKUP/Desktop/xampp/htdocs/testarea/test.mp4" -map 0:0 -map 0:1 -c:v libx264 -preset fast -crf 26 -c:a libmp3lame -ar 24000 -q:a 5 "C:\Users\BACKUP\Desktop\xampp\htdocs\testarea\output/test.mkv"', which I tested and which works.

    When I run the HTML page and the AJAX script within it, the ffmpeg command doesn't even run, as I checked in Task Manager. It simply returns blank text.

    When I run the PHP script by itself, the command runs and the file is converted, but it doesn't echo anything at all.
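
    One plausible cause for that silent run (an observation, not from the original thread): ffmpeg writes its progress log to stderr, while popen($command, 'r') only reads stdout, so the loop sees nothing until the process ends. A minimal sketch of the same function with stderr merged into stdout; the function name is illustrative and the 2>&1 suffix is the only functional change:

    function liveExecuteCommandWithStderr($command){

        while (@ob_end_flush()); // end all output buffers, if any

        // merge stderr into stdout so popen() can read ffmpeg's progress lines
        $proc = popen($command . ' 2>&1', 'r');

        while (!feof($proc)) {
            echo '<pre>' . htmlspecialchars(fread($proc, 4096)) . '</pre>';
            @flush();
        }

        pclose($proc);
    }

    Even with flush(), server-side buffering (e.g. output compression in Apache, or PHP's output_buffering setting) can still delay the chunks reaching the browser.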

    After some more research I also found this page, which seems to be made for this exact purpose: https://github.com/4poc/php-live-transcode/blob/master/stream.php

    The relevant section is at the end; the code before it deals with ffmpeg-specific options. But it didn't work either, with the exact same outcome.

    Now I'm considering simply writing the output to a file and reading from it dynamically, but I'd really like to know why neither approach works for me.

    EDIT: PHP Execute shell command asynchronously and retrieve live output answers how to read content from a temporary file as it is being written, not directly from the process.
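
    For reference, a minimal sketch of that temporary-file variant (the file name and the endpoint are hypothetical): start ffmpeg with its log redirected, e.g. by appending 2> progress.txt to the command, and have the AJAX call poll a small script that returns the file's current contents:

    <?php
    // Polled endpoint: returns whatever ffmpeg has logged so far.
    // progress.txt is the hypothetical file ffmpeg's stderr is redirected to.
    echo nl2br(htmlspecialchars(@file_get_contents('progress.txt')));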

  • Blackmagic SDK - OpenGL Letterbox and FFMPEG

    18 March 2015, by Marco Reisacher

    So I dropped my idea of using the DirectShow filters, due to their lack of support for the video formats I need.
    The native API uses OpenGL, which I am a total beginner with.
    I stumbled upon the following problems:

    1. How to automatically apply a letterbox or pillarbox depending on the width and height of the frame that gets passed to OpenGL; I'm using bpctrlanchormap.h to auto-size everything, and I get squeezed/stretched images (see the viewport sketch right after this list).

    2. How to record a video of the OpenGL stream; I looked around and saw that ffmpeg should be able to do it, but I can't get it running (see the piping sketch after the sample code). It would also be nice to record microphone audio into the same file.
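
    For problem 1, a minimal sketch (not from the Blackmagic sample; the function name and parameters are hypothetical) of computing a centred, aspect-preserving viewport; calling it with the preview box's client size and the frame's dimensions before the frame is drawn avoids the squeeze/stretch:

     #include <windows.h>
     #include <gl/gl.h>

     // Fit a frame of frameW x frameH into a control of boxW x boxH without
     // distortion: shrink one axis, centre the viewport, and let the cleared
     // background supply the black bars.
     void setLetterboxViewport(int boxW, int boxH, int frameW, int frameH)
     {
         double frameAspect = (double)frameW / (double)frameH;
         double boxAspect   = (double)boxW  / (double)boxH;

         int vpW = boxW, vpH = boxH;
         if (boxAspect > frameAspect)
             vpW = (int)(boxH * frameAspect + 0.5);   // pillarbox: bars left/right
         else
             vpH = (int)(boxW / frameAspect + 0.5);   // letterbox: bars top/bottom

         glViewport((boxW - vpW) / 2, (boxH - vpH) / 2, vpW, vpH);
     }

    The incoming frame's size should be obtainable from IDeckLinkVideoFrame::GetWidth() and GetHeight() inside DrawFrame() below.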

    I’m using the Blackmagic "Capture Preview" sample.

    This is the source code that initialises the OpenGL renderer and passes the frames:

    #include "stdafx.h"
    #include <gl/gl.h>
    #include "PreviewWindow.h"

    PreviewWindow::PreviewWindow()
    : m_deckLinkScreenPreviewHelper(NULL), m_refCount(1), m_previewBox(NULL), m_previewBoxDC(NULL), m_openGLctx(NULL)
    {}

    PreviewWindow::~PreviewWindow()
    {
       if (m_deckLinkScreenPreviewHelper != NULL)
       {
           m_deckLinkScreenPreviewHelper->Release();
           m_deckLinkScreenPreviewHelper = NULL;
       }

       if (m_openGLctx != NULL)
       {
           wglDeleteContext(m_openGLctx);
           m_openGLctx = NULL;
       }

       if (m_previewBoxDC != NULL)
       {
           m_previewBox->ReleaseDC(m_previewBoxDC);
           m_previewBoxDC = NULL;
       }
    }

    bool        PreviewWindow::init(CStatic *previewBox)
    {
       m_previewBox = previewBox;

       // Create the DeckLink screen preview helper
       if (CoCreateInstance(CLSID_CDeckLinkGLScreenPreviewHelper, NULL, CLSCTX_ALL, IID_IDeckLinkGLScreenPreviewHelper, (void**)&m_deckLinkScreenPreviewHelper) != S_OK)
           return false;

       // Initialise OpenGL
       return initOpenGL();
    }

    bool        PreviewWindow::initOpenGL()
    {
       PIXELFORMATDESCRIPTOR   pixelFormatDesc;
       int                     pixelFormat;
       bool                    result = false;

       //
       // Here, we create an OpenGL context attached to the screen preview box
       // so we can use it later on when we need to draw preview frames.

       // Get the preview box drawing context
       m_previewBoxDC = m_previewBox->GetDC();
       if (m_previewBoxDC == NULL)
           return false;

       // Ensure the preview box DC uses ARGB pixel format
       ZeroMemory(&pixelFormatDesc, sizeof(pixelFormatDesc));
       pixelFormatDesc.nSize = sizeof(pixelFormatDesc);
       pixelFormatDesc.nVersion = 1;
       pixelFormatDesc.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL;
       pixelFormatDesc.iPixelType = PFD_TYPE_RGBA;
       pixelFormatDesc.cColorBits = 32;
       pixelFormatDesc.cDepthBits = 16;
       pixelFormatDesc.cAlphaBits = 8;
       pixelFormatDesc.iLayerType = PFD_MAIN_PLANE;
       pixelFormat = ChoosePixelFormat(m_previewBoxDC->m_hDC, &pixelFormatDesc);
       if (SetPixelFormat(m_previewBoxDC->m_hDC, pixelFormat, &pixelFormatDesc) == false)
           return false;

       // Create OpenGL rendering context
       m_openGLctx = wglCreateContext(m_previewBoxDC->m_hDC);
       if (m_openGLctx == NULL)
           return false;
       // Make the new OpenGL context the current rendering context so
       // we can initialise the DeckLink preview helper
       if (wglMakeCurrent(m_previewBoxDC->m_hDC, m_openGLctx) == FALSE)
           return false;

       if (m_deckLinkScreenPreviewHelper->InitializeGL() == S_OK)
           result = true;

       // Reset the OpenGL rendering context
       wglMakeCurrent(NULL, NULL);

       return result;
    }

    HRESULT         PreviewWindow::DrawFrame(IDeckLinkVideoFrame* theFrame)
    {
       // Make sure we are initialised
       if ((m_deckLinkScreenPreviewHelper == NULL) || (m_previewBoxDC == NULL) || (m_openGLctx == NULL))
           return E_FAIL;

       // First, pass the frame to the DeckLink screen preview helper
       m_deckLinkScreenPreviewHelper->SetFrame(theFrame);

       // Then set the OpenGL rendering context to the one we created before
       wglMakeCurrent(m_previewBoxDC->m_hDC, m_openGLctx);

       // and let the helper take care of the drawing
       m_deckLinkScreenPreviewHelper->PaintGL();

       // Last, reset the OpenGL rendering context
       wglMakeCurrent(NULL, NULL);

       return S_OK;
    }
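
    For problem 2, a sketch of one common approach (an assumption, not part of the SDK sample; the names, frame rate and ffmpeg flags are placeholders): read the rendered pixels back with glReadPixels() and pipe them into an ffmpeg process as raw video. OpenGL returns rows bottom-up, hence the vflip filter:

     #include <windows.h>
     #include <gl/gl.h>
     #include <cstdio>
     #include <vector>

     // Start ffmpeg reading raw RGBA frames from stdin ("-i -").
     static FILE* startRecorder(int w, int h)
     {
         char cmd[512];
         std::snprintf(cmd, sizeof(cmd),
             "ffmpeg -y -f rawvideo -pix_fmt rgba -s %dx%d -r 25 -i - "
             "-vf vflip -c:v libx264 -pix_fmt yuv420p capture.mp4", w, h);
         return _popen(cmd, "wb"); // binary pipe into ffmpeg's stdin
     }

     // Call once per frame, with the GL context current, after PaintGL().
     static void writeFrame(FILE* ff, int w, int h)
     {
         std::vector<unsigned char> buf((size_t)w * h * 4);
         glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, buf.data());
         fwrite(buf.data(), 1, buf.size(), ff);
     }

     // When recording stops, _pclose(ff) lets ffmpeg finalise the file.

    Microphone audio could presumably be added as a second ffmpeg input (e.g. DirectShow audio capture on Windows) muxed into the same file, though keeping it in sync with the piped video is a separate problem.
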
  • How AVCodecContext bitrate, framerate and timebase are used when encoding a single frame

    28 March 2023, by Cyrus

    I am trying to learn FFmpeg from examples, as I am on a tight schedule. The task is to encode a raw YUV image into JPEG format at a given width and height. I found examples on the official FFmpeg website, which turned out to be quite straightforward. However, there are some fields in AVCodecContext that I thought only make sense when encoding video (e.g. bit_rate, framerate, time_base, gop_size, max_b_frames, etc.).

    I understand at a high level what those values mean for video, but do I need to care about them when I just want a single image? For testing I am currently setting them to dummy values and it seems to work, but I want to make sure I am not making terrible assumptions that will break in the long run.

    EDIT:

    Here is the code I got. Most of it is copied and pasted from the examples, with some changes to replace old APIs with newer ones.

     #include "thumbnail.h"
     #include "libavcodec/avcodec.h"
     #include "libavutil/imgutils.h"
     #include <stdio.h>
     #include <fcntl.h>
     #include <unistd.h>

     void print_averror(int error_code) {
         char err_msg[100] = {0};
         av_strerror(error_code, err_msg, 100);
         printf("Reason: %s\n", err_msg);
     }

     ffmpeg_status_t save_yuv_as_jpeg(uint8_t* source_buffer, char* output_thumbnail_filename, int thumbnail_width, int thumbnail_height) {
         const AVCodec* mjpeg_codec = avcodec_find_encoder(AV_CODEC_ID_MJPEG);
         if (!mjpeg_codec) {
             printf("Codec for mjpeg cannot be found.\n");
             return FFMPEG_THUMBNAIL_CODEC_NOT_FOUND;
         }

         AVCodecContext* codec_ctx = avcodec_alloc_context3(mjpeg_codec);
         if (!codec_ctx) {
             printf("Codec context cannot be allocated for the given mjpeg codec.\n");
             return FFMPEG_THUMBNAIL_ALLOC_CONTEXT_FAILED;
         }

         AVPacket* pkt = av_packet_alloc();
         if (!pkt) {
             printf("Thumbnail packet cannot be allocated.\n");
             return FFMPEG_THUMBNAIL_ALLOC_PACKET_FAILED;
         }

         AVFrame* frame = av_frame_alloc();
         if (!frame) {
             printf("Thumbnail frame cannot be allocated.\n");
             return FFMPEG_THUMBNAIL_ALLOC_FRAME_FAILED;
         }

         // The part that I don't understand
         codec_ctx->bit_rate = 400000;
         codec_ctx->width = thumbnail_width;
         codec_ctx->height = thumbnail_height;
         codec_ctx->time_base = (AVRational){1, 25};
         codec_ctx->framerate = (AVRational){1, 25};

         codec_ctx->gop_size = 10;
         codec_ctx->max_b_frames = 1;
         codec_ctx->pix_fmt = AV_PIX_FMT_YUV420P;

         // the encoder must be opened before frames can be sent to it
         int ret = avcodec_open2(codec_ctx, mjpeg_codec, NULL);
         if (ret < 0) {
             print_averror(ret);
             printf("Failed to open mjpeg codec.\n");
             return FFMPEG_THUMBNAIL_ALLOC_CONTEXT_FAILED;
         }

         // the frame must carry its own geometry and pixel format
         frame->format = AV_PIX_FMT_YUV420P;
         frame->width = thumbnail_width;
         frame->height = thumbnail_height;

         ret = av_image_fill_arrays(frame->data, frame->linesize, source_buffer, AV_PIX_FMT_YUV420P, thumbnail_width, thumbnail_height, 32);
         if (ret < 0) {
             print_averror(ret);
             printf("Pixel format: yuv420p, width: %d, height: %d\n", thumbnail_width, thumbnail_height);
             return FFMPEG_THUMBNAIL_FILL_FRAME_DATA_FAILED;
         }

         ret = avcodec_send_frame(codec_ctx, frame);
         if (ret < 0) {
             print_averror(ret);
             printf("Failed to send frame to encoder.\n");
             return FFMPEG_THUMBNAIL_FILL_SEND_FRAME_FAILED;
         }

         ret = avcodec_receive_packet(codec_ctx, pkt);
         if (ret < 0) {
             print_averror(ret);
             printf("Failed to receive packet from encoder.\n");
             return FFMPEG_THUMBNAIL_FILL_SEND_FRAME_FAILED;
         }

         // store the thumbnail in output (the mode argument is required with O_CREAT)
         int fd = open(output_thumbnail_filename, O_CREAT | O_RDWR, 0644);
         write(fd, pkt->data, pkt->size);
         close(fd);

         // freeing allocated structs
         avcodec_free_context(&codec_ctx);
         av_frame_free(&frame);
         av_packet_free(&pkt);
         return FFMPEG_SUCCESS;
     }
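
    As for the fields themselves, a hedged summary based on MJPEG being an intra-only codec: for a single image, only the frame geometry, the pixel format and a (nominal) time_base matter; GOP and B-frame settings have no effect, and bit_rate does nothing without rate control. A minimal sketch of the context setup under those assumptions:

     /* Sketch: the fields that plausibly matter when encoding one YUV frame
      * as JPEG; assumes the same codec_ctx as above, values are illustrative. */
     codec_ctx->width     = thumbnail_width;
     codec_ctx->height    = thumbnail_height;
     codec_ctx->pix_fmt   = AV_PIX_FMT_YUVJ420P;  /* full-range YUV, accepted by the MJPEG encoder */
     codec_ctx->time_base = (AVRational){1, 25};  /* encoders require a valid time_base; nominal for a still */
     /* gop_size, max_b_frames and framerate can stay at their defaults:
      * MJPEG has no inter-frame coding, so they are ignored for one image. */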